An artificial intelligence engineer at Microsoft says that the company’s AI image generator, Copilot Designer, has been generating inappropriate images that violate the company’s policy.
According to Shane Jones — who has been testing Copilot Designer for vulnerabilities since its March 2023 launch — the AI image generator can be manipulated into creating violent and sexualized images of women, as well as depictions of drug use, teenagers with weapons, content relating to abortion, and graphic images of demons and monsters. The Microsoft engineer says these generated images violate Microsoft’s responsible AI principles.
Jones’s findings have prompted concerns over the lack of regulation of generative AI technologies, especially given that Copilot Designer’s Android app remains rated “E for Everyone,” suggesting suitability for users of all ages. The AI engineer says he began reporting his findings internally in December, but Microsoft was unwilling to heed his advice to withdraw the product from the market. Instead, the company referred him to OpenAI, whose technology powers Copilot Designer.
After hearing nothing back, Jones published an open letter on LinkedIn, pleading with the start-up to withdraw DALL-E 3, the latest version of its AI model. In addition, Jones wrote to US senators detailing his concerns about generative AI technology and met with staffers from the Senate Committee on Commerce, Science, and Transportation.
This past Wednesday, Jones sent letters to Federal Trade Commission (FTC) Chair Lina Khan and Microsoft’s board of directors, again urging the tech giant to pull Copilot Designer until better safeguards could be implemented. He also called on the company to add product disclosures and change the app’s rating on the Google Play Store to indicate that its content is for mature audiences only.