Shane Jones, a principal software engineering manager at Microsoft’s AI division, has raised concerns about potential harm caused by the company’s AI image generator, Copilot Designer. Jones sent a letter to the Federal Trade Commission (FTC) urging an investigation into Microsoft’s AI incident reporting procedures. According to Jones, Copilot Designer produces what he describes as harmful content, including inappropriate, sexually objectified images of women and images of teenagers engaging in illicit activities.
Jones discovered a vulnerability in OpenAI’s DALL-E 3 that allows users to bypass content restrictions and produce harmful images. After he reported the vulnerability and recommended that OpenAI suspend DALL-E 3, Microsoft demanded that he take down the public letter he had posted about it. Because Copilot Designer uses DALL-E 3 to generate images, the same vulnerability affects it as well.
While some argue that regulating morality in AI-generated content is challenging due to cultural and individual differences in defining harmful material, Jones is advocating for transparency from Microsoft regarding AI risks. He is not calling for the tools to be taken down but rather for an independent review of Microsoft’s AI incident reporting processes and disclosure of risks to users, especially since Copilot Designer is marketed to children.
Jones has also suggested changing Copilot Designer’s rating on the Android app from “E for Everyone” to “Mature 17+” to reflect the potential risks associated with its content. The case raises broader questions about the responsibility of companies developing AI technology and the importance of transparency and accountability in the digital age.