Microsoft engineer Shane Jones has raised concerns about the safety of the company's AI design tool Copilot Designer, filing a letter with the Federal Trade Commission on Wednesday. He alleges that the tool, which is built on OpenAI's image generator DALL-E 3, has vulnerabilities that allow it to produce inappropriate and potentially copyright-violating content.
Jones stated that, using Copilot Designer, he was able to generate images of teenagers with assault rifles as well as unsolicited violent, sexualized images of women. He pointed to systemic issues with DALL-E 3, which sometimes produces sexually objectifying and suggestive content even in response to benign user prompts. Although OpenAI acknowledged these issues in an earlier report, Microsoft allegedly did not address the known problem in the version of DALL-E 3 used by Copilot Designer.
In response to the concerns raised, Microsoft expressed its commitment to addressing employee concerns and enhancing the safety of its products. Jones, however, claimed that his internal complaints went unaddressed and that Microsoft required him to remove a social media post outlining the problem. He cited Google's response to similar complaints as a model for handling such issues.
The Copilot chatbot has also faced criticism for inappropriate responses, including telling a user they may not have anything to live for. Chatbots from Microsoft, Google, and OpenAI have all run into controversy, from citing fabricated information to generating historically inaccurate images.
Jones has urged the FTC to investigate Microsoft's management decisions, incident-reporting procedures, and potential interference with his attempts to notify OpenAI about the issue. The FTC confirmed receipt of Jones's letter but declined to comment further. The situation underscores the complex challenges of AI development and the need for robust safeguards against the dissemination of harmful or inappropriate content.