OpenAI Takes Steps to Combat Deepfakes with Watermarked AI-Generated Images
To address growing concerns about AI-generated deepfakes and misinformation, OpenAI has announced a new transparency measure: images created with ChatGPT or the DALL-E 3 API will now carry watermarks indicating that they were generated by artificial intelligence.
The watermark includes metadata recording when the image was created and displays the logo of the Coalition for Content Provenance and Authenticity (C2PA). OpenAI acknowledges, however, that this is not a foolproof solution to the provenance problem: the metadata can easily be removed, whether intentionally or accidentally. Most social media platforms strip metadata from uploaded images, and even taking a screenshot discards it. An image lacking this metadata therefore may or may not have been generated with ChatGPT or OpenAI's API.
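To illustrate why embedded metadata is so fragile, here is a minimal, hypothetical sketch using only Python's standard library. It builds a tiny PNG containing a made-up `tEXt` provenance chunk (real C2PA manifests live in format-specific containers, not a plain text chunk like this), then shows that simply re-writing the file while keeping only the chunks needed to render the pixels silently drops the provenance data:

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"
# Chunks required to render the image; everything else is ancillary.
CRITICAL = {b"IHDR", b"PLTE", b"IDAT", b"IEND"}

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def make_png_with_metadata() -> bytes:
    """Build a 1x1 grayscale PNG carrying a hypothetical provenance tEXt chunk."""
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)  # 1x1, 8-bit grayscale
    idat = zlib.compress(b"\x00\x00")                    # filter byte + one pixel
    text = b"Provenance\x00generated-by: example-model"  # invented metadata
    return (PNG_SIG + chunk(b"IHDR", ihdr) + chunk(b"tEXt", text)
            + chunk(b"IDAT", idat) + chunk(b"IEND", b""))

def iter_chunks(png: bytes):
    """Yield (type, data) pairs for each chunk in a PNG byte string."""
    pos = len(PNG_SIG)
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        yield ctype, png[pos + 8:pos + 8 + length]
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC

def strip_metadata(png: bytes) -> bytes:
    """Re-write the PNG keeping only critical chunks; metadata is discarded."""
    out = PNG_SIG
    for ctype, data in iter_chunks(png):
        if ctype in CRITICAL:
            out += chunk(ctype, data)
    return out
```

Many image pipelines (re-encoders, thumbnailers, upload processors) perform an equivalent re-write as a side effect, which is why provenance metadata so rarely survives a trip through a social platform.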
Note that this change applies only to AI-generated images; AI-generated voice and text are unaffected. The added metadata may slightly increase file sizes, but it does not degrade image quality.
OpenAI plans to roll out this change to mobile users on February 12, 2024.
Generative AI has advanced rapidly, allowing users to produce lifelike images from nothing more than a text prompt. Tools like Image Creator from Designer (formerly Bing Image Creator) and ChatGPT have steadily improved, pushing the boundaries of image generation.
However, such tools have also been misused to create offensive and explicit content. Microsoft's Image Creator from Designer, which incorporates OpenAI's DALL-E 3 technology, faced backlash after it was used to generate explicit deepfake images of pop star Taylor Swift. Microsoft responded with stricter censorship measures, though some users felt these limited the tool's capabilities.
OpenAI's watermarks are meant to make clear when an image has been generated by AI. The measure is imperfect, but it is a concrete step toward establishing provenance for AI-generated content.

As deepfakes and misinformation continue to challenge the online landscape, effective countermeasures are crucial. Watermarking AI-generated images is one such effort to promote transparency and deter the use of AI to create deceptive content.