OpenAI, in collaboration with Meta, has announced plans to add watermarks to AI-generated images in an effort to address issues of provenance and authenticity. The watermarks will be applied to images generated by OpenAI’s DALL-E 3 model and the ChatGPT website, starting today. Mobile users will see the watermarks from February 12th onwards.
The watermarks will take the form of a visible CR symbol in the top-left corner of generated images, accompanied by invisible provenance data embedded in the image's metadata. This combined approach aims to bring greater transparency and traceability to AI-generated content.
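The invisible component is provenance metadata serialized into the image file itself. As a rough illustration, the sketch below scans a file's raw bytes for a `c2pa` label, which C2PA-style manifests use in their metadata containers. This is a hypothetical heuristic for detecting whether such metadata might be present, not the official verification workflow, which requires a C2PA-aware tool that parses and cryptographically validates the manifest.

```python
from pathlib import Path

def may_contain_c2pa(path: str) -> bool:
    """Heuristically check whether a file might carry a C2PA manifest.

    This simply looks for the literal bytes b"c2pa" anywhere in the file.
    It does NOT parse the metadata structure or validate any signatures,
    so a positive result is only a hint, and the marker is trivially
    removed by re-encoding or screenshotting the image.
    """
    data = Path(path).read_bytes()
    return b"c2pa" in data
```

A screenshot of a watermarked image would fail this check, since screenshots produce a fresh file with none of the original metadata, which is exactly the limitation OpenAI acknowledges.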
OpenAI states that adding the watermark metadata will not noticeably affect latency or image quality, though file sizes may increase by 3% to 32%. While the watermarks and metadata act as indicators of authenticity, OpenAI acknowledges that they are not foolproof and can be removed, whether intentionally or accidentally.
The metadata can be stripped automatically when images are uploaded to social media platforms, and it is also lost when someone simply takes a screenshot. Likewise, users can crop or edit out the visible watermark themselves. OpenAI emphasizes that the responsibility ultimately lies with individual users to exercise sound judgment when encountering AI-generated content on social media.
The addition of watermarks is part of OpenAI's participation in the Coalition for Content Provenance and Authenticity (C2PA) and its commitment to addressing issues such as deepfake videos and images. While not an infallible solution, the aim is to give users and viewers more information about the origin and authenticity of AI-generated content.
Overall, these efforts by OpenAI and Meta to label and watermark AI-generated images signify an ongoing commitment to promoting transparency, accountability, and trust in the digital space. As advancements in AI continue, it is crucial to develop measures that enable users to make informed decisions and distinguish between real and manipulated content.