OpenAI has announced that it will add watermarks to AI-generated images in an effort to combat deepfakes and strengthen authenticity. The company is adopting the Coalition for Content Provenance and Authenticity (C2PA) standard for images created with its DALL-E 3 image model. The embedded watermarks carry metadata that can verify an image's provenance and indicate whether it was generated with AI.
The watermarks will apply to images created through ChatGPT and through services and apps that rely on OpenAI's API. Generated images will carry a visible CR symbol in the top left corner, along with an invisible metadata component, indicating that the image was generated with AI. OpenAI has confirmed that the feature will roll out starting February 12 and will be enabled by default, with no option to turn it off or to omit the watermark and metadata.
By adopting C2PA, OpenAI joins other companies committed to transparency and to combating the misuse of AI-generated content. The system lets users identify which AI tool created an image and provides details about its origin. OpenAI says the new watermarks will not degrade image generation performance or add noticeable latency.
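For context on how this identification works: C2PA does not hide a pattern in the pixels; it attaches a signed manifest in the file's metadata, which for JPEGs is carried in APP11 segments as a JUMBF container. As a rough illustration only (not OpenAI's implementation), the sketch below scans a JPEG's segment list for an APP11 segment, which signals that provenance metadata is present. It does not validate the manifest's signature; real verification should use a C2PA-aware tool such as the open-source c2patool.

```python
import struct

def has_c2pa_segment(jpeg_bytes: bytes) -> bool:
    """Return True if the JPEG contains an APP11 (0xFFEB) segment,
    the marker C2PA uses to embed its JUMBF-wrapped manifest.
    This only detects presence; it does not verify the signature."""
    if jpeg_bytes[:2] != b"\xff\xd8":  # must start with SOI marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:      # lost sync with marker stream
            break
        marker = jpeg_bytes[i + 1]
        if marker in (0xD9, 0xDA):     # EOI or SOS: no more headers
            break
        # segment length is big-endian and includes its own 2 bytes
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        if marker == 0xEB:             # APP11: JUMBF/C2PA container
            return True
        i += 2 + length                # skip marker + segment body
    return False
```

A file that passes this check merely claims provenance; a stripped or re-encoded copy would fail it, which is one known limitation of metadata-based approaches.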
Implementing these watermarks is a significant step toward making AI-generated images more accountable and trustworthy. With deepfakes becoming more prevalent, mechanisms for verifying the authenticity of visuals have become essential, and OpenAI's adoption of C2PA aligns with the broader industry effort to address these concerns.
Overall, the move demonstrates OpenAI's commitment to responsible AI use and to protecting individuals and brands from the harms of deepfakes, enhancing transparency and preserving the integrity of visual content in the age of AI.