TikTok has announced that it will use new technology to identify and label AI-generated images and videos uploaded to the platform. The move responds to concerns raised by researchers that AI-generated content could be used to spread misinformation, especially in the context of upcoming U.S. elections.
The technology, known as Content Credentials, attaches provenance metadata to a file, functioning as a kind of digital watermark that records how an image was created and edited. Spearheaded by Adobe, Content Credentials is now being adopted by TikTok and other companies, including OpenAI, the creator of ChatGPT. YouTube and Meta Platforms have also said they intend to use Content Credentials to label AI-generated content.
The system works only if both the maker of the AI tool that generates the content and the platform that distributes it adopt the industry standard. For example, if a user generates an image with OpenAI's DALL-E tool and uploads it to TikTok, the image will be automatically labeled as AI-generated.
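To make the handoff concrete, here is a minimal sketch of the two-party flow described above: a generator tool attaches a provenance record to its output, and a platform inspects that record at upload time. This is an illustrative simulation only; real Content Credentials follow the C2PA standard with cryptographic signing, and every name and structure below is hypothetical.

```python
import hashlib

def generate_image_with_credential(tool_name: str, pixels: bytes) -> dict:
    """An AI tool attaches a provenance record to the content it creates."""
    credential = {
        "generator": tool_name,
        "ai_generated": True,
        # Bind the record to the content by hashing the image bytes,
        # so edits to the pixels invalidate the credential.
        "content_hash": hashlib.sha256(pixels).hexdigest(),
    }
    return {"pixels": pixels, "credential": credential}

def label_on_upload(upload: dict) -> str:
    """A platform inspects the credential and decides how to label the upload."""
    cred = upload.get("credential")
    if cred is None:
        return "unlabeled"  # no provenance data: nothing to act on
    # Check that the credential still matches the uploaded bytes.
    if cred["content_hash"] != hashlib.sha256(upload["pixels"]).hexdigest():
        return "credential mismatch"
    return "AI-generated" if cred.get("ai_generated") else "unlabeled"

image = generate_image_with_credential("example-image-tool", b"\x89PNG...")
print(label_on_upload(image))  # -> AI-generated
```

The sketch also shows why both parties must opt in: if the tool never attaches a credential, or the platform never reads one, the upload simply passes through unlabeled.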
TikTok, which boasts 170 million users in the United States, has already been proactive in labeling AI-generated content made within the app itself. However, the platform will now extend this labeling to content generated externally. Adam Presser, head of operations and trust and safety at TikTok, emphasized that they have policies in place to remove any realistic AI-generated content that is not properly labeled.
With concerns about AI-generated content on the rise, platforms like TikTok are taking steps to ensure transparency and accountability. By adopting technologies like Content Credentials, these platforms aim to empower users to make informed decisions about the content they consume while also safeguarding against potential misuse of AI-generated material.