YouTube’s Latest Update: AI-Generated Content Disclosures Required
Video-sharing platform YouTube is tightening its rules around AI-generated content by requiring users to disclose whether their videos were created or modified with generative AI technology. The new rule applies specifically to videos featuring realistic-looking content; exceptions are made for obviously unrealistic or animated material, as well as for videos where AI was used solely for production assistance.
According to YouTube, these disclosures will mostly appear in the description box below the videos. However, for content touching on sensitive topics like health, news, politics, and finance, a more prominent warning will be displayed directly on the video itself.
The platform acknowledges that generative AI has revolutionized the creative process for many creators, but it also recognizes the growing need for transparency about the authenticity of the content viewers consume. Failure to disclose AI-generated alterations or content may result in penalties, including content removal or suspension from the YouTube Partner Program. In some cases, YouTube may add a label to a video itself, even when the creator has not disclosed anything, particularly if the content has the potential to mislead viewers.
These new labels for AI-generated content will be rolled out gradually over the coming weeks, starting with the mobile app and eventually expanding to desktop and TV. Additionally, YouTube is strengthening its protections against mimicry, such as AI-generated content that imitates an identifiable person's face or voice. Music partners will now be able to request the removal of AI-generated music that mimics an artist's voice.
YouTube emphasizes that this process will be continually refined with input from creators as it strives to strike a balance between AI-driven creativity and transparency. The platform hopes that increased transparency will foster a deeper appreciation for the ways in which AI fuels human creativity.