Meta announced that it will begin labeling AI-generated media in May to address concerns over deepfakes on its platforms. The social media giant said it will generally not remove manipulated images, audio, and video, but will instead label them to provide transparency and context without infringing on freedom of speech.
The move follows criticism from Meta's Oversight Board, which faulted the company's approach to manipulated media and highlighted the growing threat of AI-generated deepfakes spreading disinformation, particularly during election periods.
Meta's new "Made with AI" labels will identify content created or altered with AI, including videos, audio, and images. A more prominent label will be applied to content that poses a particularly high risk of misleading the public.
This initiative builds on an agreement reached in February among major tech companies to combat manipulated content designed to deceive voters. Common technical standards, such as invisible watermarks and embedded metadata, will help identify AI-generated content, although material produced with some open-source tools may still go undetected.
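As a rough sketch of how a metadata-based check can work, the snippet below scans an image file's raw bytes for the IPTC DigitalSourceType value "trainedAlgorithmicMedia", which a number of AI generators embed in XMP metadata. The marker choice and the simple byte scan are illustrative assumptions about one common signal, not a description of Meta's actual detection pipeline; invisible watermarks require separate, provider-specific tooling, and metadata stripped on upload defeats this check entirely.

```python
# Minimal sketch: look for the IPTC DigitalSourceType marker that
# many AI image generators write into a file's XMP metadata.
# This is one heuristic signal, not Meta's detection pipeline.

from pathlib import Path

# IPTC term identifying fully AI-generated media in XMP packets.
AI_MARKER = b"trainedAlgorithmicMedia"


def has_ai_metadata(image_path: str) -> bool:
    """Return True if the file's raw bytes contain the IPTC
    'trainedAlgorithmicMedia' DigitalSourceType marker."""
    data = Path(image_path).read_bytes()
    return AI_MARKER in data


if __name__ == "__main__":
    import sys

    for path in sys.argv[1:]:
        label = "AI marker found" if has_ai_metadata(path) else "no AI marker"
        print(f"{path}: {label}")
```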
Labeling of AI-generated content will roll out in May 2024, and removal of manipulated media based solely on the old policy will end in July. From then on, AI-manipulated content will be removed only if it violates other platform rules, such as those against hate speech or voter interference.
Recent incidents involving convincingly manipulated media, such as the edited video of US President Joe Biden that prompted the board's review, have raised concerns about the widespread use of these techniques for deception. The Oversight Board's recommendations, including greater transparency and context for manipulated media, aim to address these growing challenges.
Meta's decision to label AI-generated content is a step toward curbing the spread of deepfakes and disinformation on social media. By providing greater transparency and context, the company aims to protect users from misleading content while upholding free speech and expression.