Meta to Label AI-Generated Media, Crack Down on Deepfakes

Meta announced that it will start labeling AI-generated media in May to address concerns over deepfakes on its platforms. The social media giant stated that it will not remove manipulated images and audio but will label them to provide transparency and context without infringing on freedom of speech.

The move comes after Meta's Oversight Board criticized the company's approach to manipulated media. The board highlighted the need to address the growing threat of AI-generated deepfakes spreading disinformation, especially during crucial election periods.

Meta’s new “Made with AI” labels will identify content created or altered with AI technology, including videos, audio, and images. Additionally, a more prominent label will be applied to content deemed highly misleading to the public.

This initiative aligns with an agreement reached in February among major tech companies to combat manipulated content designed to deceive voters. A common watermarking standard will help identify AI-generated content, although material created with some open-source software may still lack such watermarks and be harder to detect.

The rollout of AI-generated content labeling will begin in May 2024, and Meta will stop removing manipulated media solely under its old policy in July. AI-manipulated content will be removed only if it violates other platform rules, such as those against hate speech or voter interference.

Recent incidents involving convincing AI deepfakes, like the manipulated video of US President Joe Biden, have raised concerns about the widespread use of this technology for deceptive purposes. The oversight board’s recommendations, including increased transparency and context for manipulated media, aim to address these growing challenges.


In conclusion, Meta’s decision to label AI-generated content is a step towards combating the spread of deepfakes and disinformation on social media platforms. By providing greater transparency and context, the company aims to protect users from misleading content while upholding principles of free speech and expression.
