Meta, formerly known as Facebook, will soon roll out new measures to combat manipulated media on its platforms. Starting in May, the company will label a broader range of content as "Made with AI" on Facebook and Instagram. The move responds to feedback from Meta's independent Oversight Board, which recommended updates to the existing policy.
Labels will be applied based on three signals: self-disclosure by users when they post content, advice from fact-checkers, and Meta's own detection of indicators of AI-generated content. The initiative aims to address rising concerns about deepfakes and other manipulated media that could mislead viewers, particularly during an election year.
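To make those three signals concrete, here is a minimal, purely illustrative Python sketch of how a platform could combine self-disclosure, fact-checker input, and automated detection into a single labeling decision; the class, function, and field names are hypothetical and do not describe Meta's actual implementation.

```python
# Purely illustrative sketch: combining the three signals described above
# into a labeling decision. All names are hypothetical and do not reflect
# Meta's actual systems or policy logic.
from dataclasses import dataclass
from typing import Optional


@dataclass
class PostSignals:
    user_disclosed_ai: bool       # poster self-disclosed AI use at upload time
    fact_checker_flagged: bool    # a fact-checking partner flagged the media
    ai_indicators_detected: bool  # platform detected technical indicators of AI generation


def ai_label(signals: PostSignals) -> Optional[str]:
    """Return the "Made with AI" label if any of the three signals is present."""
    if signals.user_disclosed_ai or signals.fact_checker_flagged or signals.ai_indicators_detected:
        return "Made with AI"
    return None


# Example: a self-disclosed post receives the label; an unflagged one does not.
print(ai_label(PostSignals(True, False, False)))   # Made with AI
print(ai_label(PostSignals(False, False, False)))  # None
```

In a real system the detection signal would come from richer checks, such as metadata or invisible watermarks embedded by generation tools, rather than a precomputed boolean, but the decision logic reduces to the same "any signal applies the label" rule.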
Experts have warned that AI tools can be used maliciously to create deepfakes that manipulate and deceive voters. Labeling AI-generated content is a step toward combating this problem and increasing transparency on Meta's platforms, and it reflects the need to stay ahead of misleading information as the underlying technology keeps evolving.
Indian Prime Minister Narendra Modi recently highlighted the challenges posed by AI and emphasized the importance of marking AI-generated content with watermarks to prevent misinformation. Bill Gates, co-founder of Microsoft, echoed these sentiments, acknowledging the opportunities and challenges presented by AI technology. As AI continues to advance, it is crucial for platforms like Meta to stay vigilant and proactive in combating the spread of manipulated media.
Overall, Meta's decision to label AI-generated content is a step in the right direction toward a more transparent and trustworthy online environment. By implementing these measures, Meta aims to protect users from misleading information and uphold the integrity of its platforms.