Facebook and Instagram, under the umbrella of Meta, have announced new measures to combat the spread of misinformation by labeling AI-generated images. The tech giant, which already labels content created by its own Imagine AI engine, plans to extend this labeling to images produced with third-party tools such as those from OpenAI and Google.
Meta’s labeling system will include visible watermarks on AI-generated images, similar to the watermarking already applied to content produced by Imagine AI. The design of these labels has not been finalized, but they may feature the words “AI Info” alongside generated content. Meta is also exploring ways to identify the invisible markers embedded in images generated by third-party AI systems.
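Meta has not published its detection pipeline, but one widely used invisible marker is the IPTC “Digital Source Type” value trainedAlgorithmicMedia, which some generators embed in an image’s XMP/C2PA metadata. The sketch below is only an assumed illustration of that kind of first-pass check; the function name, the crude byte scan, and the example.jpg filename are placeholders rather than anything Meta has described.

```python
# Illustrative sketch only: a naive first-pass check for the IPTC
# "trainedAlgorithmicMedia" marker that some AI generators embed in
# an image's XMP/C2PA metadata. A real system would parse the metadata
# properly and verify cryptographic provenance claims.

from pathlib import Path

# IPTC term signalling content wholly generated by an AI model.
AI_SOURCE_TYPE = b"trainedAlgorithmicMedia"

def has_ai_metadata_marker(image_path: str) -> bool:
    """Return True if the file's embedded metadata appears to contain the
    IPTC 'trainedAlgorithmicMedia' digital-source-type marker.

    This byte scan stands in for a real XMP/C2PA parser. Metadata can also
    be stripped from a file, which is why pixel-level watermarks matter.
    """
    data = Path(image_path).read_bytes()
    return AI_SOURCE_TYPE in data

if __name__ == "__main__":
    print(has_ai_metadata_marker("example.jpg"))  # hypothetical file
```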
While the focus has been on labeling images, concerns remain about detecting AI-generated audio and video. Meta acknowledges that current technology cannot yet detect such content as reliably as images, though the industry is working to develop these capabilities. In the meantime, Meta plans to rely on user disclosure, requiring people to state whether their video or audio files were produced or edited with AI. Failure to do so will result in penalties, and if a piece of media is particularly realistic, a more prominent label with additional details will be attached.
Meta is also investing in improving its own first-party tools. The company’s AI research lab, FAIR, is developing a watermarking technology called Stable Signature, which embeds invisible markers directly into the image-generation process so they cannot be stripped the way metadata can. Additionally, Meta is training several LLMs (Large Language Models) on its Community Standards to help identify policy violations.
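Stable Signature itself fine-tunes the decoder of a generative model so that every image it produces carries a learned, robust watermark in the pixels, which is what makes the marker hard to remove. As a rough, assumed illustration of the underlying idea of hiding and later recovering a payload in pixel values rather than in strippable metadata, here is a toy least-significant-bit scheme; it is deliberately simplistic and is not Meta’s method.

```python
# Toy illustration only: hides a short bit string in the least-significant
# bits of an image's pixels and recovers it later. Stable Signature is far
# more robust because the watermark is learned into the generator itself.

import numpy as np

def embed_bits(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit string in the least-significant bits of the first len(bits) pixels."""
    flat = pixels.flatten().copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_bits(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the hidden bit string from the least-significant bits."""
    return pixels.flatten()[:n_bits] & 1

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in image
    payload = rng.integers(0, 2, size=48, dtype=np.uint8)           # 48-bit watermark
    marked = embed_bits(image, payload)
    assert np.array_equal(extract_bits(marked, payload.size), payload)
    print("watermark recovered")
```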
The rollout of these labels across Meta’s platforms is expected in the coming months, with a particular focus on 2024, a major election year in many countries. Meta aims to minimize the spread of misinformation across its platforms during this critical period.
Though specific penalties for failing to disclose AI-generated media have not been announced, Meta has emphasized its intention to enforce consequences. It also remains unclear whether images from third-party sources will carry visible watermarks, as the exact design and implementation are still being finalized.
In summary, Meta’s efforts to label and disclose AI-generated content on Facebook, Instagram, and Threads aim to uphold online transparency and combat the spread of misinformation. By introducing visible labels and working to identify invisible markers, the company seeks to promote accountability among users and improve detection capabilities. The development of Stable Signature and the training of LLMs demonstrate Meta’s commitment to enhancing its first-party tools. As the 2024 election year approaches, implementing these measures becomes increasingly important to safeguard the integrity of online information.