Meta, formerly known as Facebook, has announced plans to label AI-generated images created with tools from companies such as OpenAI, Google, Adobe, Microsoft, Shutterstock, and Midjourney. The move aims to address the potential harms of generative AI technologies, which can produce fake but realistic content. By establishing shared standards, tech companies hope to curb the spread of misleading or harmful material.
The labeling system for AI-generated images follows a template similar to the one these companies have used over the past decade to remove banned content, such as depictions of mass violence and child exploitation, from their platforms. Meta’s Vice President of Global Affairs, Nick Clegg, expressed confidence in the reliability of labeling AI-generated images but acknowledged that labeling audio and video content remains more complex and is still being developed.
To encourage the industry as a whole to adopt these labeling practices, Meta plans to lead by example. In the interim, Meta will require individuals to label their own modified audio and video content. Penalties will be imposed on those who fail to comply, although the specific repercussions were not disclosed.
Clegg noted that labeling written text generated by AI tools like ChatGPT currently lacks a viable mechanism. It remains uncertain whether Meta will apply labels to generative AI content shared on its encrypted messaging service, WhatsApp.
Meta’s independent oversight board recently criticized the company’s policy on misleadingly doctored videos, deeming it too narrow. The board recommended labeling rather than removing such content. Clegg agrees with the board’s perspective, acknowledging that Meta’s existing policy is inadequate in an environment increasingly filled with synthetic and hybrid content.
Clegg cited the recent labeling partnership as evidence that Meta is aligning with the board’s proposed direction. Meta intends to prioritize the labeling of AI-generated content to ensure transparency and help users distinguish between real and artificially created materials.
Meta’s labeling initiative underscores the commitment of major tech companies to addressing the challenges posed by generative AI. By adopting standardized practices, these companies aim to protect users from the harms of misinformation and misleading content.