Artificial intelligence experts and industry executives, led by AI pioneer Yoshua Bengio, have signed an open letter calling for tighter regulation of deepfakes. The group highlights the risks posed by AI-generated content, such as fraud, political disinformation, and sexual imagery. As AI technology advances, deepfakes have become increasingly difficult to distinguish from genuine human-created content, raising concerns about their impact on society.
The letter, titled "Disrupting the Deepfake Supply Chain," outlines recommendations for regulating deepfakes. These include criminalizing deepfake child pornography, imposing penalties on individuals who create or spread harmful deepfakes, and requiring AI companies to prevent their products from generating harmful content.
More than 400 people from fields including academia, entertainment, and politics have already signed the letter. Notable signatories include Harvard psychology professor Steven Pinker, former Estonian presidents, and researchers from Google DeepMind and OpenAI.
Regulating AI systems to prevent harm has been a priority for regulators since OpenAI unveiled ChatGPT in late 2022. The chatbot's ability to mimic human conversation drew widespread attention and heightened concerns about the risks of advanced AI models.
Prominent figures such as Elon Musk have previously called for a pause in the development of powerful AI systems, emphasizing the need for responsible use. The push for greater regulation of deepfake creation reflects a broader effort to address the risks posed by rapid advances in AI.
The call for tighter regulation of deepfakes underscores the importance of ethical AI development and responsible use of the technology to safeguard society from potential harm.