Deepfake Threat Grows: Urgent Need for Regulations and Personal Rights Protection
The prevalence of deepfakes, convincing AI-generated videos or audio recordings, is increasing at an alarming rate. This surge can be attributed to the growing accessibility of deepfake technology and its application in various domains, including entertainment, political manipulation, and fraudulent activities. As a result, experts are calling for strengthened data protection and privacy laws to limit the collection and use of personal data for deepfake creation without explicit consent.
Deepfakes pose a significant threat not only to individuals but also to organizations. They can be used in phishing attacks to convince employees to compromise security measures, putting sensitive data at risk. Additionally, deepfake technology can be weaponized to create deceptive content that threatens national security. It can manipulate public sentiment, create forged videos of political leaders, and potentially incite chaos or conflicts.
To address the growing risks associated with deepfakes, organizations are advised to invest in cybersecurity measures, employee training, and awareness programs. Implementing monitoring and incident response plans is crucial to mitigate potential security breaches caused by deepfakes.
Legal measures are also being taken to protect individuals from the misuse of their reputation and goodwill through deepfake technology. The case of actor Anil Kapoor seeking protection of his personality rights sets a precedent for regulation in this space. Freedom of speech and expression cannot be exercised at the cost of others' reputations, and personal lives should not be encroached upon. Efforts are underway to crack down on fake news, and similar treatment is expected to be extended to AI deepfakes and memes.
The recent deepfake video of actor Rashmika Mandanna going viral on social media highlights the urgent need for a legal and regulatory framework to address deepfakes in India. Preserving personality rights and curbing the misuse of AI tools to portray public figures in fictional scenarios is of utmost importance.
This emerging scenario may lead to the development of specific laws and regulations governing AI-generated content and memes, potentially impacting online speech and creative expression. Balancing the right to freedom of expression with the protection of individual rights is crucial.
In India, those affected by AI-generated deepfakes are encouraged to file first information reports (FIRs) at their nearest police stations to avail the remedies provided under the Information Technology (IT) Rules, 2021 and the Indian Penal Code (IPC). Online platforms have a legal obligation to prevent the spread of misinformation and must remove such content within 36 hours of receiving a report.
As technology advances, it is essential to have robust regulations and personal rights protection in place to combat the growing threat of deepfakes. A comprehensive legal framework will help safeguard privacy, individual rights, and national security in the face of AI-generated content and its potential misuse.