Delhi Police Requests Meta’s Help to Identify Account Sharing Rashmika Mandanna Deep Fake Video


The Delhi Police has reached out to Meta, the parent company of Facebook, seeking assistance in identifying the account responsible for sharing a deep fake video featuring popular actress Rashmika Mandanna. The move comes after the police registered a First Information Report (FIR) related to the incident.

In their communication to Meta, the Delhi Police has requested the URL of the account that shared the fabricated video on social media. They have also asked for information about the individuals who further spread the fake video across other platforms.

An officer involved in the investigation stated, “We have written to Meta to access the URL ID of the account from which the video was generated.” The FIR has been filed under sections 465 and 469 of the Indian Penal Code, which pertain to forgery, and sections 66C and 66E of the Information Technology Act, which address cybercrime offenses. The Special Cell’s Intelligence Fusion and Strategic Operations (IFSO) Unit of the Delhi Police is actively pursuing the case with a dedicated team of officers.

The Delhi Commission for Women has also taken cognizance of the video, issuing a notice to the city police and urging appropriate action against the perpetrators. The deep fake video featuring Rashmika Mandanna, suspected to have been created using Artificial Intelligence, gained significant attention on social media last week. The original video belonged to a British-Indian influencer, whose face was digitally replaced with Mandanna’s.

Efforts are underway to trace those responsible for the creation and dissemination of the video. With the assistance of Meta, the Delhi Police hopes to swiftly crack the case and bring the culprits to justice. This incident serves as a stark reminder of the challenges posed by deep fake technology and the need for stricter measures against the misuse of AI in manipulating digital content.


In a world where fake news and misinformation spread rapidly, incidents like these highlight the importance of vigilance and robust mechanisms to combat the proliferation of misleading content. The collaboration between law enforcement agencies and tech giants is crucial in tackling such instances and ensuring the safety and reputation of individuals targeted by deep fake manipulation.

As investigations progress, it is expected that more details will emerge, shedding light on the motives behind the creation and circulation of the deep fake video. The case of Rashmika Mandanna’s deep fake serves as an unfortunate reminder of the potential harm that can be inflicted through the misuse of technology and the importance of effective legal frameworks to deter such acts in the future.

The Delhi Police’s proactive approach in seeking assistance from Meta demonstrates their commitment to presenting a strong case against those responsible for fabricating and circulating the video. Public interest remains high, with citizens eagerly awaiting the resolution of this disturbing incident.

Frequently Asked Questions (FAQs) Related to the Above News

What is the reason behind Delhi Police's request for Meta's help?

The Delhi Police has reached out to Meta, the parent company of Facebook, to seek assistance in identifying the account responsible for sharing a deep fake video featuring actress Rashmika Mandanna.

What law has the FIR been filed under?

The FIR has been filed under sections 465 and 469 of the Indian Penal Code, which pertain to forgery, and sections 66C and 66E of the Information Technology Act, addressing cybercrime offenses.

Why is the Delhi Commission for Women involved in this case?

The Delhi Commission for Women has issued a notice to the city police, urging them to take appropriate action against the perpetrators of the deep fake video. They are advocating for justice to be served in this case.

How was the deep fake video created?

The deep fake video is suspected to have been created using Artificial Intelligence. The original video belonged to a British-Indian influencer, whose face was digitally replaced with Rashmika Mandanna's.

What are the challenges presented by deep fake technology?

Deep fake technology poses challenges in terms of spreading misinformation and manipulating digital content. It raises concerns about the potential harm that can be inflicted on individuals targeted by these manipulations.

What is the significance of collaboration between law enforcement agencies and tech giants in cases like these?

Collaboration between law enforcement agencies and tech giants is crucial in addressing incidents of deep fake manipulation. It helps in tracing the responsible individuals and ensuring the safety and reputation of those targeted by such acts.

What importance does this case highlight regarding the need for robust mechanisms against misleading content?

This case highlights the importance of vigilance and robust mechanisms to combat the spread of misleading content. It emphasizes the need for stricter measures to prevent the misuse of AI in manipulating digital content.

What can be expected as investigations progress?

As investigations progress, more details may emerge regarding the motives behind the creation and circulation of the deep fake video. The resolution of this case is eagerly anticipated by the public.

How does this incident reinforce the necessity for effective legal frameworks?

This incident reinforces the importance of effective legal frameworks to deter and punish acts of deep fake manipulation. It underlines the potential harm that can be inflicted through the misuse of technology and the need for accountability.

How does the Delhi Police's request for Meta's help demonstrate their approach to this case?

The Delhi Police's proactive approach in seeking assistance from Meta demonstrates their commitment to presenting a strong case against those responsible for fabricating and circulating the deep fake video.

