Title: Deepfake Scammers Face Prosecution, Says Justice Minister
Justice Minister Paul Goldsmith has stated that perpetrators of deepfake sex scams can be prosecuted under existing laws, emphasizing that the misuse of artificial intelligence (AI) is driving increasingly sophisticated and dangerous scams. According to cybersecurity agency Cert NZ, advances in AI technology enable scammers to replicate voices, mimic natural language, and even create realistic videos, leading to an increase in online fraud.
Addressing recent AI-driven scams that misused the likenesses of TVNZ presenters Wendy Petrie and Simon Dallow, Cert NZ’s incident response team leader Tom Roberts advised individuals to report such content immediately to the platform on which it appears. Roberts highlighted the importance of common sense when encountering suspicious advertisements, encouraging people to ask whether a celebrity would genuinely endorse the advertised product.
Roberts further emphasized that AI scams still rely on social triggers, exploiting factors such as urgency and the fear of missing out (Fomo). He urged the public to be skeptical of such offers and to verify their authenticity by checking the official page of the celebrity in question. Additionally, Cert NZ reported an increase in scam calls, including a recently detected fake Visa racket operating from Australian phone numbers.
In response to these evolving cybersecurity threats, Roberts advised individuals who suspect a call may be a scam to hang up and call the organization back on its official phone number. He stressed the importance of promptly reporting to Cert NZ and the relevant organization any incident in which personal details have been compromised, particularly bank account information.
Justice Minister Goldsmith highlighted the challenges posed by AI and deepfakes. He noted that technological advancements have enabled the use of photo or video content without the subject’s consent, making deepfakes particularly harmful, especially when they involve sexual content. Goldsmith clarified that current legislation, including the Crimes Act and the Harmful Digital Communications Act 2015, already covers offenses involving intimate visual recordings, meaning that posting deepfakes of a sexual nature without consent is an offense under existing law.
In conclusion, the misuse of AI technology presents significant challenges for combating scams and safeguarding individuals’ privacy. Existing laws provide a legal framework for prosecuting offenders involved in deepfake sex scams, but the growing threat of AI-related fraud underscores the need for continued vigilance and further measures.