The Beatles recently delighted their global fanbase by using artificial intelligence (AI) to release a new song, combining parts of an old recording while enhancing its audio quality. Yet alongside the joy surrounding this achievement lies a darker side of the same technology: deepfakes.
Deepfakes are fraudulent voices and images created with AI. Although deepfake tools are not yet mature or widespread, their potential for use in fraud is high, and the technology is advancing rapidly.
OpenAI recently showcased an Audio API model capable of generating remarkably human-like speech from text input; it is currently among the closest software has come to real human speech. While the model in its current form cannot be used to create deepfake voices, it is a testament to how quickly voice-generation technology is progressing.
At present, no tool can produce a high-quality deepfake voice that is indistinguishable from real human speech. In recent months, however, more voice-generation tools have appeared, and they are becoming increasingly user-friendly. In the near future, we can expect models that combine that simplicity with convincing results.
Although AI-assisted fraud is still uncommon, successful cases have already occurred. In mid-October 2023, for example, venture capitalist Tim Draper warned his Twitter followers that scammers were using an AI-generated copy of his voice to ask for money, a sign of how sophisticated the technology has become.
Because reported cases of malicious voice deepfakes remain rare, society does not yet treat them as a significant cyber threat, and the development of protection technologies has been correspondingly slow.
For now, the best defense is to listen carefully to a caller's voice. If the audio quality is poor, noisy, or robotic, do not trust the information being conveyed. Another effective tactic is to ask an unexpected question that would test the caller's authenticity: a fraudster relying on a voice model, for example, will struggle with a question about a favorite color, because answers to such questions are not prepared in advance for fraud attempts. Even if the attacker manually plays back a prerecorded answer, the delay in the response will expose the ruse.
Installing a reliable, comprehensive security solution is another smart step. While such solutions cannot detect deepfake voices with 100% accuracy, they help users avoid suspicious websites, block unauthorized payments, and protect against malware by safeguarding web browsers and scanning files.
Commenting on the issue, Dmitry Anikin, Senior Data Scientist at Kaspersky, stresses the importance of not overstating the threat or imagining deepfake voices where none exist. He notes that currently available technology is unlikely to produce a voice so realistic that a human cannot recognize it as artificial. Still, he advises staying alert to potential threats and preparing for advanced deepfake fraud to become a reality in the near future.
In conclusion, while AI has enabled delightful creations like the Beatles' new song, it also carries the risks associated with deepfakes. As the technology advances, so does the need for protection against its fraudulent use. By staying vigilant, asking unexpected questions, and using reliable security solutions, individuals can better protect themselves from deepfake fraud.