Humans struggle to detect deepfake speech, new research reveals

New research from UCL has found that humans can identify artificially generated speech only 73% of the time. Surprisingly, accuracy is the same whether the speech is in English or Mandarin.

Deepfakes are synthetic media designed to mimic a real person’s voice or appearance. They are a form of generative artificial intelligence (AI): machine learning algorithms learn the patterns and characteristics of a person’s voice or likeness from datasets of real recordings and use them to produce convincing imitations.

In the past, deepfake speech algorithms needed a large number of voice samples to produce convincing audio. The latest pre-trained algorithms, however, can clone a person’s voice from just a three-second clip of them speaking. Open-source algorithms are readily available, and with some expertise an individual can train one in a few days.
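To make this concrete, here is a minimal sketch of voice cloning with the open-source Coqui TTS library. The library, model name, and file paths are illustrative assumptions, not the tools used in the study:

```python
from TTS.api import TTS

# Load a pre-trained multilingual voice-cloning model (XTTS v2).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Synthesize new speech in the voice of a short reference clip.
tts.tts_to_file(
    text="This sentence was never spoken by the reference speaker.",
    speaker_wav="reference_clip.wav",  # a few seconds of the target voice
    language="en",
    file_path="cloned_output.wav",
)
```

A few seconds of clean reference audio is typically enough for a recognizable, if imperfect, imitation.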

Even tech giant Apple recently introduced software that allows users to create a replica of their own voice using only 15 minutes of recordings.

To investigate how well humans can tell real speech from fake, researchers at UCL used a text-to-speech (TTS) algorithm trained on two publicly available datasets, one in English and one in Mandarin, to generate 50 deepfake speech samples in each language. The samples were distinct from the training data so that the algorithm would not simply reproduce its original input.

These artificially generated samples, along with genuine ones, were played to 529 participants, who were asked to identify which were real. Participants spotted the fake speech only 73% of the time, a figure that improved only slightly after they were trained to recognize features of deepfake speech.
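As a toy illustration of how accuracy in such a listening test is scored (the responses below are hypothetical, not data from the study):

```python
# Each trial pairs a clip's true label with a participant's answer.
responses = [  # hypothetical data for illustration
    ("fake", "fake"), ("real", "real"), ("fake", "real"), ("real", "real"),
]

correct = sum(truth == answer for truth, answer in responses)
print(f"Detection accuracy: {correct / len(responses):.0%}")  # -> 75%
```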

Kimberly Mai, first author of the study and affiliated with UCL Computer Science, said of the findings: “Our research confirms that humans are unable to consistently detect deepfake speech, regardless of whether they have received training to identify artificial content. It is worth noting that the samples used in our study were created using relatively old algorithms, raising concerns about human detection capabilities when faced with deepfake speech generated using the most advanced technology available today and in the future.”

The researchers’ next step is to develop more sophisticated automated speech detectors, part of an ongoing effort to build detection capabilities that can counter the threat of artificially generated audio and imagery.
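Many automated detectors in the research literature follow a simple pattern: summarize each clip as acoustic features, then train a classifier on labelled real and fake examples. The sketch below assumes the librosa and scikit-learn libraries and placeholder file names; it illustrates the general approach, not the UCL team’s detector:

```python
import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression

def mfcc_features(path: str) -> np.ndarray:
    """Summarize a clip as the mean of its MFCC frames."""
    audio, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20).mean(axis=1)

# Hypothetical labelled corpus: 0 = genuine, 1 = deepfake.
paths = ["real_01.wav", "real_02.wav", "fake_01.wav", "fake_02.wav"]
labels = np.array([0, 0, 1, 1])

X = np.stack([mfcc_features(p) for p in paths])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Probability that a new clip is synthetic.
prob_fake = clf.predict_proba(mfcc_features("unknown.wav")[None, :])[0, 1]
print(f"P(deepfake) = {prob_fake:.2f}")
```

Real systems use far larger corpora and stronger models, but the feature-extraction-plus-classifier structure is the same.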

While generative AI audio technology offers benefits such as improving accessibility for those with limited speech abilities or those suffering from voice loss due to illness, there are growing concerns about criminals and nation states exploiting such technology to cause harm.

Instances of criminals using deepfake speech have already been documented, including a 2019 case in which the CEO of a British energy company was persuaded to transfer large sums of money to a fraudulent supplier by a deepfake recording of his boss’s voice.

Professor Lewis Griffin, senior author of the study and affiliated with UCL Computer Science, highlighted the need for governments and organizations to devise strategies for dealing with the misuse of generative AI tools, while also urging recognition of the technology’s positive possibilities on the horizon.

As generative artificial intelligence technology advances and its tools become more accessible, a careful balance must be struck: harnessing the technology’s benefits while protecting individuals and societies from its abuse.

In conclusion, deepfake speech remains a difficult and evolving challenge. The ability of AI algorithms to generate remarkably realistic audio poses significant risks that demand urgent attention. Researchers continue to work on better detection, but a comprehensive strategy combining technological advances with societal awareness will be essential to navigate this landscape.

Frequently Asked Questions (FAQs) Related to the Above News

What is deepfake speech?

Deepfake speech refers to artificially generated audio that mimics a person's voice using generative artificial intelligence algorithms. It can recreate the sound and pattern of speech by learning from datasets of real individuals.

How accurate are humans at detecting deepfake speech?

According to the UCL research, humans detect deepfake speech with only 73% accuracy, and this accuracy is the same whether the speech is in English or Mandarin.

How do deepfake speech algorithms generate convincing audio?

In the past, deepfake speech algorithms required a large number of voice samples to produce convincing audio. With the latest pre-trained algorithms, however, a person's voice can be cloned from just a three-second clip of them speaking. Open-source algorithms are readily available, and with some expertise an individual can train one in a few days.

What did the UCL research reveal about human detection capabilities of deepfake speech?

The UCL research revealed that participants recognized fake speech only 73% of the time. Even after training to recognize aspects of deepfake speech, their detection ability improved only slightly.

What are the implications of the research findings?

The research findings raise concerns about human detection capabilities when faced with deepfake speech generated using advanced technology. As generative AI audio technology becomes more accessible, there is a need to develop sophisticated automated speech detectors to counteract the threat posed by artificially generated audio and imagery.

What are the potential risks associated with deepfake speech?

Concerns have been raised about criminals and nation states exploiting deepfake speech technology for malicious purposes. Instances of deepfake speech being used to deceive individuals and commit crimes, such as fraudulent money transfers, have already been documented.

What are the positive possibilities of generative AI audio technology?

Generative AI audio technology offers benefits such as improving accessibility for individuals with limited speech abilities or those suffering from voice loss due to illness. It has the potential to enhance communication and contribute to positive advancements in various fields.

How can the misuse of generative AI tools be addressed?

The study emphasizes the need for governments and organizations to devise strategies to address potential misuse of generative AI tools. This includes developing more advanced detection capabilities and raising societal awareness about the risks and implications of deepfake speech.

What is the importance of striking a balance between the positive potential and negative consequences of generative AI audio technology?

It is crucial to strike a balance between harnessing the positive potential of generative AI audio technology and protecting individuals and societies from its negative consequences. A comprehensive strategy involving both technological advancement and societal awareness is necessary to navigate the evolving landscape of artificial media effectively.
