Major Breakthrough: Australian Researchers Develop Non-Invasive AI Translating Silent Thoughts into Text


Australian researchers have made a major breakthrough in artificial intelligence (AI) with the development of a non-invasive technology, known as DeWave, that can translate silent thoughts into text. The system only requires users to wear a snug-fitting cap.

The researchers conducted tests on over two dozen subjects, who silently read text while wearing the cap. The cap, fitted with electroencephalogram (EEG) sensors, recorded the participants’ brain waves, which were then decoded into text. This innovation holds immense potential to help patients with stroke or paralysis communicate, and to make it easier to direct machines such as robots and bionic arms.

Chin-Teng Lin, a computer scientist from the University of Technology Sydney (UTS), described the research as pioneering in its ability to translate raw EEG waves directly into language. This breakthrough marks a significant milestone in the field.

DeWave achieved an accuracy of just over 40% on one of the two sets of metrics used in the experiment, a 3% improvement over previous EEG-based thought translation. The researchers aim to raise accuracy to roughly 90%, which would put it on par with conventional language translation and speech recognition software.

Unlike other methods that involve invasive surgical procedures or the use of expensive and bulky MRI machines, DeWave’s non-invasive approach makes it practical for everyday use. Traditional methods often require eye-tracking to decode brain signals into words.

Decoding EEG waves into words without relying on eye-tracking is more challenging because the brain waves of different individuals do not always represent word breaks in the same way. Teaching AI to interpret individual thoughts becomes tricky due to this variation.

DeWave uses discrete encoding techniques in its brain-to-text translation process, introducing an innovative approach to neural decoding. The technology encodes EEG waves into discrete codes that are matched to specific words based on their proximity to entries in DeWave’s codebook.
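
To make the codebook idea concrete, here is a minimal sketch of discrete, vector-quantization-style encoding: each EEG segment embedding is mapped to the index of its nearest codebook entry. The codebook here is random, and the sizes, distance rule, and function names are assumptions for illustration only, not DeWave’s published implementation.

```python
import numpy as np

# Minimal sketch of discrete codebook (vector-quantization-style) encoding.
# The codebook, sizes, and nearest-neighbour rule are illustrative assumptions.

rng = np.random.default_rng(0)

CODEBOOK_SIZE = 512   # number of discrete codes (assumed)
FEATURE_DIM = 128     # dimensionality of an EEG segment embedding (assumed)

# A trained system would learn this codebook; it is random here for illustration.
codebook = rng.normal(size=(CODEBOOK_SIZE, FEATURE_DIM))

def encode_segment(eeg_embedding: np.ndarray) -> int:
    """Map one EEG segment embedding to the index of its nearest codebook entry."""
    distances = np.linalg.norm(codebook - eeg_embedding, axis=1)
    return int(np.argmin(distances))

# A sequence of EEG segment embeddings becomes a sequence of code indices,
# which a downstream language model can treat like tokens.
eeg_segments = rng.normal(size=(10, FEATURE_DIM))
code_sequence = [encode_segment(segment) for segment in eeg_segments]
print(code_sequence)
```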

The research team incorporated large language models to enhance the technology’s performance. By combining BERT and GPT systems with datasets in which individuals’ eye movements and brain waves were recorded as they read, they were able to match brain-wave patterns with words. An open-source large language model was also employed to further train the technology to construct sentences.
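
To illustrate how eye-tracking can line up continuous EEG with the words being read, the sketch below slices a synthetic EEG recording by hypothetical fixation windows and reduces each slice to a crude per-word feature. The sampling rate, channel count, fixation times, and averaging step are all assumptions, not the actual dataset’s preprocessing.

```python
import numpy as np

# Sketch: use eye-tracking fixation windows to align continuous EEG with words.
# All numbers below (sampling rate, channels, fixation times) are assumptions.

rng = np.random.default_rng(1)

SAMPLE_RATE = 500                                     # Hz (assumed)
N_CHANNELS = 32                                       # EEG channels (assumed)
eeg = rng.normal(size=(N_CHANNELS, SAMPLE_RATE * 5))  # 5 seconds of synthetic EEG

# Hypothetical fixations from an eye tracker: (word, start in s, end in s).
fixations = [("the", 0.10, 0.32), ("cap", 0.35, 0.61), ("records", 0.64, 1.02)]

word_features = {}
for word, start_s, end_s in fixations:
    start, end = int(start_s * SAMPLE_RATE), int(end_s * SAMPLE_RATE)
    segment = eeg[:, start:end]                 # EEG recorded while the eyes fixated this word
    word_features[word] = segment.mean(axis=1)  # one crude feature vector per word

for word, feature in word_features.items():
    print(word, feature.shape)
```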

DeWave’s translation capabilities excel with verbs, while nouns are often rendered as words of similar meaning rather than exact translations. Yiqun Duan, a computer scientist from UTS, explained that this is likely because the brain processes words semantically, so similar words produce similar patterns of brain waves. Despite these challenges, the research has produced meaningful results.


The study is currently available on a preprint server, and its findings were presented at the NeurIPS 2023 conference. The relatively large sample used in the test helps account for the substantial variation in EEG wave distributions among individuals, making the research more reliable than previous studies with smaller samples.

This groundbreaking development opens up new possibilities in the fields of AI and neuroscience. With further refinement, DeWave has the potential to revolutionize communication for individuals with disabilities and lead to significant advancements in human-machine interactions.

As the future of this technology unfolds, researchers anticipate a wide range of applications, including improving the quality of life for patients with communication impairments and enabling seamless communication between humans and machines. The possibilities are endless, and the non-invasive nature of DeWave’s AI cap brings us one step closer to a world where thoughts can be translated into words effortlessly.

AI Mind-Reader? Non-Invasive Artificial Intelligence Can Translate Thoughts to Text Through a Wearable Cap

Australian researchers have developed a novel non-invasive AI capable of translating silent thoughts into text. What’s more, it only requires users to wear a snug-fitting cap to do so.

The researchers behind the technology, known as DeWave, tested the process with data collected from over two dozen subjects.

As part of the procedure, participants silently read with a cap on. The cap recorded their brain waves via an electroencephalogram (EEG), and the signals were then decoded into text. With more refinement, the technology could help patients with stroke or paralysis communicate, and could make it easier to direct machines such as robots or bionic arms.

Chin-Teng Lin, a computer scientist from the University of Technology Sydney (UTS), explains that the research is a pioneering effort in translating raw EEG waves directly into language. This marks a crucial breakthrough in the field.

While DeWave only achieved an accuracy of just over 40% on one of the two sets of metrics in the experiment, this is nevertheless a 3% improvement over earlier EEG thought translation. The researchers aim to boost accuracy to roughly 90%, putting it on par with typical language translation and speech recognition software.


Other ways to translate brain waves into language involve invasive surgical methods, such as electrode implantation, or the use of expensive and bulky MRI machines, which makes them impractical for regular, everyday use. These approaches also typically require eye-tracking to segment brain signals into word-sized chunks.

When a person’s eyes focus on one word and then move to another, it is reasonable to assume that the brain pauses briefly between processing individual words. Translating raw EEG waves into words without the aid of eye-tracking is therefore more difficult.

The brain waves of different individuals do not always represent word breaks in the same way, which makes it hard to teach an AI to interpret individual thoughts.

With sufficient training, DeWave’s encoder translates EEG waves into discrete codes that can be matched to specific words based on their closeness to entries in DeWave’s codebook. Lin explains that the technology is the first to incorporate discrete encoding techniques into the brain-to-text translation process, introducing an innovative approach to neural decoding.

Lin adds that the integration of large language models also opens new doors in the worlds of AI and neuroscience.

The team used language models that combine a system known as BERT with GPT, and tested them on datasets in which individuals’ eye movements and brain waves were recorded as they read text. This helped the technology match brain-wave patterns with words. It was then further trained with an open-source large language model that, in essence, assembles those words into sentences.
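
As a loose illustration of that final stage, the toy decoder below turns a sequence of discrete EEG codes into word IDs. It is a small GRU stand-in for the open-source large language model mentioned above, not the authors’ model; the vocabulary size, special tokens, and greedy decoding loop are assumptions, and an untrained model like this will of course emit nonsense.

```python
import torch
import torch.nn as nn

# Toy stand-in for the sentence-construction stage: discrete EEG codes in, word ids out.
# All sizes and special tokens below are assumptions for illustration only.

CODEBOOK_SIZE, VOCAB_SIZE, HIDDEN = 512, 1000, 256
BOS, EOS = 1, 2   # hypothetical begin/end-of-sentence word ids

class CodeToText(nn.Module):
    def __init__(self):
        super().__init__()
        self.code_emb = nn.Embedding(CODEBOOK_SIZE, HIDDEN)   # embed discrete EEG codes
        self.word_emb = nn.Embedding(VOCAB_SIZE, HIDDEN)      # embed previously emitted words
        self.decoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB_SIZE)

    @torch.no_grad()
    def generate(self, codes: torch.Tensor, max_len: int = 20) -> list:
        # Summarise the EEG code sequence into the decoder's initial hidden state.
        h = self.code_emb(codes).mean(dim=1, keepdim=True).transpose(0, 1).contiguous()
        token = torch.full((codes.size(0), 1), BOS, dtype=torch.long)
        words = []
        for _ in range(max_len):
            step, h = self.decoder(self.word_emb(token), h)
            token = self.out(step).argmax(dim=-1)   # greedy word choice
            if token.item() == EOS:
                break
            words.append(token.item())
        return words

model = CodeToText()
codes = torch.randint(0, CODEBOOK_SIZE, (1, 10))   # e.g. the output of the codebook step
print(model.generate(codes))
```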

DeWave performed best when translating verbs. Nouns, however, were typically rendered as words with a similar meaning rather than as exact translations.

Yiqun Duan, the study’s first author and a computer scientist from UTS, explains that the team believes this is because the brain processes words semantically, so similar words can produce similar patterns of brain waves. While the technology still faces challenges, it has already yielded meaningful results.

The test’s relatively large sample helps account for the fact that EEG wave distributions vary greatly between individuals, which suggests the research is more reliable than earlier work based on smaller samples.

The study is currently available on a preprint server, and the findings were presented at the NeurIPS 2023 conference.
