AI Tool Can Steal Passwords with 95% Accuracy by Listening to Keystrokes

Researchers have demonstrated a new AI-driven attack that can steal passwords with remarkable accuracy simply by listening to the sound of keystrokes. According to their research paper, published on the arXiv preprint server, the integrated microphone of a nearby phone can be used to identify keystrokes on a MacBook Pro with up to 95% accuracy, without the need for a large language model.

The team trained an AI model on the waveform, intensity, and timing of each keystroke so that it could identify which key was pressed. The model also accounts for individual typing styles, including slight variations in the timing of key presses. When keystrokes were recorded over Zoom and Skype rather than by a nearby phone, accuracy dropped slightly to 93% and 91.7%, respectively.
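To give a rough sense of the preprocessing such an attack depends on, the hypothetical sketch below converts a recorded key press into a mel-spectrogram, the kind of time-frequency "image" an image classifier can be trained on. The file name, sample rate, and spectrogram parameters are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical sketch: turn a single keystroke recording into a mel-spectrogram
# "image" suitable for an image classifier. All parameters are illustrative.
import librosa
import numpy as np

def keystroke_to_melspectrogram(wav_path: str, sr: int = 44100) -> np.ndarray:
    # Load an isolated key-press clip, e.g. as captured by a nearby phone microphone.
    audio, sr = librosa.load(wav_path, sr=sr)
    # Mel-scaled spectrogram: time along one axis, frequency bands along the other.
    mel = librosa.feature.melspectrogram(
        y=audio, sr=sr, n_mels=64, n_fft=1024, hop_length=256
    )
    # Convert power to decibels so intensity differences between keys are easier to learn.
    return librosa.power_to_db(mel, ref=np.max)

# Example (assumes a hypothetical recording file exists):
# spec = keystroke_to_melspectrogram("keystroke_a.wav")
```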

To execute this attack, malware would first need to be installed on a phone or another nearby device with a microphone. The malware would then record the target's keystrokes through the microphone and feed the audio into the AI model. The researchers used an AI image classifier called CoAtNet, training it on recordings of 36 keys on a MacBook Pro, each pressed 25 times.
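The sketch below shows what training such a classifier could look like, assuming the keystroke recordings have already been converted into labeled spectrograms. A small convolutional network stands in for the CoAtNet model the researchers used; every name and hyperparameter here is an illustrative assumption.

```python
# Hypothetical sketch: train a small CNN to classify keystroke spectrograms
# into 36 classes (one per key). A simple CNN stands in for CoAtNet.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

NUM_KEYS = 36  # the paper's setup: 36 keys, each pressed 25 times

class KeystrokeCNN(nn.Module):
    def __init__(self, num_classes: int = NUM_KEYS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

def train(spectrograms: torch.Tensor, labels: torch.Tensor, epochs: int = 10) -> KeystrokeCNN:
    # spectrograms: (N, 1, n_mels, time); labels: (N,) with values in [0, NUM_KEYS)
    model = KeystrokeCNN()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    loader = DataLoader(TensorDataset(spectrograms, labels), batch_size=32, shuffle=True)
    for _ in range(epochs):
        for batch, targets in loader:
            optimizer.zero_grad()
            loss_fn(model(batch), targets).backward()
            optimizer.step()
    return model
```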

Suggested mitigations include leveraging biometric authentication features such as Windows Hello and Touch ID, or adopting a reputable password manager so that passwords are rarely typed at all. It's worth noting, however, that buying a better keyboard is not a defense: the quietness of a keyboard has little impact on the accuracy of the attack.

This AI-driven attack is just the latest in a series of new threat vectors enabled by AI tools. The FBI recently warned about the use of AI-powered chatbots in launching criminal campaigns, and security researchers have encountered malware that uses tools like ChatGPT to adapt on the fly.


It’s important to remain vigilant and explore robust security measures to protect sensitive information. Whether it’s adopting advanced authentication methods or relying on artificial intelligence to detect and prevent such attacks, taking proactive steps is crucial in safeguarding personal data in the digital age.

Frequently Asked Questions (FAQs) Related to the Above News

How does the AI-driven attack work?

The attack involves installing malware on a phone or nearby device with a microphone. The malware records the target's keystrokes via the microphone and feeds the audio into an AI model that has been trained to identify which keys were pressed with remarkable accuracy.

What accuracy rates were achieved in reproducing keystrokes?

The research paper reports that the AI model can identify keystrokes on a MacBook Pro with up to 95% accuracy when using the integrated microphone of a nearby phone. During tests using Zoom and Skype, accuracy dropped slightly to 93% and 91.7%, respectively.

Can this attack be mitigated by using a quieter keyboard?

No, the quietness of a keyboard has little impact on the accuracy of this type of attack. The model is trained on the waveform, intensity, and timing of each key press, so even a relatively quiet keyboard produces acoustic signatures distinct enough for the classifier to tell keys apart.

What are some suggested measures to mitigate this AI-driven attack?

Suggestions to mitigate this type of attack include leveraging biometric authentication features like Windows Hello and Touch ID, or adopting a reputable password manager. It's important to use strong and unique passwords and follow best practices for cybersecurity to minimize the risk of such attacks.
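As a concrete illustration of the "strong and unique passwords" advice, the hypothetical snippet below generates a random password using Python's standard-library secrets module; the length and character set are arbitrary choices for the example, not recommendations from the article or the paper.

```python
# Generate a strong random password with the standard-library `secrets` module.
import secrets
import string

def generate_password(length: int = 20) -> str:
    # Letters, digits, and punctuation; adjust the alphabet to fit site requirements.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different value on every run
```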

Are there other AI-powered threats similar to this attack?

Yes, several other AI-powered threats are emerging. The FBI has recently warned about the use of AI-powered chatbots in launching criminal campaigns, and security researchers have encountered malware that uses tools like ChatGPT to adapt quickly. It's crucial to remain vigilant and adopt robust security measures to protect sensitive information in the face of these evolving threats.

What should individuals do to safeguard their personal data?

Individuals should take proactive steps to protect their personal data in the digital age. This can include adopting advanced authentication methods like biometrics, using reliable password managers, and staying informed about the latest cybersecurity trends. It's also advisable to regularly update software and devices, use strong and unique passwords, and exercise caution when interacting with unfamiliar online sources.

