OpenAI's ChatGPT faces data exfiltration concerns, posing risks to users. Insufficient fixes and inaccuracies raise questions about user safety and data protection.
OpenAI's ChatGPT security patch fails to resolve data leak risk, leaving vulnerabilities that could compromise users' sensitive information. Custom GPTs feature still exposes data, raising concerns for privacy and security. Further improvements needed to ensure user safety.
A Northwestern University study reveals a security vulnerability in custom GPT programs created by OpenAI, potentially leading to data leaks. The research highlights the risks of prompt extraction and file leakage, with a high success rate in exploiting the vulnerability. Prompt injection attacks have also become a growing concern. The study hopes to prompt the AI community to develop stronger safeguards to balance innovation and security in AI technologies.
Generative AI technology like ChatGPT has sparked concerns about its potential for nefarious purposes. Cyber criminals share tools to recreate malware. AI-enhanced attacks can be combated with threat intelligence sharing and good security practices.
ChatGPT, an AI chatbot, can be tricked into creating code used for malicious software applications. Cybersecurity experts have identified this potential risk and its potential to facilitate criminal activities. As a result, G7 leaders are now addressing the need for appropriate regulations in order to protect society against misuse of this technology. Makoto Miwa, NIST professor and developer of ChatGPT, warned against the security vulnerability and highlighted the need for an international discussion on regulation.
Explore the evolution of tech policy from Obama's optimism to Harris's vision at the Democratic National Convention. What's next for Democrats in tech?