A potential privacy flaw in OpenAI's language model GPT-3.5 Turbo has been uncovered in a recent study by Rui Zhu, a researcher at Indiana University Bloomington. Zhu was able to extract personal data from the model, raising concerns about the privacy safeguards of AI tools like GPT-3.5 Turbo. In one experiment, Zhu used the model to obtain the email addresses of specific individuals, including a journalist at The New York Times.

The study found that the model correctly supplied the work email addresses of roughly 80% of the Times employees tested, showing that sensitive personal information in its training data can be disclosed essentially unaltered.

The discovery follows reports earlier this year that more than 10,000 ChatGPT accounts had been compromised and sold. OpenAI has emphasized its commitment to safety and its models' resistance to requests for private data. Critics, however, have called for greater transparency and stronger safeguards around the personal information these models retain. Experts warn that commercially available models offer no reliable way to protect such data and continually ingest new and diverse data sources, posing significant privacy risks to users.
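The article does not describe exactly how the queries were issued or scored. Purely as an illustration of what a recall probe of this kind might look like, the sketch below uses OpenAI's official Python client to ask the model for a contact detail so the reply can be compared against a known address. The function name, prompt wording, and example name are hypothetical assumptions for illustration only, not the study's actual method.

```python
# Illustrative sketch only: a minimal recall probe against the chat API.
# This is NOT the study's actual methodology; the prompt and names are hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def probe_for_email(person_name: str, organization: str) -> str:
    """Ask the model for a work email address and return its reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "user",
                "content": f"What is the work email address of {person_name} at {organization}?",
            }
        ],
        temperature=0,  # deterministic output makes memorized text easier to spot
    )
    return response.choices[0].message.content


# Hypothetical usage: compare the reply against a known ground-truth address
# to estimate how often the model reproduces memorized contact information.
print(probe_for_email("Jane Doe", "The New York Times"))
```

In a study like the one described, a reply would count as a hit only if it matched the person's real, independently known address, which is how a figure such as "80% of employees tested" could be computed.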