In recent weeks, ChatGPT has suffered another round of bad news with the confirmation of a data breach. OpenAI, the company behind the AI-powered chatbot, traced the incident to a vulnerability in an open-source library its code relies on, which allowed some users to see another active user’s first and last name, email address, payment address, and the last four digits of a credit card number, though not the full number itself. OpenAI estimates that only about 1% of users were affected by the leak, but the incident raises questions about the platform’s safety and security.
Just last month, employees at electronics giant Samsung were found to have used ChatGPT to share sensitive company data. In response, Samsung banned use of the chatbot on company-owned devices and internal networks to keep company data off an unreliable platform. Alongside the ban, Samsung is developing its own internal AI tools so that employees can get software development assistance in a secure environment.
Meanwhile, Roy Akerman, co-founder and CEO of Rezonate, has issued a statement detailing the risks large language models pose to organizations. He warns developers of the difficulty of retrieving or deleting data once it has been transmitted to AI platforms like ChatGPT, and advises companies to give developers clearer guidelines on the use of such technologies. He emphasizes that blanket restrictions alone are not enough to secure data privacy, and recommends instead educating users on the implications of using AI tools.
In response to what is now the company’s ninth breach since 2018, T-Mobile issued an apology to customers impacted by the incident, which exposed personal information, account information, and PINs for over 800 users. The telecom giant affirmed that it has safeguards in place to prevent such unauthorized access, yet acknowledged that it must continue to make improvements to stay one step ahead of malicious actors.