FTC Investigates OpenAI’s AI Data Security Amid Concerns over Privacy and Deceptive Practices
The Federal Trade Commission (FTC) is conducting an investigation into OpenAI, the company behind ChatGPT, focusing on issues related to AI data security and user privacy. This move highlights the growing concerns among regulators regarding the risks associated with artificial intelligence. The investigation was initially reported by The Washington Post, which obtained a letter sent by the FTC to OpenAI outlining its concerns and requests for information. According to the letter, the FTC aims to determine whether OpenAI has engaged in unfair or deceptive privacy or data security practices, as well as practices that could potentially harm consumers, including reputational harm.
OpenAI’s founder and CEO, Sam Altman, expressed disappointment that the investigation was leaked but assured compliance with the FTC’s requests in a tweet. However, the commission has not made a public announcement about the investigation itself.
Banks have recently started exploring the application of large language models, such as ChatGPT and Google’s Bard, primarily for internal purposes like organizing institutional knowledge and utilizing chatbots for customer service. However, the use of this technology has been limited to mitigate the associated risks and to address the concerns raised by regulators.
The FTC’s investigation encompasses a range of concerns raised by lawmakers during a May hearing, including the way OpenAI markets its technology to institutional customers like Morgan Stanley. This particular focus is significant since Morgan Stanley recently sought OpenAI’s assistance in utilizing AI to help its analysts sort through the company’s extensive collection of research reports.
One of the main aspects of the FTC’s inquiry revolves around potential false, misleading, or disparaging statements that OpenAI’s models could generate about individuals. For the banking sector, the most relevant aspect is OpenAI’s data protection practices and the security measures implemented to safeguard user information and the model itself.
The FTC has specifically requested details from OpenAI regarding a data breach that occurred in March. During this breach, certain ChatGPT Plus users were able to access other users’ payment-related information and chat titles. Although the breach did not expose complete credit card numbers, it did disclose users’ first and last names, email addresses, payment addresses, credit card types, and the last four digits of their credit card numbers. OpenAI responded to the breach by publishing technical details on how it occurred, attributing it to a server change that resulted in the inadvertent sharing of cached data between users.
The FTC has also inquired about OpenAI’s practices in handling users’ personal information, an area that has witnessed increased scrutiny from both the FTC and the Consumer Financial Bureau in recent rulemaking processes pertaining to financial data. Banks have faced similar scrutiny in the past, and a set of rules mandates banks to promptly inform regulators about any breaches involving consumer data.
Moreover, regulators and lawmakers have expressed concerns over the potential misuse of large language models. In the aforementioned Senate Judiciary Subcommittee hearing, Senator Josh Hawley raised questions about training AI models using data related to social media content that grabs users’ attention, highlighting the risk of manipulation in the ongoing war for clicks on social media platforms. While acknowledging the concerns, Altman clarified that OpenAI does not engage in such practices but acknowledged that other companies might employ AI models for accurate ad predictions.
The FTC’s investigation also delves into prompt injection attacks, where users manipulate the model into producing outputs it has been trained not to provide. Instances of users coaxing the model to divulge information such as explosive ingredients or Windows 11 keys have been documented. Users have even employed role-playing scenarios involving deceased relatives to obtain certain outputs. This aspect is of particular interest to the FTC in order to assess any potential risks associated with OpenAI’s model.
Banks that have adopted AI chatbots have taken precautions to limit the capabilities of these products to only what is necessary for banking operations. For instance, Capital One’s Eno chatbot cannot answer certain basic questions, such as confirming whether it is a large language model. This cautious approach aims to mitigate the risk of providing erroneous information to clients. Regulators closely monitor various aspects of customer service, including response times, chat durations, and accuracy.
As the FTC investigation unfolds, it highlights the need for robust data security measures and privacy practices in AI applications. OpenAI and other companies utilizing large language models will need to address these concerns to ensure compliance with regulatory standards, particularly when partnering with institutions such as banks, where protecting customer data and maintaining trust is of utmost importance.