FTC investigation highlights AI data security concerns at OpenAI

Date:

FTC Investigates OpenAI’s AI Data Security Amid Concerns over Privacy and Deceptive Practices

The Federal Trade Commission (FTC) is conducting an investigation into OpenAI, the company behind ChatGPT, focusing on issues related to AI data security and user privacy. This move highlights the growing concerns among regulators regarding the risks associated with artificial intelligence. The investigation was initially reported by The Washington Post, which obtained a letter sent by the FTC to OpenAI outlining its concerns and requests for information. According to the letter, the FTC aims to determine whether OpenAI has engaged in unfair or deceptive privacy or data security practices, as well as practices that could potentially harm consumers, including reputational harm.

OpenAI’s founder and CEO, Sam Altman, expressed disappointment that the investigation was leaked but assured compliance with the FTC’s requests in a tweet. However, the commission has not made a public announcement about the investigation itself.

Banks have recently started exploring the application of large language models, such as ChatGPT and Google’s Bard, primarily for internal purposes like organizing institutional knowledge and utilizing chatbots for customer service. However, the use of this technology has been limited to mitigate the associated risks and to address the concerns raised by regulators.

The FTC’s investigation encompasses a range of concerns raised by lawmakers during a May hearing, including the way OpenAI markets its technology to institutional customers like Morgan Stanley. This particular focus is significant since Morgan Stanley recently sought OpenAI’s assistance in utilizing AI to help its analysts sort through the company’s extensive collection of research reports.

See also  Volkswagen to Equip Future Vehicles with Conversational AI, Enhancing Driving Experience

One of the main aspects of the FTC’s inquiry revolves around potential false, misleading, or disparaging statements that OpenAI’s models could generate about individuals. For the banking sector, the most relevant aspect is OpenAI’s data protection practices and the security measures implemented to safeguard user information and the model itself.

The FTC has specifically requested details from OpenAI regarding a data breach that occurred in March. During this breach, certain ChatGPT Plus users were able to access other users’ payment-related information and chat titles. Although the breach did not expose complete credit card numbers, it did disclose users’ first and last names, email addresses, payment addresses, credit card types, and the last four digits of their credit card numbers. OpenAI responded to the breach by publishing technical details on how it occurred, attributing it to a server change that resulted in the inadvertent sharing of cached data between users.

The FTC has also inquired about OpenAI’s practices in handling users’ personal information, an area that has witnessed increased scrutiny from both the FTC and the Consumer Financial Bureau in recent rulemaking processes pertaining to financial data. Banks have faced similar scrutiny in the past, and a set of rules mandates banks to promptly inform regulators about any breaches involving consumer data.

Moreover, regulators and lawmakers have expressed concerns over the potential misuse of large language models. In the aforementioned Senate Judiciary Subcommittee hearing, Senator Josh Hawley raised questions about training AI models using data related to social media content that grabs users’ attention, highlighting the risk of manipulation in the ongoing war for clicks on social media platforms. While acknowledging the concerns, Altman clarified that OpenAI does not engage in such practices but acknowledged that other companies might employ AI models for accurate ad predictions.

See also  Datadog Inc: Riding the AI Boom to Skyrocketing Growth in 2023

The FTC’s investigation also delves into prompt injection attacks, where users manipulate the model into producing outputs it has been trained not to provide. Instances of users coaxing the model to divulge information such as explosive ingredients or Windows 11 keys have been documented. Users have even employed role-playing scenarios involving deceased relatives to obtain certain outputs. This aspect is of particular interest to the FTC in order to assess any potential risks associated with OpenAI’s model.

Banks that have adopted AI chatbots have taken precautions to limit the capabilities of these products to only what is necessary for banking operations. For instance, Capital One’s Eno chatbot cannot answer certain basic questions, such as confirming whether it is a large language model. This cautious approach aims to mitigate the risk of providing erroneous information to clients. Regulators closely monitor various aspects of customer service, including response times, chat durations, and accuracy.

As the FTC investigation unfolds, it highlights the need for robust data security measures and privacy practices in AI applications. OpenAI and other companies utilizing large language models will need to address these concerns to ensure compliance with regulatory standards, particularly when partnering with institutions such as banks, where protecting customer data and maintaining trust is of utmost importance.

Frequently Asked Questions (FAQs) Related to the Above News

What is the FTC investigating regarding OpenAI?

The Federal Trade Commission (FTC) is investigating OpenAI's AI data security and user privacy practices.

What are the concerns raised by the FTC?

The FTC aims to determine whether OpenAI has engaged in unfair or deceptive privacy or data security practices, as well as practices that could potentially harm consumers, including reputational harm.

What does OpenAI's founder and CEO say about the investigation?

OpenAI's CEO, Sam Altman, expressed disappointment about the investigation being leaked but assured compliance with the FTC's requests.

How are banks using large language models like ChatGPT?

Banks are mainly using large language models for internal purposes, such as organizing institutional knowledge and utilizing chatbots for customer service, but with limitations to address the associated risks.

What specific concerns are lawmakers and the FTC focusing on?

Lawmakers and the FTC are particularly concerned about potential false, misleading, or disparaging statements that OpenAI's models could generate about individuals, as well as OpenAI's data protection practices and security measures.

What data breach incident has the FTC asked OpenAI about?

The FTC has specifically inquired about a data breach in March where certain ChatGPT Plus users were able to access other users' payment-related information and chat titles.

How have regulators expressed concerns about the misuse of large language models?

Regulators are concerned about training AI models using data related to social media content that manipulates user attention, as well as potential risks associated with prompt injection attacks and users coaxing models to provide sensitive information.

What precautions have banks taken when adopting AI chatbots?

Banks have limited the capabilities of AI chatbots to only what is necessary for banking operations, in order to mitigate the risk of providing erroneous information to clients.

What does the FTC investigation highlight regarding data security and privacy in AI applications?

The investigation highlights the need for robust data security measures and privacy practices in AI applications, especially when partnering with institutions like banks where customer data protection and trust are crucial.

Please note that the FAQs provided on this page are based on the news article published. While we strive to provide accurate and up-to-date information, it is always recommended to consult relevant authorities or professionals before making any decisions or taking action based on the FAQs or the news article.

Aryan Sharma
Aryan Sharma
Aryan is our dedicated writer and manager for the OpenAI category. With a deep passion for artificial intelligence and its transformative potential, Aryan brings a wealth of knowledge and insights to his articles. With a knack for breaking down complex concepts into easily digestible content, he keeps our readers informed and engaged.

Share post:

Subscribe

Popular

More like this
Related

Sino-Tajik Relations Soar to New Heights Under Strategic Leadership

Discover how Sino-Tajik relations have reached unprecedented levels under strategic leadership, fostering mutual benefits for both nations.

Vietnam-South Korea Visit Yields $100B Trade Goal by 2025

Vietnam-South Korea visit aims for $100B trade goal by 2025. Leaders focus on cooperation in various areas for mutual growth.

Albanese Government Unveils Aged Care Digital Strategy for Better Senior Care

Albanese Government unveils Aged Care Digital Strategy to revolutionize senior care in Australia. Enhancing well-being through data and technology.

World’s First Beach-Cleaning AI Robot Debuts on Valencia’s Sands

Introducing the world's first beach-cleaning AI robot in Valencia, Spain - 'PlatjaBot' revolutionizes waste removal with cutting-edge technology.