FTC Probes OpenAI Over Potential Consumer Protection Violations
The Federal Trade Commission (FTC) has launched an investigation into OpenAI, the company behind the popular ChatGPT chatbot, for potential violations of consumer protection laws. The move comes as European regulators have already taken action and Congress is actively working on legislation to regulate the artificial intelligence (AI) industry.
In a 20-page demand for information sent to OpenAI during the week of July 10, 2023, the FTC requested details on user complaints alleging that OpenAI's products made false, misleading, disparaging, or harmful statements about people. The agency is also examining whether OpenAI engaged in unfair or deceptive practices that pose risks of harm, including reputational harm, to consumers, and has asked about OpenAI's data collection processes, model training methods, human feedback procedures, risk assessment and mitigation strategies, and privacy protection mechanisms.
As an expert in social media and AI, I recognize the transformative potential of generative AI models. However, I must highlight the risks associated with these systems, especially in terms of consumer protection. These models have the capacity to produce errors, exhibit biases, and violate personal data privacy.
At the core of chatbots like ChatGPT and image generation tools like DALL-E lies the power of generative AI models. These models can produce realistic text, images, audio, and video, and are accessible through browsers or smartphone apps.
The versatility of AI models allows them to be adapted for applications across many domains, from finance to biology. Adapting them requires little or no coding: tasks can often be described in plain language, which makes customization fast.
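To make that concrete, here is a minimal sketch of plain-language adaptation, written against the openai Python package as it existed in mid-2023; the model name, prompts, and API key are placeholders for illustration, not details from the FTC filing.

```python
import openai  # pip install openai (0.27-era interface)

openai.api_key = "YOUR_API_KEY"  # placeholder

# Adapting a general model to a domain can be as simple as describing
# the task in plain language via a system prompt.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are an assistant that summarizes loan applications for credit analysts."},
        {"role": "user",
         "content": "Summarize the key risks in this application: ..."},
    ],
)
print(response.choices[0].message.content)
```

Swapping the system prompt repurposes the same model for biology, law, or customer service, which is exactly why adaptation is so cheap.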
However, the lack of transparency surrounding the proprietary training data used by private organizations like OpenAI raises concerns. The public does not know what data was used to train models such as GPT-3 and GPT-4, and the sheer scale of these models (GPT-3 alone has 175 billion parameters) makes them challenging to audit. Consequently, it is difficult to determine whether the construction or training of these models causes harm.
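To put that number in perspective, here is a rough back-of-envelope calculation; the arithmetic is my own, and the 16-bit storage assumption is illustrative rather than a figure from any regulatory filing.

```python
params = 175e9            # GPT-3's reported parameter count
bytes_per_param = 2       # assuming fp16 storage

weights_gb = params * bytes_per_param / 1e9
print(f"Weights alone: {weights_gb:.0f} GB")                # ~350 GB

# If a human auditor inspected one parameter per second, nonstop:
years = params / (60 * 60 * 24 * 365)
print(f"Years to read every parameter once: {years:,.0f}")  # ~5,549 years
```

No inspection of individual weights can reveal what the model has learned; meaningful audits must probe model behavior, which is far harder and makes the opacity of the training data matter all the more.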
One particular issue with language-model AIs is hallucination: a confident response that is inaccurate and not supported by the model's training data. Even models designed to minimize hallucinations have, in some cases, amplified them.
The danger lies in generative AI models producing incorrect or misleading information that can harm users. A study of ChatGPT's scientific writing abilities in the medical field found that the chatbot fabricated citations to non-existent papers and reported results that do not exist. Other investigations have observed similar patterns.
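One practical safeguard is to verify machine-generated citations against a public registry before relying on them. Here is a minimal sketch using the Crossref REST API; the example DOIs are placeholders for illustration.

```python
import requests  # pip install requests

def doi_exists(doi: str) -> bool:
    """Return True if the DOI resolves in the Crossref registry."""
    resp = requests.get(
        f"https://api.crossref.org/works/{doi}",
        headers={"User-Agent": "citation-checker/0.1 (mailto:you@example.com)"},
        timeout=10,
    )
    return resp.status_code == 200

# Screen DOIs extracted from a chatbot-drafted manuscript (placeholders):
for doi in ["10.1000/real-looking-doi", "10.9999/fabricated.2023.001"]:
    status = "found" if doi_exists(doi) else "NOT FOUND (possible hallucination)"
    print(f"{doi}: {status}")
```

A missing registry entry does not prove fabrication, and a real-looking DOI does not prove the citation supports the claim, but this is a cheap first filter.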
These hallucinations can cause tangible harm when the models are used without adequate supervision. For example, ChatGPT falsely accused a professor of sexual harassment, causing significant reputational damage, and OpenAI faced a defamation lawsuit after ChatGPT falsely claimed a radio host was involved in embezzlement.
Without proper safeguards, generative AI models trained on vast swaths of internet data can perpetuate existing societal biases. In applications such as recruiting, this can translate into unintended discrimination against certain groups of people.
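Such disparities can be surfaced with very simple statistics. Here is a minimal sketch of the "four-fifths rule," a long-standing adverse-impact heuristic in U.S. employment screening, applied to synthetic hiring decisions; the group labels and outcomes below are invented for illustration.

```python
from collections import Counter

# Synthetic (group, hired?) decisions from a hypothetical AI screening tool.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, hired = Counter(), Counter()
for group, was_hired in decisions:
    totals[group] += 1
    if was_hired:
        hired[group] += 1

rates = {g: hired[g] / totals[g] for g in totals}
impact_ratio = min(rates.values()) / max(rates.values())

print(rates)                                # {'group_a': 0.75, 'group_b': 0.25}
print(f"Impact ratio: {impact_ratio:.2f}")  # 0.33, below the 0.8 threshold
```

An impact ratio below 0.8 does not prove discrimination, but it is the conventional signal that a closer look is warranted.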
The FTC probe into OpenAI is an important step toward ensuring consumer protection in an ever-evolving AI landscape. Balancing the remarkable promise of these models against their risks is crucial. By addressing false information, bias, and privacy violations, regulators aim to foster an environment where AI can flourish without jeopardizing individuals or social cohesion.
As the conversation around AI regulation continues, it is imperative to strike the right balance between promoting innovation and safeguarding users' rights. That requires transparent discussion, robust oversight, and ongoing collaboration among regulators, AI companies, and researchers to harness the benefits of AI responsibly and ethically.