The US Federal Trade Commission (FTC) has launched an investigation into OpenAI, the creator of ChatGPT, a popular artificial intelligence (AI) language model. The probe is part of a broader examination by regulatory authorities into the ethical practices and potential harms associated with AI technology.
The investigation follows an announcement by OpenAI that it has entered into a licensing agreement with a global news agency for access to the agency’s archive of news stories. The financial details of the deal have not been disclosed.
OpenAI and other technology companies rely on vast amounts of written content from sources such as books, news articles, and social media to train their AI models. These large language models, including ChatGPT, have raised concerns regarding their ability to generate falsehoods that are difficult to detect due to their mastery of human language patterns.
Additionally, questions have been raised about the compensation owed to news organizations and individual creators whose work is used to train AI models. Over 4,000 writers, including notable authors like Nora Roberts and Margaret Atwood, signed a letter accusing AI developers of exploitative practices in mimicking and reproducing their language, style, and ideas.
The FTC’s investigation focuses on OpenAI, which is at the forefront of generative AI technology. ChatGPT, built on a large language model trained on vast amounts of text from the internet, can generate remarkably human-like responses.
This investigation marks a significant step toward ensuring that AI companies operate ethically and responsibly. Divyansh Kaushik, Associate Director for Emerging Technologies and National Security at the Federation of American Scientists, said it could have implications both for OpenAI and for the broader AI innovation sector.
In recent months, the rapid advancement of generative AI technology, exemplified by ChatGPT, has prompted calls for regulation and a temporary halt in the training of advanced AI systems. OpenAI’s CEO, Sam Altman, has been a vocal advocate for regulatory measures, emphasizing the potential risks associated with AI if it goes awry.
The Center for Artificial Intelligence and Digital Policy filed a complaint in March, urging the FTC to intervene and establish safeguards to protect consumers and businesses.
The FTC is concerned not only with potential harms to consumers but also with competition in the AI space. The agency wants to ensure that dominant firms aren’t using their power to discriminate against competitors by controlling key inputs such as data.
As part of its investigation into OpenAI, the FTC has requested detailed descriptions of any complaints related to false or misleading statements made by its products, as well as records pertaining to a security incident that occurred earlier this year.
The FTC’s investigation of OpenAI highlights the increasing scrutiny being placed on AI companies and the need for ethical and responsible practices in the field. As regulators become more proactive, companies must be prepared to operate within established guidelines to foster trust and ensure the safe development and deployment of AI technologies.