Regulators in the US and Worldwide Increase Scrutiny of Generative AI and ChatGPT
Regulators, both in the United States and around the world, are showing greater concern about the potential risks of using generative artificial intelligence (AI) systems for commercial, business or personal purposes. Whether the AI is applied to voice recognition, automation, marketing or other purposes, regulators are scrutinizing the risks of these technologies more closely. In the US, the Federal Trade Commission (FTC) has discussed the implications of such AI technologies, warning companies to weigh the interests and risks involved when deploying generative AI.
FTC Commissioner Alvaro Bedoya has noted that AI is already regulated by the FTC under Section 5 of the FTC Act, which prohibits unfair or deceptive acts and practices. Companies making, selling or using AI must be mindful of the FTC’s authority in this area and of potential enforcement if a deceptive claim is made. An FTC blog post in March 2023 also advised companies to assess the potential risks of deploying AI tools that can create synthetic media. Because deepfakes and voice cloning may create privacy and fraud risks, the FTC offered guidance on reducing or mitigating those risks.
An example of the potential misuses of AI is ChatGPT, an AI-powered language app. Regulators have begun to express skepticism about its apparent lack of privacy safeguards and its ability to generate misinformation. Italy made headlines as the first country in the West to issue a ban on ChatGPT, with regulators in France and Ireland reportedly looking into similar action. Germany is also said to be considering blocking ChatGPT over data security concerns. Separately, Brian Hood, mayor of Hepburn Shire in Australia, is contemplating legal action after ChatGPT produced factually incorrect details about his record.
The FTC has also issued guidance on the use of generative AI in advertising, stressing that companies must be transparent about how an AI product operates and what its capabilities are. Moreover, any claims that an AI technology inherently outperforms non-AI counterparts must be substantiated and must not be deceptive.
Finally, the FTC recommends that companies collecting consumer data to train AI systems should seek consent in a transparent manner. This advice is especially important, given that many countries are introducing more stringent privacy laws.
OpenAI and ChatGPT
OpenAI is the company behind the AI-powered language app ChatGPT. Founded in 2015, OpenAI is a research laboratory comprising an international network of over 100 artificial intelligence researchers and professionals working together to advance AI technologies. With the aim of “advancing digital intelligence in the way that is most likely to benefit humanity as a whole”, OpenAI works on projects spanning deep learning, generative technologies, reinforcement learning and robotics.
OpenAI’s application, ChatGPT, is a generative AI language model designed to hold dialogue with users. Initially built for use in customer service conversations, the application has drawn significant regulatory scrutiny and has been banned in some countries over privacy concerns.
Looking Ahead
Regulatory scrutiny of generative AI and OpenAI is likely to keep increasing. Companies using generative AI technologies or developing new AI products should take proactive steps to comply with the FTC’s guidance and with the rules established by international privacy regulators. New York City has already passed regulations on how automated decision tools can be used in employment, and the National Telecommunications and Information Administration has requested feedback on how AI audits and certifications can increase trust in AI systems. Companies should proceed cautiously and stay alert to this growing regulatory oversight.