The US Supreme Court is soon to decide whether to weaken a powerful legal shield protecting internet companies, a ruling that could also affect rapidly developing technologies such as ChatGPT, the AI chatbot from OpenAI, in which Microsoft is a major investor. The decision could help determine whether companies that develop generative AI tools can be held legally liable for issues such as defamation or privacy violations.
The case before the court does not directly involve generative AI, but parallels can be drawn between the algorithms that power features like YouTube's video recommendations and those behind ChatGPT. Specifically, the Supreme Court is weighing whether the US Communications Decency Act of 1996, which grants digital platforms legal immunity for hosting user-generated content, should also apply when companies use their algorithms to target users with recommendations.
Opinions on the matter vary. Carl Szabo of NetChoice, a tech industry trade group, argues that AI does not truly create anything original; it merely collects and reorganizes existing data. On the other side, Democratic Senator Ron Wyden, a former member of the House of Representatives, believes that if companies use AI to create content, the law's protections should not apply to them.
If the court decides to weaken the law, AI developers risk being exposed to a flood of lawsuits, which could stifle the development of these technologies. While the rules may vary by context, the debate ultimately centers on companies' liability for content generated by AI tools such as ChatGPT or Google's Bard. According to experts, the court should seek a middle ground by distinguishing between an application that merely organizes existing data and one that creates entirely new material.
Cameron Kerry, a fellow at the Brookings Institution, said that these issues are equally relevant for AI-based chatbots. If the court decides that responsibility should rest with the developer, companies could be pushed to produce safer products, according to Hany Farid, a professor at the University of California, Berkeley.
OpenAI and Google have not commented on the issue. The Supreme Court is expected to announce its decision by the end of June.