The US Supreme Court's imminent ruling in Gonzalez v. Google, a case centered on YouTube, could shape the legal fate of new technologies such as the AI chatbot ChatGPT by determining what legal protections generative AI tools can claim. The court's decision, due in June, will settle whether Section 230 of the Communications Decency Act of 1996 – the law shielding technology platforms from legal responsibility for content created by their users – also applies when companies use algorithms to recommend videos or other content to users.
In February, the justices expressed uncertainty about whether to weaken that legal protection, raising the question of whether generative AI tools should benefit from Section 230 when faced with claims of defamation or privacy violations. Cameron Kerry, a Brookings Institution fellow and AI expert, noted that the same debate arises with chatbots, which rely on recommendation engines capable of shaping content.
The case reflects a growing conversation about whether Section 230 immunity extends to generative AI models. Section 230 covers third-party content created by a platform's users, not necessarily information the company itself played a role in developing. Courts have yet to rule on how a response from an AI chatbot would be treated legally, but Justice Neil Gorsuch remarked that tools generating 'poetry' and 'polemics' would likely not be protected.
Representatives from OpenAI and Google did not comment on the matter. Democratic Senator Ron Wyden, who helped draft the 1996 law, believes the protection should not apply to generative AI tools. Carl Szabo of the tech industry trade group NetChoice countered that AI does not create content so much as organize existing data into a different presentation.
The implications of the upcoming decision extend to the AI chatbot industry as a whole, and industry players should prepare for a potential shakeup. If the Court finds Section 230 protections inapplicable to these tools, AI companies would be exposed to a wave of litigation that could hold back innovation.
Even if the shield is deemed applicable in principle, courts may take a middle-ground approach, weighing context to determine whether protection should be granted in a given case. Where an AI model merely reproduces existing online sources, the shield may still apply; the issue becomes more complicated when the model produces original output unrelated to those sources.
OpenAI is an AI research laboratory co-founded by Sam Altman, Elon Musk, and others, and one of its flagship projects is ChatGPT, a generative AI chatbot. ChatGPT and its successor, GPT-4, operate in ways that, like YouTube's recommendation algorithms, select and shape the content users see, making the outcome of the case especially relevant to OpenAI. A weakened Section 230 liability shield could have serious consequences for the AI chatbot industry, and the ruling is likely to exert considerable influence on legal opinions formed in the years ahead.