The U.S. Supreme Court is expected to decide soon whether Alphabet’s YouTube can be sued over its video recommendations. At stake is a powerful legal provision that shields technology firms from lawsuits over user-posted content. The case could also have implications for emerging technologies such as ChatGPT, the artificial intelligence chatbot from OpenAI, an organization backed by Microsoft.
The law in question, Section 230 of the Communications Decency Act of 1996, may determine whether companies can be held liable when AI models defame someone or breach their privacy. The algorithms that power AI tools work much like those that recommend videos to YouTube users. During oral arguments in February, the justices appeared uncertain about whether to weaken the shield the law provides.
Democratic Senator Ron Wyden, one of the law’s co-authors, has said it should not apply to such tools because they generate content themselves. In his view, Section 230 is meant to protect users and the websites that host and organize users’ speech, not to shield firms from the consequences of their own works or products.
Technologists, however, see it differently. They argue that chatbots merely take existing online material and reorganize it. Carl Szabo, Vice President and General Counsel of NetChoice, a tech trade group, contends that a weakened Section 230 would stifle innovation and set AI developers an impossible task.
That leaves experts pondering a possible middle ground. Courts could examine the context in which an AI model was used and decide case by case whether Section 230 protection should apply. For example, a chatbot that produces original output with no connection to online source material may fall outside the law’s shield.
Given these implications, how the court frames the case matters. Companies like OpenAI and Google should watch closely, as the ruling may shape the legal rules they face.