On May 18, 2023, the Supreme Court issued a decision with substantial implications for the tech industry. In the paired cases of Gonzalez v. Google and Twitter v. Taamneh, the justices ruled in favor of Big Tech on the question of whether social media platforms can be held liable for allegedly enabling terrorist activity.
Notably, however, the court declined to rule on the scope of Section 230 of the Communications Decency Act, which shields online service providers from liability for content posted by their users. That silence raises a pressing question: will these protections extend to services built on generative artificial intelligence (AI), such as chatbots? The issue has come into sharp focus in recent months amid controversies surrounding ChatGPT, an AI chatbot.
ChatGPT, developed by the AI company OpenAI, has drawn heavy criticism for its allegedly insufficient moderation and its potential to facilitate cyberbullying. Individuals have also filed complaints with the FTC and other bodies alleging that ChatGPT can generate sexually suggestive content and obscene language of a kind that human-moderated services would not permit.
This has prompted a further question: should ChatGPT benefit from Section 230 protections at all, given that its output is generated by AI? The answer remains unsettled, and there is reason to doubt that Section 230 will shield companies like OpenAI. The statute immunizes providers only for content "provided by another information content provider," and a chatbot arguably creates content itself rather than merely hosting it. Because the law's language does not directly address AI-generated material, such companies may face continued defamation claims unless courts or Congress add clarity to the existing legislation.
The potential application of Section 230 to AI technology matters because, if the law applies, companies like OpenAI would be far less likely to be held legally accountable for harms arising from their products and services. As the pace of the tech industry accelerates, legislators will need to weigh the implications of applying rules written for more traditional forms of media to AI technology, in order to ensure legal accountability and safeguard consumers' rights.