As the world continues to adopt transformative technology like ChatGPT, there are growing concerns about the legal implications of generative artificial intelligence platforms. Reports have already surfaced regarding misrepresentations and defamatory comments generated by ChatGPT, raising novel legal questions about whether defamation and product liability theories can address false statements made by these models.
For almost three decades, Section 230 immunity has stymied nearly every civil claim brought against big tech for republishing third-party material, but new large language models (LLMs) operate in a fundamentally different way. They don't present users with third-party material; rather, they generate original content in response to user queries.
These models, like any technology, have inherent weaknesses, including hallucinations, inconsistent accuracy, and the risk of bias against underrepresented groups. Are we on the cusp of a flood of lawsuits over actionable content created by LLMs such as ChatGPT? In answering this question, the conversation to date has focused on whether Section 230 applies to ChatGPT and other LLMs.
This gap in existing law calls for AI-specific regulation that affords victims relief when LLMs make false statements. Time will tell whether lawmakers pass legislation that grants LLM providers immunity or legislation that sets forth legal standards for these models. Either way, the days of technology providers using Section 230 to dispose of civil claims quickly, pre-discovery, on a motion to dismiss appear numbered.