On March 27, 2023, the Punjab and Haryana High Court made a notable decision when it rejected bail for Jaswinder Singh, who had been charged with assault and murder. As part of his order, Justice Anoop Chitkara referred to a 'jurisprudence' propounded by an Artificial Intelligence (AI) chatbot known as ChatGPT. ChatGPT is an AI application created by the US-based company OpenAI and used mainly for human-like conversation. Using a sophisticated algorithm trained on a massive amount of data, ChatGPT can generate a wide variety of texts, such as articles, stories and screenplays.
In the case of Jaswinder Singh, ChatGPT suggested that bail would likely be refused if the assailants had acted with cruelty in their assault. The judge's reference to an AI chatbot attracted considerable attention in the legal world, and it raises an important question: what are the implications of such a decision?
It is evident that ChatGPT is not yet equipped to make legal decisions. It relies on publicly available information, which may contain a great deal of misinformation. It cannot access information that is private or behind paywalls, nor does it have access to the most up-to-date information, since its training data extends only to 2021.
Furthermore, ChatGPT proved ill-equipped to answer other legal questions, such as whether same-sex marriage should be legalised in India, whether the Supreme Court of India should uphold the abrogation of Art 370, or whether the Supreme Court should review its NJAC judgement; it provided mostly basic, unoriginal and outdated answers. It is therefore questionable whether a constitutional court should deprive someone of life or liberty on the basis of unproven ChatGPT 'jurisprudence'.
The implications of courts taking ChatGPT into account are potentially far-reaching. Allowing a technology such as ChatGPT to inform legal decisions raises serious ethical questions. Can this AI be trusted to make the right decisions? How will its accuracy and ethical soundness be maintained over time? What quality-control measures must be in place?
Chief Justice of India D.Y. Chandrachud has spoken in favour of AI and its potential for justice delivery. The Supreme Court is currently assisted by applications such as the Supreme Court Vidhik Anuvaad Software (SUVAS) and the Supreme Court Portal for Assistance in Court Efficiency (SUPACE). However, while AI can be very effective in assisting the court in its decision-making, one must remember that the ultimate decision must remain with human judges.
ChatGPT has garnered enormous attention, and the legal world is right to be cautious about its implications. For now, ChatGPT may not be equipped to make the right decisions, but the possibility of its use in the court system in the future cannot be discounted. It is therefore important that the use of AI in court systems be well regulated, and that the ultimate decision remain with qualified human judges and lawyers.