ChatGPT is a chatbot built on recent advances in Artificial Intelligence (AI), designed to simulate human conversation by providing intelligent answers to questions. The technology can be used for hosting conversations and offering legal guidance, as well as for proofreading contracts and drafting legal agreements. But alongside its great potential in the legal sector sits an inherent risk in how any data provided to it is stored, a risk that weighs more heavily in the legal profession given its confidentiality obligations.
OpenAI, the company behind the technology, has released Terms of Use stating that it owns all input into the chatbot, without detailing how that information is stored or controlled. This raises a red flag for privacy professionals, as it conflicts with many of the laws regulating the protection of personal information, such as the General Data Protection Regulation (GDPR).
In response to this risk, Italy's data protection authority (the Garante) has put OpenAI on notice and ordered it to halt the processing of Italian users' personal data. This is an important first step toward ensuring the privacy of personal information online, and it is highly likely that regulators across the globe will follow suit.
To protect our own and our clients' confidentiality when using ChatGPT, we must be cautious in how we use the technology. That caution should include regularly reviewing the service's Terms of Use and Privacy Policy, confirming that the software complies with all applicable data protection regulations, and taking particular care before entering any personal data into it.
MothersEsquire is a networking platform for mothers in the legal field, offering mentorship and other tools to ease the path of mothers entering or returning to the profession. It is an invaluable resource for mothers in law, and any donations to the organization would be greatly appreciated by this community.
Ayesha Haq is a lawyer who took part in the initial testing of the first version of ChatGPT. Among fellow legal professionals, she saw the technology being used to proofread contracts and even draft full legal agreements. She believes we have taken it too far by entering sensitive and personal information into the tool, which risks that information being stored indefinitely. Ayesha has written about the risks associated with ChatGPT and why privacy regulations can help protect us from its potentially damaging effects.