Large Language Models (LLMs) like ChatGPT gained significant attention in 2023 thanks to their advanced language capabilities. However, many experts still question whether the technology is ready for prime time in global financial markets. In this article, we explore the potential of ChatGPT in capital markets and the risks it brings in its current form.
One major risk of ChatGPT is the compromise of data privacy. Data is among a bank's most valuable assets, and banks guard it carefully; any compromise that exposed sensitive information to competitors would be devastating. The way ChatGPT handles user data is therefore highly problematic for the financial services industry, where confidentiality is key: ChatGPT aggregates users' input data and uses it to continuously train the model so that it can provide accurate outputs. In capital markets, however, this pooling of data across institutions is strictly prohibited, and banks need platforms that can guarantee data confidentiality.
Another challenge with ChatGPT is the need for human expertise to ensure its output is useful to traders and salespeople. In niche capital markets, banks require platforms purpose-built by experts to recommend actions that salespeople or traders may not have considered. Such platforms let banks suggest appropriate opportunities to clients at the right time, increasing client loyalty and mindshare. With ChatGPT, by contrast, a user must know precisely what to ask to get the desired output, and this is a skill in itself.
Consistency of information is crucial in financial markets, and banks must provide credible, accurate, and consistent recommendations and advice to their clients. ChatGPT's non-deterministic output is therefore a growing risk factor: its inability to reliably produce the same answer to the same question leaves banks exposed. LLMs like ChatGPT are trained to produce an answer that sounds plausible rather than one that is strictly correct.
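To see where this non-determinism comes from: LLMs choose each next token by sampling from a probability distribution, and a "temperature" setting controls how random that sampling is. The sketch below is a minimal, hypothetical illustration using toy numbers (it is not ChatGPT's actual decoder): at normal temperature the same scores can yield different tokens across runs, while a near-zero temperature collapses to a single deterministic choice.

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=None):
    """Sample one token index from raw scores (logits) via temperature-scaled softmax.

    Higher temperature flattens the distribution (more varied output);
    temperature near zero approaches greedy, deterministic decoding.
    """
    rng = rng or random.Random()
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling: walk the cumulative distribution
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Toy scores for three candidate next tokens (illustrative values only)
logits = [2.0, 1.5, 0.5]

# At temperature 1.0, repeated runs pick different tokens
samples = {sample_token(logits, 1.0, random.Random(seed)) for seed in range(50)}

# Near-zero temperature always picks the single highest-scoring token
greedy = {sample_token(logits, 1e-6, random.Random(seed)) for seed in range(50)}
```

Here `samples` contains more than one token index while `greedy` contains only the top-scoring one, which is why two users asking an LLM the identical question can receive different answers.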
Lastly, banks need to know why they are given certain intelligence and how the models underlying the AI technology work. Model explainability is essential in data analytics, yet ChatGPT generates voluminous output without explaining why it produced it. This raises compliance challenges, as regulators require banks to justify the decisions they make.
In conclusion, while LLMs like ChatGPT offer exciting opportunities for AI, they need to be tailored to the nuances of capital markets. Combining them with specialized human knowledge can increase their usefulness and reduce the risks they pose to banks. A measured approach is necessary: banks should tread carefully and build the necessary foundations now.