Banks and financial services institutions are increasingly turning to artificial intelligence (AI) and generative AI tools like ChatGPT to drive innovation. JPMorgan Chase CEO Jamie Dimon emphasized the importance of new technologies, particularly AI and data, in his latest shareholder letter, and the bank has already deployed more than 300 AI use cases across areas including marketing, customer experience, risk management, and fraud prevention.
The emergence of generative AI, large language models (LLMs), and ChatGPT has caught the attention of financial institutions. Dimon expressed interest in using LLMs such as ChatGPT to boost employee productivity through human-centered, collaborative tools. Caution is still essential, however: adoption must prioritize security, responsible AI practices, and stakeholder needs, because the clear benefits of generative AI come with real risks.
Earlier this year, major financial institutions, including JPMorgan Chase, Citi, Bank of America, Wells Fargo, and Goldman Sachs, placed restrictions on the use of ChatGPT by their employees. The conservative approach stems from the rigorous regulations banks must adhere to, such as know-your-customer (KYC) and anti-money-laundering (AML) laws. Security and compliance are of utmost importance in the banking industry.
Generative AI tools like ChatGPT and GPT-4 carry well-documented risks, chief among them generating false or misleading content. These models can hallucinate, confidently asserting facts they have invented, and they are difficult to interpret: with hundreds of billions of parameters, there is no straightforward way to trace how a given response was produced. Moreover, models trained on publicly available content such as Wikipedia and Reddit can inherit the biases of that content, raising fairness concerns.
Another concern is that calling generative AI models through their APIs requires banks to send information outside their private data centers, creating compliance risks around privacy and data residency. These risks are not hypothetical: OpenAI, the company behind ChatGPT, disclosed a security breach that exposed payment information for its subscription service, including usernames, emails, payment addresses, partial credit card details, and expiration dates.
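To make the data-residency concern concrete, the sketch below shows the kind of guardrail a bank might place in front of an external API call: sensitive fields are masked before any text leaves the private network. This is a minimal illustration, assuming the official OpenAI Python client; the redaction patterns and the `scrub_pii` helper are hypothetical, and a production system would use a vetted data-loss-prevention service rather than ad hoc regexes.

```python
import re
from openai import OpenAI  # assumes the official OpenAI Python client

# Hypothetical, deliberately simplistic redaction patterns; a real deployment
# would rely on a vetted DLP service instead of hand-written regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ACCOUNT": re.compile(r"\bACCT-\d{6,}\b"),  # assumed internal account format
}

def scrub_pii(text: str) -> str:
    """Mask sensitive substrings before the text leaves the private network."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_external_model(prompt: str) -> str:
    # Everything in `prompt` is transmitted to a third-party data center,
    # which is exactly the compliance exposure discussed above.
    safe_prompt = scrub_pii(prompt)
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": safe_prompt}],
    )
    return response.choices[0].message.content

print(scrub_pii("Customer jane.doe@example.com disputes a charge on 4111 1111 1111 1111."))
# -> "Customer [EMAIL REDACTED] disputes a charge on [CARD REDACTED]."
```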
Given these challenges and risks, banks and financial services firms should approach generative AI adoption with caution. Customer-facing applications should wait; a more prudent path is to experiment with internal operations that do not touch sensitive data. Marketing, for example, can use generative AI's creativity to improve campaign results, and service desks can use natural-language prompts to speed up issue resolution, cutting costs and raising efficiency, as the sketch below illustrates.
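As a concrete version of the service-desk idea, the sketch below drafts a triage summary and a suggested first step from a ticket description using a natural-language prompt. The system prompt and the sample ticket are illustrative assumptions, not a reference to any bank's actual workflow, and it reuses the same hedged OpenAI client as above.

```python
from openai import OpenAI

client = OpenAI()

# Illustrative system prompt; a real deployment would be tuned and evaluated
# against the bank's own ticket taxonomy and escalation rules.
TRIAGE_PROMPT = (
    "You are an internal IT service-desk assistant. Given a ticket, return "
    "a one-line summary, a category (hardware/software/access), and a "
    "suggested first troubleshooting step."
)

def triage_ticket(ticket_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": TRIAGE_PROMPT},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content

# Hypothetical ticket containing no customer data, in line with the
# "internal, non-sensitive first" approach described above.
print(triage_ticket("Trading dashboard won't load after this morning's patch; "
                    "colleagues on the same floor are unaffected."))
```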
Generative AI can also be a valuable tool for employees seeking insights from internal proprietary content. Morgan Stanley has piloted a program using OpenAI's GPT-4 model that lets financial advisors ask questions against company-generated research reports and commentary. As the technology stabilizes, more sophisticated projects can follow.
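The pattern behind the Morgan Stanley pilot is commonly called retrieval-augmented generation: retrieve the most relevant internal documents first, then have the model answer only from that retrieved context. The sketch below shows the idea at its smallest, assuming the OpenAI Python client and its embeddings endpoint; the two research snippets are invented placeholders, not Morgan Stanley content.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

# Invented placeholder snippets standing in for internal research reports.
DOCUMENTS = [
    "Q2 research note: regional bank net interest margins are compressing.",
    "Commentary: investment-grade credit spreads widened 15bp this month.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

doc_vectors = embed(DOCUMENTS)  # computed once; cached or indexed in a real system

def answer(question: str) -> str:
    # Retrieve the snippet most similar to the question (cosine similarity).
    q = embed([question])[0]
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    context = DOCUMENTS[int(np.argmax(sims))]
    # Constraining the model to retrieved context helps limit hallucinations.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Answer only from the provided research "
                                          "excerpt. If it is not covered, say so."},
            {"role": "user", "content": f"Excerpt: {context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer("What is happening to bank net interest margins?"))
```

Grounding answers in retrieved internal documents, rather than the model's open-web training data, is also what makes this kind of pilot more defensible under the compliance constraints discussed earlier.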
While the pace of generative AI innovation is impressive, the risks associated with hallucinations and security breaches require a thoughtful approach from banks. Rushing into adopting generative AI would likely be a mistake, and instead, starting with internal applications that don’t involve sensitive data allows for real benefits while giving the technology time to mature. By prioritizing security, compliance, and stakeholder needs, banks can reap the rewards of generative AI while mitigating its potential risks.