Title: Embracing Responsibility with Explainable AI: A Human Perspective
As the world embraces the transformative power of AI, understanding the decisions made by AI models becomes crucial. According to Madhu Narasimhan, EVP and head of innovation in Wells Fargo's Strategy, Digital and Innovation group, explainability is not just a technological challenge; it is fundamentally a human issue. Speaking during a fireside chat at VentureBeat Transform 2023 in San Francisco, Narasimhan emphasized the importance of humans being able to explain and comprehend how AI models arrive at their inferences.
Narasimhan shared Wells Fargo's approach to building AI models with explainability in mind. The company conducts extensive post-hoc testing on its virtual assistant, Fargo, to understand how the model interprets language, and an independent group of data scientists validates the models separately. By integrating explainability into the model development process, Wells Fargo aims to ensure that its virtual assistant accurately meets customer expectations.
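To make the idea concrete, here is a minimal sketch of what post-hoc explainability testing on a language model can look like, assuming a scikit-learn text classifier. The utterances, intents, and token-contribution check are hypothetical illustrations, not Wells Fargo's actual method.

```python
# A minimal sketch of post-hoc explainability testing for an intent
# classifier, assuming scikit-learn. The utterances and intents are
# hypothetical stand-ins for a virtual assistant's training data.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "what is my checking balance",
    "show my account balance",
    "transfer money to savings",
    "send funds to my savings account",
]
intents = ["balance", "balance", "transfer", "transfer"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(utterances, intents)

# Post-hoc check: which tokens pushed the model toward its prediction?
query = "move money into savings"
vectorizer = model.named_steps["tfidfvectorizer"]
classifier = model.named_steps["logisticregression"]
vec = vectorizer.transform([query])
pred = model.predict([query])[0]
class_idx = list(classifier.classes_).index(pred)

# For binary LogisticRegression, coef_ has shape (1, n_features) and is
# oriented toward classes_[1]; flip the sign when the predicted class
# is classes_[0].
weights = classifier.coef_[0] * (1 if class_idx == 1 else -1)
contributions = vec.toarray()[0] * weights
tokens = vectorizer.get_feature_names_out()

print(f"predicted intent: {pred}")
for i in np.argsort(contributions)[::-1][:3]:
    if contributions[i] > 0:
        print(f"  {tokens[i]}: {contributions[i]:.3f}")
```

Testing of this kind inspects a model after it has been trained, which is why an independent validation team can run it without touching the model-building pipeline itself.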
The goal, as Narasimhan pointed out, is to create AI models that mimic human behavior. Achieving this requires addressing biases inherent in human decision-making. Wells Fargo takes proactive steps to identify and manage bias at both the attribute and dataset levels during model development. Jana Eggers, cofounder and CEO of Nara Logics, however, cautioned against relying on data cleaning alone; she advocated for models that can be adjusted and tuned to recognize bias, much as humans can.
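Attribute-level and dataset-level checks can be illustrated with a short sketch, assuming pandas. The groups, outcomes, and parity metric below are hypothetical stand-ins, not the bank's actual controls.

```python
# A minimal sketch of attribute- and dataset-level bias checks, assuming
# pandas and a hypothetical dataset with a protected attribute ("group")
# and a model decision ("approved"). The point is to surface bias so it
# can be managed, rather than assuming cleaning removed it.
import pandas as pd

data = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})

# Attribute level: compare outcome rates across groups.
rates = data.groupby("group")["approved"].mean()
parity_gap = rates.max() - rates.min()
print(rates)
print(f"demographic parity gap: {parity_gap:.2f}")

# Dataset level: check whether groups are represented proportionately.
print(data["group"].value_counts(normalize=True))
```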
Understanding the capabilities and limitations of generative AI is crucial in a world where complex models are becoming increasingly prevalent. Wells Fargo acknowledges this and has developed an open-access toolkit that lets other financial institutions interpret their Python-based models. Narasimhan is excited about the opportunity to share this toolkit, as it encourages collaboration and promotes transparency in explaining AI models.
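The toolkit itself is not reproduced here. As a hedged illustration of the kind of model-agnostic interpretation such a toolkit enables, the sketch below uses scikit-learn's permutation importance on a made-up dataset; the feature names and data are assumptions for the example.

```python
# An illustration of model-agnostic interpretation of a fitted Python
# model, using scikit-learn's permutation importance on a hypothetical
# credit dataset. This is not the Wells Fargo toolkit's API.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # e.g., income, debt ratio, tenure
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # outcome driven by two features

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy: a large
# drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["income", "debt_ratio", "tenure"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Because techniques like this treat the model as a black box, they can be applied to another institution's model without access to its internals, which is what makes a shared interpretation toolkit practical.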
The conversation emphasized the need to embrace responsibility when developing and deploying AI systems. By prioritizing explainability and addressing biases, businesses can ensure that AI models align with human expectations. Ultimately, the collaborative efforts of organizations like Wells Fargo are instrumental in advancing the AI revolution while preserving the human element and ethical considerations.
In conclusion, explainable AI is not just about the technology; it is about empowering humans to understand and trust the decisions made by AI models. By consistently testing for explainability and addressing biases, businesses can build AI systems that better align with human behavior. The open-access toolkit developed by Wells Fargo further promotes transparency and collaboration in the field. As AI continues to shape our future, understanding the inner workings of these models becomes essential. Together, businesses and individuals can navigate the generative AI revolution responsibly, ensuring that AI is a force for good.