In today’s rapidly evolving world, responsible AI has become an essential component of generative AI applications. As companies strive to innovate and stay ahead in the technological race, the ethical implications of AI cannot be ignored. Critics argue, however, that responsible AI considerations have taken a backseat as companies compete to release new products and services.
Major players in the AI domain, such as Microsoft, Google, and OpenAI, have recently integrated generative AI models into their offerings. Although these models may not be perfect from the start, Krishnaram Kenthapadi, Chief AI Officer and Chief Scientist at Fiddler AI, believes that companies like Microsoft and Google have the expertise and resources to incorporate responsible AI practices throughout the development process. They can also improve their models over time by learning from issues and feedback.
Nevertheless, Kenthapadi expresses concern for enterprises that take open source models or APIs and fine-tune them with proprietary data. These companies may lack the expertise or resources to thoroughly test such models before deployment or to monitor them afterward. To address these concerns, Kenthapadi emphasizes the need for tooling that helps these enterprises deploy AI models responsibly.
One approach to ensuring responsible deployment is to treat responsible AI as an integral part of the innovation process rather than as something separate. Kenthapadi suggests designing AI responsibly from the outset, much as privacy and security are built in by design. When responsible AI is viewed as an essential aspect of innovation, ethical considerations become embedded in the development process rather than bolted on afterward.
However, challenges still persist, particularly concerning hallucinations associated with large language models (LLMs). Kenthapadi highlights that while many enterprises are utilizing these models, hallucinations remain a significant issue. To address this, Fiddler AI and other organizations are exploring methods to verify model responses, utilizing additional models to detect fabrications and provide reliable information.
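The interview does not spell out how such verification works, but a common pattern is to route each answer through a second “verifier” model that judges whether the answer is supported. Below is a minimal sketch of that pattern, not Fiddler AI’s actual implementation; `call_llm` is a hypothetical placeholder for whatever chat-completion client you use.

```python
# Sketch of cross-checking an LLM answer with a second "verifier" model.
# `call_llm` is a hypothetical stand-in for any chat-completion API;
# wire it up to your provider. This is NOT Fiddler AI's implementation.

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (e.g., an OpenAI or Anthropic client)."""
    raise NotImplementedError("Connect this to your LLM provider.")

VERIFIER_PROMPT = """You are a strict fact checker.
Question: {question}
Proposed answer: {answer}
Reply with exactly one word: SUPPORTED if the answer is factually consistent
with well-established knowledge, or UNSUPPORTED otherwise."""

def answer_with_verification(question: str) -> dict:
    answer = call_llm(question)
    verdict = call_llm(VERIFIER_PROMPT.format(question=question, answer=answer))
    return {
        "answer": answer,
        "flagged_as_hallucination": "UNSUPPORTED" in verdict.upper(),
    }
```

A flagged response can then be withheld, regenerated, or escalated to a human reviewer, depending on how much risk the application can tolerate.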
When it comes to the risks enterprises face when using these models, Kenthapadi emphasizes that the level of risk depends on the specific application. If a model is used internally, or by customer support agents to draft better responses, there may be some tolerance for hallucinations. In critical applications such as medical diagnosis, however, a much higher bar is necessary to minimize the impact of hallucinations.
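One illustrative way to operationalize this risk tiering, purely a sketch and not something from the interview, is to encode per-application tolerance as configuration, so stricter use cases get stricter gates. The use-case names and thresholds below are hypothetical.

```python
# Hypothetical per-application tolerance for hallucinated responses,
# expressed as the maximum acceptable fraction of flagged outputs.
HALLUCINATION_TOLERANCE = {
    "internal_drafting": 0.10,   # humans review everything before use
    "customer_support": 0.02,    # agent in the loop, but customer-facing
    "medical_diagnosis": 0.0,    # effectively zero tolerance
}

def passes_gate(use_case: str, flagged_rate: float) -> bool:
    """Allow deployment only if the observed flag rate is within tolerance."""
    return flagged_rate <= HALLUCINATION_TOLERANCE[use_case]
```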
Ethical considerations are also crucial. Enterprises must not only test these models before deployment but also monitor them in production, evaluating performance, robustness, bias, and the inadvertent disclosure of Personally Identifiable Information (PII). Kenthapadi stresses that monitoring along these responsible AI dimensions must continue for as long as the applications remain deployed.
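As one concrete example of the PII dimension, a first-pass check can scan model outputs against known patterns before they reach users. The sketch below is purely illustrative; production systems rely on dedicated PII-detection services rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only; real detectors are far more robust.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scan_for_pii(text: str) -> dict[str, list[str]]:
    """Return matches per PII category found in a model response."""
    hits = {name: pat.findall(text) for name, pat in PII_PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}

# Example: flag a response before it reaches the user.
response = "Contact John at john.doe@example.com or 555-123-4567."
if scan_for_pii(response):
    print("PII detected:", scan_for_pii(response))
```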
Tools like those offered by Fiddler AI can help measure these dimensions and support continuous monitoring. Proactively assessing responsible AI metrics and heading off potential ethical pitfalls is critical for enterprises that want to harness the power of generative AI while ensuring ethical and responsible deployment.
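The sketch below illustrates the continuous-monitoring idea generically; it is not Fiddler AI’s API. It keeps a rolling window over flagged responses (such as the hallucination or PII flags sketched above) and raises an alert when the flag rate exceeds the tolerance configured for the use case.

```python
from collections import deque

class MetricMonitor:
    """Generic rolling-window monitor for a responsible-AI metric,
    e.g., the fraction of responses flagged for hallucination or PII.
    A sketch of the continuous-monitoring idea, not Fiddler AI's API."""

    def __init__(self, window: int = 500, threshold: float = 0.05):
        self.flags = deque(maxlen=window)  # 1 = flagged response, 0 = clean
        self.threshold = threshold

    def record(self, flagged: bool) -> None:
        self.flags.append(1 if flagged else 0)

    def alert(self) -> bool:
        """True when the rolling flag rate exceeds the allowed threshold."""
        return bool(self.flags) and sum(self.flags) / len(self.flags) > self.threshold

monitor = MetricMonitor(window=500, threshold=0.05)
monitor.record(flagged=True)  # e.g., output of the checks sketched above
if monitor.alert():
    print("Responsible-AI metric above threshold; investigate recent traffic.")
```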
By treating responsible AI as an integral part of innovation, enterprises can navigate the challenges that come with generative AI applications. Continuous monitoring and responsible AI tooling will not only improve the reliability and performance of AI models but also safeguard against ethical lapses. As the race to innovate continues, responsible AI practices must remain non-negotiable for organizations harnessing the power of artificial intelligence.