Enterprises Face Hesitation in Adopting AI Solutions as IBM Tackles AI Governance
IBM is taking on the challenging task of AI governance to ease the hesitation enterprises feel about adopting AI solutions. The difficulty lies in managing the cost of governance while accounting for the behaviors of large language models (LLMs): hallucinations, data privacy violations, and the potential to output harmful content all pose significant concerns for organizations.
During an event in Zurich, Elizabeth Daly, the Research Manager of IBM Research Europe’s Interactive AI Group, emphasized the company’s commitment to developing AI that developers can trust. Daly noted the complexity of measuring and quantifying harmful content: “It’s easy to measure and quantify clicks; it’s not so easy to measure and quantify what is harmful content.”
IBM recognizes that generic governance policies are insufficient for controlling LLMs. The company therefore aims to develop LLMs that can use the law, corporate standards, and each enterprise’s internal governance as mechanisms of control. This approach allows governance to go beyond corporate standards and incorporate the ethical considerations and social norms of the country, region, or industry in which the LLM is deployed.
By drawing on context-providing documents, LLMs can be rewarded for staying relevant to their current task, improving their ability to detect, and fine-tune away, outputs containing harmful content that may violate social norms. This approach even enables a model to flag when its own outputs could be deemed harmful.
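The relevance idea above can be illustrated in miniature. The following is a hedged Python sketch, not IBM’s implementation: it scores a candidate model output against a set of context documents using a simple bag-of-words cosine similarity, so that off-topic outputs receive a low score. The function names, the example documents, and any threshold a real system would apply are all hypothetical.

```python
import math
from collections import Counter


def bag_of_words(text: str) -> Counter:
    """Tokenize naively into lowercase word counts (illustrative only)."""
    return Counter(text.lower().split())


def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)


def relevance_score(output: str, context_docs: list[str]) -> float:
    """Highest similarity between the output and any context document."""
    out_vec = bag_of_words(output)
    return max(cosine_similarity(out_vec, bag_of_words(d)) for d in context_docs)


# Hypothetical governance check: a low score suggests the output has
# drifted away from the enterprise's context documents.
context = ["Employees must encrypt customer data at rest and in transit."]
on_topic = "Customer data should be encrypted at rest and in transit."
off_topic = "Here is a recipe for chocolate cake."

assert relevance_score(on_topic, context) > relevance_score(off_topic, context)
```

A production system would use learned embeddings rather than word counts, but the shape of the check is the same: compare each output against the governing context and penalize drift.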
IBM has prioritized training its LLMs on trustworthy data, implementing rigorous systems to detect, control, and audit potential biases at every stage. In contrast, off-the-shelf foundation models are often trained on biased data, and such bias is difficult to eliminate even after training.
In alignment with the proposed EU AI Act, IBM aims to link AI governance with the intentions of its users. The company emphasizes the fundamental role of usage in its governance model, recognizing that some users may employ its AI for summarization tasks while others utilize it for classification purposes.
In a rapidly advancing technological landscape, IBM’s efforts to redefine fairness in AI governance offer a path for enterprises navigating the complexities of AI adoption. By balancing the cost of governance against the behaviors of large language models, IBM points toward a future where AI can be trusted, ethical, and aligned with individual ethics and social norms.
Disclaimer: This article is intended to provide general information only and does not constitute legal or professional advice. Readers are encouraged to seek appropriate advice before making any decisions based on the information provided.