Iliff Innovation Lab Launches AI TRUST, New Review Process to Aid Businesses in Ethical AI Adoption
Iliff Innovation Lab recently unveiled AI TRUST, a novel review framework meant to assist businesses in evaluating and certifying artificial intelligence (AI)-based technologies for their productivity, safety, and equity. The goal is to empower companies with knowledge and tools essential for navigating the intricate landscape of AI technology. The Iliff Innovation Lab draws upon its expertise at the intersection of technology and ethics to create AI TRUST and explore innovative approaches to AI and related technologies.
AI TRUST engages organizational leaders in a collaborative review process to evaluate and score AI technology, enabling companies to anticipate and mitigate risks associated with data collection, privacy, bias, and legal issues in a swiftly changing regulatory environment. Research indicates that only 3 in 10 enterprise executives have comprehensive AI policies and protocols in place, highlighting the need for frameworks like AI TRUST.
Trained on large volumes of human-generated data, AI technologies often reflect inherent biases, necessitating a closer look at their ethical and legal implications. Dr. Michael Hemingway, director of design and data science at the Iliff School of Theology, stressed the importance of ongoing human oversight and of pressure-testing AI solutions to prevent biases and inaccuracies. AI TRUST addresses this gap by providing knowledge and collaborative support so teams can better understand and proactively adjust their AI solutions.
The Iliff School of Theology, with over 125 years of experience, has been a pioneer in theological education focusing on peace, justice, and ethics. AI TRUST complements the school’s Diversity, Equity, and Inclusion (DEI) training, helping companies develop initiatives to foster inclusivity and celebrate differences among employees.
The AI TRUST process benefits organizations by fostering trust and ensuring quality technology outputs for partners and customers. It offers a step-by-step approach to informing and pressure-testing planned software rollouts, through which companies can earn an AI TRUST certification validating the safety, responsibility, and reliability of their technology.
WellPower, a prominent community mental health center in Colorado, used the AI TRUST review process to assess its AI tools' alignment with the organization's values. By managing bias in its AI systems and ensuring adherence to professional and legal standards, WellPower can now confidently deliver quality care to its clients while proactively addressing ethical and regulatory concerns.
The Iliff Innovation Lab's AI TRUST is poised to change how companies adopt AI technology by promoting ethical practices, accountability, and equity in the tech industry. Through strategic collaboration and innovative methodologies, AI TRUST aims to create a more responsible technological landscape that fosters both trust and innovation.