In a world increasingly driven by artificial intelligence (AI), demand for transparency in decision-making processes has surged. Mike Capps, co-founder of Howso, a company specializing in explainable AI, emphasizes the importance of transparency in AI systems that impact critical aspects of our lives. Capps asserts that just as people scrutinize the ingredients in their breakfast food, they should demand clear insights into how AI systems reach conclusions that influence healthcare, education, and more.
The prevalence of AI in crucial decision-making processes, such as medical procedures, credit approvals, and parole determinations, has become a cause for concern. Capps highlights a significant issue with many AI systems: they are opaque, often described as "black box" AI. These systems deliver final judgments without providing clear explanations of how those conclusions were reached, leaving users and stakeholders unaware of the decision-making criteria.
To challenge the prevalence of black box AI, Capps co-founded Howso, formerly known as Diveplane, in 2018. The company's distinguishing approach is what it calls attributable AI. Howso's AI engine allows users to trace a decision back to specific data points, making the decision-making process transparent and understandable. For example, in medical surgery recommendations, the system can pinpoint the 17 most crucial data points that influenced the decision, offering clarity and accountability.
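The article does not describe Howso's internal algorithm, but the idea of tracing a decision back to specific data points can be illustrated with a minimal, hypothetical sketch: an instance-based model (here, a simple k-nearest-neighbors classifier on invented toy data) where every prediction comes bundled with the exact training cases that drove it.

```python
# Hypothetical sketch of attributable, instance-based prediction.
# This is NOT Howso's actual engine; it only illustrates the concept of
# tracing a decision back to the specific data points that influenced it.
import math

# Toy training cases: (feature vector, label). All values are invented.
training_cases = [
    ([65, 140], "recommend"),
    ([70, 180], "recommend"),
    ([30, 150], "decline"),
    ([25, 130], "decline"),
]

def predict_with_attribution(query, k=3):
    """Return a prediction plus the training cases that drove it."""
    # Rank every training case by Euclidean distance to the query.
    ranked = sorted(
        training_cases,
        key=lambda case: math.dist(query, case[0]),
    )
    influential = ranked[:k]
    # Majority vote among the k most similar cases.
    labels = [label for _, label in influential]
    prediction = max(set(labels), key=labels.count)
    return prediction, influential

decision, evidence = predict_with_attribution([68, 170], k=3)
print("Decision:", decision)
for features, label in evidence:
    # Each line is a concrete data point behind the decision,
    # rather than an unexplained black-box output.
    print("  influenced by:", features, "->", label)
```

Unlike a black box model, the `evidence` list makes the decision auditable: a stakeholder can inspect exactly which historical cases the conclusion rests on.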
The practical applications of Howso's AI technology span various domains. Companies such as Scanbuy collaborate with Howso, leveraging its tool for customer intelligence and accurately predicting customer preferences in an explainable manner. Even educational institutions like N.C. State and UNC have recognized the value of transparent AI, adopting Howso's technology for specific projects.
In September, Howso made a significant move towards fostering transparency by open-sourcing its AI engine. This empowers users to design their own explainable AI-driven platforms, further expanding the accessibility of transparent AI technology.
Notably, Howso has attracted impressive clients, including industry giants like Mastercard and Mutua Madrileña, a Spanish insurance company. Additionally, the Virginia Department of Behavioral Health and Developmental Services has adopted Howso's technology to strengthen its decision-making processes. These partnerships demonstrate the broad applicability of, and growing demand for, AI systems that prioritize transparency and accountability.
Capps draws a parallel between transparency in AI and food labels, framing transparency as a fundamental requirement for responsible software development. Just as consumers rely on nutrition labels to make informed choices about their food, individuals should demand clear insights into AI-driven decisions that impact their lives. Transparency ensures reliability and accountability, and it enables the identification of potential biases or errors within AI systems.
One notable application where transparency is vital is in parole decisions. These determinations often rely on historical data, which may contain biases. When scaled up for efficiency, these biases can lead to unfair and discriminatory outcomes. Capps acknowledges the desire to streamline court processes but emphasizes that it should not come at the expense of perpetuating racial biases.
As the demand for AI continues to grow in various sectors, the call for transparency becomes stronger. Howso’s approach to transparent and accountable decision-making is gaining traction across industries, enabling users and stakeholders to trust and comprehend the AI systems that shape their lives.