Title: AI Trust Research: Enabling Independent Decision-Making in Uncontrolled Environments
Decisions play a crucial role in our daily lives, shaping our actions and determining outcomes. In today’s rapidly advancing technological landscape, the integration of artificial intelligence (AI) systems has become increasingly prevalent, and the ability to trust AI to make independent decisions in uncontrolled environments has become a significant area of research. Recognizing the importance of this challenge, a team led by Raytheon BBN, with partners Kairos Research, MacroCognition, and Valkyries Austere Medical Solutions, has embarked on a groundbreaking project focused on AI trust research and decision-making attributes.
Alice Leung, the principal investigator at Raytheon BBN, emphasizes the project’s aim to go beyond conventional AI capabilities. Rather than training AI on labeled data to identify specific objects or conditions, the focus is on developing AI systems that can make decisions independently when faced with unpredictable situations. Achieving this requires a deep understanding of how human experts evaluate complex information and make tough trade-offs to act decisively at critical decision points.
To gather insights into decision-making, the research team will employ cognitive interviewing techniques, engaging medical professionals and first responders as subject matter experts. By examining the cognitive processes and attributes these experts draw on when making difficult decisions, the team aims to design scenario-based experiments that highlight differences between individuals and reveal how alignment of attributes affects a person's willingness to delegate decisions to others.
Because decision-making varies from person to person, no one-size-fits-all trusted AI model is possible. Instead, the team envisions adapting AI systems to the user and domain at hand. By tuning an AI’s attributes, such as risk tolerance, process focus, or flexibility in changing plans, the researchers believe a system can be better aligned with the preferences and characteristics of both individuals and groups.
This project, sponsored by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Lab, involves collaboration between multiple teams. Each team is dedicated to developing prototype AI decision-makers that can be fine-tuned to match pre-defined attributes. The research products from each team will be integrated and evaluated to determine how effectively the algorithmic agents can replicate decisions consistent with the target human attributes in challenging scenarios.
Another critical aspect of the research is evaluating the trust humans place in these aligned AI agents compared to baseline agents or other human decision-makers. To test trust, human experts will be presented with records of decisions made in challenging scenarios, without being told whether the decision-maker was an AI or a fellow human.
The significance of this research extends beyond the confines of AI alone. By better understanding the decision-making process and building trust in AI systems, the possibilities for applications in fields such as healthcare, emergency response, and various complex domains become even more promising.
The work on this contract is being carried out in Cambridge, Massachusetts; Dayton, Ohio; and Anniston, Alabama. With the collaboration of experts and the dedication of research teams, the project aims to unlock the full potential of AI in independent decision-making, shaping a future where AI and human expertise complement each other seamlessly and decisions are made confidently even in the face of uncertainty.