Study Shows Humans Perceive Responsibility in AI-Assisted Decision Making
In a recent study, researchers examined how human users perceive responsibility in AI-assisted decision making. The study, led by Louis Longin from the Chair of Philosophy of Mind, investigated who is deemed responsible when real-life AI assistants are used in decision making.
While autonomous AI systems might one day make decisions on their own in futuristic scenarios, current AI assistants, such as navigation and driving aids, primarily provide supportive information to human users. Despite this distinction, the study found that even when humans view AI-based assistants purely as tools, they still assign them partial responsibility for decisions.
The team, consisting of philosophers Louis Longin and Dr. Bahador Bahrami, along with Prof. Ophelia Deroy, Chair of Philosophy of Mind, conducted the study with 940 participants. The participants were presented with three scenarios in which a human driver used either a smart AI-powered verbal assistant, a smart AI-powered tactile assistant, or a non-AI navigation instrument. They were then asked to rate to what degree they saw the navigation aid as responsible and whether they considered it a tool.
The results revealed an interesting contradiction in participants' perceptions. While they strongly regarded the smart assistants as mere tools, they also viewed them as partly responsible for the successes and failures of the human drivers who consulted them. No such division of responsibility was observed for the non-AI-powered instrument. Surprisingly, the smart assistants were seen as more responsible for positive outcomes than for negative ones.
Dr. Bahrami suggests that people might apply different moral standards for praise and blame. When a potential accident is avoided and no harm occurs, individuals tend to assign credit more easily than blame to non-human systems. The study also found no significant difference in the perception of responsibility between smart assistants that used language and those that communicated through tactile signals.
The researchers highlighted the implications of their findings for the design of AI assistants and for the social discourse around them. They believe organizations developing and releasing smart assistants should consider the impact on social and moral norms. It is important to understand how individuals perceive responsibility and how these perceptions might shape interactions with AI systems.
The study demonstrates that AI assistants are seen as more than just recommendation tools, though they are still judged by standards distinct from those applied to humans. As technology continues to advance, it is crucial to explore these perceptions further and address the ethical implications that may arise.
In conclusion, this study sheds light on how humans perceive responsibility in AI-assisted decision making. By understanding these perceptions, developers and organizations can better design AI systems that align with moral and social norms, supporting responsible interactions between humans and AI assistants.