Boosting Acceptance: Human Input Key to AI for Public Services
A recent study has highlighted the crucial role of human input in increasing acceptance and adoption of artificial intelligence (AI) for public services. The research, conducted by academics from Birkbeck, University of London and the University of Exeter, reveals that citizens worry not only about the fairness of AI but also about potential human biases, and that they are more likely to support the use of AI when administrative discretion is seen as excessive.
The study, involving 2,143 participants in the UK, focused on citizens’ views on the use of AI in systems that process immigration visas and parking permits. The findings indicate that building more human involvement into AI systems tends to enhance acceptance. However, when substantial human discretion is introduced in the parking permit scenarios, respondents prefer limited human input.
Several system-level factors were found to have a significant impact on acceptance and perceived fairness: high accuracy of the AI system, the presence of an appeals process, increased transparency, reduced cost, non-sharing of data, and the absence of involvement from private companies.
Dr. Laszlo Horvath of Birkbeck, University of London stated, “Our results suggest that citizens resist the accumulation and sharing of their personal data. However, if the systems are accurate and cost-effective, citizens are willing to forgo heavy human supervision.”
Professor Susan Banducci added, “Our research provides valuable insights into technology acceptance in digital government and AI. Citizens who may generally be resistant to new technologies are more inclined to support greater human administrative involvement.”
Another significant finding is that citizens prioritized the cost and accuracy of the technology over concerns about human involvement, transparency, or data sharing. This suggests that citizens judge the legitimacy of a system by its efficiency and its ability to deliver precise, cost-effective results.
The implications of this study are wide-ranging, as many routine interactions with the government involve permit applications similar to those examined in the research. The findings have broad relevance for government services that employ AI.
Further research is needed to fully understand the intricacies of citizen acceptance of, and concerns about, AI in public services. As AI becomes more integrated into daily life, striking a balance between technological efficiency and human involvement will be crucial to achieving widespread acceptance and trust in AI-driven systems.
In summary, the study underscores the importance of increased human input in overcoming barriers to AI acceptance in public services. By addressing concerns about fairness and human biases, alongside improving the accuracy and cost-effectiveness of AI systems, governments can foster greater understanding, trust, and support for this transformative technology.