Government Neglects Racial Biases in AI, Exacerbating Inequality
As governments race to implement artificial intelligence (AI) technology to enhance public services and improve efficiency, there is a pressing concern that they have failed to address the racial biases embedded within these systems. While much attention has been given to the potential threats to privacy and civil liberties posed by AI, there has been insufficient focus on its negative impact on minority communities and how it perpetuates racial divisions.
To tackle this issue, government leaders must first confront the reality that AI platforms, increasingly utilized in areas such as public safety, hiring and recruiting, and data analysis, often carry biases that disproportionately harm racial minorities. Confronting these biases requires more than occasional audits or vague statements about technological equity and justice. It requires proactive measures from government leaders to avoid overreliance on AI applications known to contain such biases.
It is widely acknowledged that many AI systems are flawed and perpetuate biases. Stephanie Dinkins, an artist researching AI-powered robots, highlights how biases become ingrained and automatic within these systems. She emphasizes the importance of nuanced recognition of Black individuals within algorithmic ecosystems. This recognition should extend to mayors and police chiefs using facial recognition AI in their cities.
Other professionals in the field of AI, particularly women and those from minority communities, have raised similar concerns and faced backlash for doing so. Timnit Gebru, a Black researcher, says she was forced out of Google after co-authoring a paper on the risks of bias in large language models. Margaret Mitchell, her co-author and co-lead of Google's Ethical AI team, was pushed out shortly afterward. Bias in AI has drawn public criticism before, such as when Google's photo app labeled images of Black people as gorillas, underscoring the urgent need for action.
The problems of AI extend beyond misidentifying individuals. AI algorithms erroneously reject creditworthy applicants for small-business loans. Biased data can lead economic development departments to overlook low-income neighborhoods when deciding where to incentivize projects. And qualified job applicants are often screened out early in the hiring process by keyword filters that encode bias. Public officials must ensure that discrimination of this nature is not allowed to persist.
Furthermore, AI is reshaping the job market, eliminating jobs in sectors where minorities are overrepresented. A McKinsey Global Institute report indicates that truck driving, food services, and office support positions, all of which employ disproportionate numbers of workers from minority communities, are particularly vulnerable. This places a greater burden on local governments to address the problems that accompany unemployment, including hunger, homelessness, and insufficient healthcare and workforce development.
While some progress has been made, with legislation proposed or enacted in several states and the Biden-Harris administration's Blueprint for an AI Bill of Rights, more needs to be done. The adoption of AI must not perpetuate bias against protected classes, and AI systems must be proven safe, secure, and trustworthy before they are deployed.
As governments continue to embrace AI, it is crucial that they confront and rectify the biases within these systems. This requires proactive approaches, ongoing assessment, and a commitment to ensure that AI is implemented in a manner that promotes equity and justice. Only then can governments truly harness the benefits of AI while mitigating its negative impacts on racial inequality.