Microsoft recently implemented new rules barring law enforcement agencies from using its AI services for facial recognition. The decision follows growing concern over the bias and errors associated with the technology.
The tech giant specifically amended the terms of service for its Azure OpenAI offering to prohibit US police departments from using the service for facial recognition. The move aligns Microsoft with other industry leaders such as Amazon and IBM, which have also taken steps to limit the use of AI in law enforcement.
In a bid to prevent misuse of the technology, Microsoft has also banned real-time facial recognition on mobile cameras used by law enforcement globally. This covers body-worn and dash-mounted cameras used to identify individuals against a database of suspects or prior inmates.
Microsoft's decision comes amid mounting evidence of bias and inaccuracy in AI systems, including instances where facial recognition tools have misidentified individuals and led to wrongful arrests. Generative AI tools like GPT-4 have raised further concerns, given their potential to produce false claims and reflect racial bias.
Microsoft’s commitment to human rights and ethical AI practices has been evident for several years. In response to the Black Lives Matter protests in 2020, Microsoft President Brad Smith stated that the company would not sell facial recognition technology to police departments in the US until there is a national law governing its use.
With tech companies increasingly focused on protecting human rights in the wake of global events, the ban on facial recognition for law enforcement reflects a broader commitment to ethical practices. As protests against police brutality continue, the use of AI in law enforcement is likely to remain a contentious issue.