New Zealand Police’s Use of AI Raises Concerns About Privacy and Bias
The New Zealand police's use of artificial intelligence (AI) is drawing attention to modern policing methods and raising concerns about privacy and bias. An Official Information Act request by Radio New Zealand recently revealed that the police are using SearchX, an AI tool that can establish connections between suspects and their wider networks.
SearchX works by quickly identifying relationships between individuals, locations, criminal charges, and other factors that may pose a risk to police officers. The police claim that SearchX is a crucial component of a NZ$200 million front-line safety program developed after the tragic death of police constable Matthew Hunt in West Auckland, as well as other incidents of gun violence.
However, the use of SearchX and other AI programs has sparked questions about the intrusive nature of this technology, its inherent biases, and whether New Zealand's existing legal framework is adequate to protect the rights of all individuals. At present, the public has only limited knowledge of the specific AI programs the police use: some have been publicly acknowledged, while others remain undisclosed.
The police have acknowledged the use of Cellebrite, a controversial phone-hacking technology. This program extracts personal data from iPhone and Android devices and can access more than 50 social media platforms, including Instagram and Facebook. Another acknowledged AI tool is Briefcam, which aggregates video footage and supports facial recognition and vehicle license plate reading. Briefcam lets the police focus on and track individuals or vehicles of interest, significantly reducing the time needed to analyze CCTV footage.
Additionally, the police previously experimented with Clearview AI, a tool that utilized publicly accessible social media photographs to identify individuals, but it was discontinued due to controversy surrounding its trial, which had been conducted without clearance from police leadership or the Privacy Commissioner.
While AI holds promise for predicting and preventing crime, its use by the police also raises concerns. Programs like Cellebrite and Briefcam pose significant privacy issues, as they enable law enforcement to access and analyze personal data without consent or knowledge. Despite this, police use of these programs is currently legal under the Privacy Act 2020, which allows government agencies, including the police, to collect, withhold, use, or disclose personal information where necessary for the maintenance of the law.
Privacy is not the only concern surrounding these AI programs. There is a tendency to assume that AI decisions are more accurate than human ones, which may lead investigations to focus solely on suspects identified by AI systems and to neglect other viable leads. The algorithms underlying these programs can also introduce biases, leading to misidentifications, particularly of ethnic minorities and individuals from low-income backgrounds. Moreover, AI's application in predictive policing raises further questions: because such systems may be trained on data from over-policed neighborhoods, they risk entrenching those biases and directing even more police resources to areas that are already heavily policed.
New Zealand currently lacks a specific legal framework for the use of AI, including its application by the police. Although the country has signed the Australia New Zealand Police Artificial Intelligence Principles and the Algorithm Charter for Aotearoa New Zealand, these are voluntary codes, leaving significant gaps in legal accountability and police oversight. The police have also not fulfilled one of the charter's fundamental requirements: establishing a point of inquiry for individuals concerned about the use of AI. Consequently, people in New Zealand must rely on the police to self-monitor their own use of AI technology.
As AI becomes increasingly prevalent across government agencies, New Zealand should follow Europe's lead and implement regulations to ensure that police use of AI does not create more problems than it solves. The lack of transparency, accountability, and legal framework surrounding AI in New Zealand raises concerns about privacy, bias, and the potential for misuse. It is therefore crucial to address these issues and establish clear guidelines and regulations to guide the responsible and ethical use of AI technology by the police.