A leading company is facing a potential ban on the use of its facial recognition system after a lawsuit exposed troubling failures in the technology. The lawsuit alleges that the company failed to take reasonable steps to monitor or test the accuracy of its facial recognition system and did not address the risks of racial or gender bias associated with the technology.
According to the complaint, employees expressed frustration over the system's high rate of false-positive match alerts, particularly for enrollments originating from geographically distant stores. Despite these complaints, the company allegedly failed to take appropriate action to fix the accuracy problems.
In response to the lawsuit, a proposed order has been put forward that would ban the company from using any facial recognition or analysis system for security or surveillance purposes for a period of five years. Furthermore, the company would be required to delete all photos, videos, data, models, or algorithms derived from its facial recognition system operated between 2012 and 2020.
The proposed order encompasses the use of all automatic biometric security or surveillance systems, not just facial recognition. If the company wishes to use any such system in the future, it must implement a comprehensive monitoring program with strong technical and organizational controls. The program must address potential risks to consumers and ensure the system functions accurately; if the system's inaccuracies contribute to a risk of harm, the company must shut it down.
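To make the shutdown condition concrete, here is a minimal sketch of what an automated accuracy-monitoring check might look like for a biometric matching system. It assumes the operator logs match alerts and later records human-reviewed ground truth; all names and the threshold value are illustrative assumptions, not terms taken from the proposed order.

```python
from dataclasses import dataclass

@dataclass
class MatchAlert:
    """One alert produced by the matching system (hypothetical schema)."""
    alert_id: str
    confirmed_true_match: bool  # filled in after human review

def false_positive_rate(alerts):
    """Share of reviewed alerts that turned out not to be real matches."""
    if not alerts:
        return 0.0
    false_positives = sum(1 for a in alerts if not a.confirmed_true_match)
    return false_positives / len(alerts)

# Illustrative shutdown rule: if more than 10% of reviewed alerts are
# false positives, flag the system for suspension pending review.
# The 10% figure is an assumption for the example, not a regulatory value.
FPR_SHUTDOWN_THRESHOLD = 0.10

def should_suspend(alerts):
    """Return True when the observed error rate exceeds the threshold."""
    return false_positive_rate(alerts) > FPR_SHUTDOWN_THRESHOLD
```

A real monitoring program would also segment this rate by store, demographic group, and enrollment source, since the complaint specifically cites elevated false positives for enrollments from distant stores.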
Under the settlement, the company would also be required to provide individualized, written notice to consumers added to its system and anyone affected by actions taken based on the system’s results. Additionally, the company would need to establish a consumer complaint procedure and clearly disclose its use of automatic biometric security and surveillance to consumers, both in retail locations and online.
To ensure compliance, the company must implement a comprehensive information security program, undergo biennial assessments by a third-party assessor, and provide an annual certification of compliance from its CEO.
The proposed settlement is subject to approval by the bankruptcy court, as the company is currently in bankruptcy. If approved, however, the order would serve as a groundbreaking template for future AI testing and compliance measures.
The Federal Trade Commission (FTC), which voted 3-0 in favor of filing the complaint and proposed order, emphasized the importance of preventing harm to consumers when using AI facial recognition and other automated systems that employ biometric information. The FTC’s action reflects a commitment to addressing unfair or faulty biometric surveillance and data security practices, even when nonmonetary harm is involved.
This case may set a precedent, prompting the FTC to pursue similar actions against other companies engaged in discriminatory or invasive facial recognition practices. Advocacy groups are urging major retail chains to cease using such technology or face consequences.
Companies utilizing AI or automated biometric surveillance technology should focus on providing proper notice, thoroughly vetting vendors, and implementing rigorous testing, assessment, and monitoring procedures. By meeting the standards outlined in this proposed order, companies can minimize the risk of regulatory action and ensure the protection of consumer rights.
In conclusion, this latest development underscores the critical need for companies to prioritize the accuracy and fairness of their facial recognition and surveillance systems. The proposed order offers a roadmap for future AI compliance and highlights the potential consequences of inadequate safeguards in the emerging landscape of biometric technology.