California Supreme Court Broadens Definition of Employer, Impacting AI Use in Hiring Decisions
In a recent ruling, the California Supreme Court expanded the definition of "employer" under the state's Fair Employment and Housing Act (FEHA), a key anti-discrimination statute. This expansion not only increases the number of defendants that can be held liable in FEHA actions but also has implications for the regulation of artificial intelligence (AI) in employment decisions.
The case in question, Raines v. U.S. Healthworks Medical Group, addressed whether a business entity acting as an agent of an employer can be directly liable for employment discrimination under the FEHA. The court answered affirmatively: such an agent can be considered an employer and held accountable for discriminatory practices if it has at least five employees and carries out FEHA-regulated activities on behalf of the employer. This ruling significantly expands the range of parties that may share liability in FEHA-related claims.
The California Supreme Court based its decision on the language of the FEHA, which defines employer to include any person acting as an agent of an employer. The court also considered the legislative history of the statute and looked to federal case law to support its interpretation. Importantly, the court distinguished this case from previous rulings that did not extend personal liability to supervisors for discrimination or retaliation claims.
While the court’s decision has immediate implications for discrimination claims, it also has broader ramifications for California’s efforts to regulate the use of AI in employment decisions. Businesses that provide AI-driven services for recruiting, screening, hiring, compensation, and other personnel management decisions may now be subject to joint and several liability across the AI tool supply chain.
The Fair Employment & Housing Council has proposed regulations addressing the use of AI, machine learning, and data-driven statistical processes in employment decision-making. These regulations make it unlawful for employers to use selection criteria, including automated decision systems, that disproportionately screen out applicants or employees based on protected characteristics, unless the criteria are job-related and consistent with business necessity. The regulations define agent broadly to include third-party providers of AI services related to personnel processes and redefine employment agency to cover these entities as well. Importantly, liability can be extended to those involved in the design, development, advertisement, sale, provision, and use of automated decision systems.
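To make the "disproportionately screen out" concept concrete, the sketch below shows how a disparity check on an automated screening tool might be run. It uses the four-fifths (80%) rule from federal EEOC guidance purely as an illustrative threshold; California's proposed regulations do not prescribe a specific numeric test, and the group names and counts here are invented for the example.

```python
# Illustrative disparate-impact check for an automated screening tool.
# The four-fifths (80%) rule is drawn from federal EEOC guidance and is
# used only as an example threshold, not a requirement of the proposed
# California regulations.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who passed the screen."""
    return selected / applicants

def impact_ratio(group_rate: float, highest_rate: float) -> float:
    """Ratio of a group's selection rate to the most-selected group's rate."""
    return group_rate / highest_rate

# Hypothetical screening outcomes by group (counts are invented).
outcomes = {
    "group_a": {"applicants": 200, "selected": 120},
    "group_b": {"applicants": 180, "selected": 63},
}

rates = {g: selection_rate(o["selected"], o["applicants"])
         for g, o in outcomes.items()}
top = max(rates.values())

for group, rate in rates.items():
    ratio = impact_ratio(rate, top)
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, impact_ratio={ratio:.2f} -> {flag}")
```

A flagged ratio is not itself proof of unlawful discrimination; under the proposed regulations, criteria that screen out protected groups may still be defensible if they are job-related and consistent with business necessity.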
The California Supreme Court’s decision in Raines supports the Council’s proposed revisions and strengthens joint and several liability across AI tool supply chains. It aligns with efforts to regulate AI use in employment decisions and ensures that all parties involved in providing AI services can be held accountable for potential discrimination.
This ruling serves as a reminder of the evolving legal landscape surrounding AI and employment practices. Businesses and AI service providers must be diligent in ensuring compliance with anti-discrimination laws and regulations to avoid potential legal repercussions. As AI continues to play a larger role in the hiring process, it becomes crucial for employers to evaluate their algorithms and screening criteria to mitigate biases and maintain a fair and inclusive hiring process.
Overall, the California Supreme Court’s decision expands the definition of employer under the FEHA, increasing the number of parties that can be held liable for employment discrimination. It also reinforces California’s efforts to regulate the use of AI in employment decisions and strengthens joint and several liability across AI tool supply chains. Businesses operating in California must be aware of these developments and ensure they comply with the evolving legal framework surrounding AI and discrimination in the workplace.