Navigating the California Civil Rights Council’s Draft Regulations on AI
As technology continues to evolve at a rapid pace, it is critical that our legal and regulatory systems keep up. The California Civil Rights Council (CRC), formerly the Fair Employment and Housing Council, has recently made significant strides in this regard, proposing draft regulations aimed at addressing the potential biases and discrimination that can arise from the use of artificial intelligence (AI) and automated decision systems in the employment sector.
The draft regulations were informed by a hearing on “Algorithms & Bias,” reflecting a keen awareness of how these technologies can mask or perpetuate bias in important areas of people’s lives, such as employment. The CRC aims to update its regulations to cover newer technologies such as algorithms, referred to as “automated decision systems” (ADS). An ADS includes computational processes derived from machine learning, statistics, or other data-processing or artificial intelligence techniques that make decisions, or facilitate human decision-making, affecting applicants or employees. Examples of ADS range from algorithms that screen resumes for particular terms or patterns to online tests that measure personality traits, aptitudes, and cognitive abilities.
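To make the concept concrete, the sketch below shows a minimal, hypothetical keyword-based resume screener of the kind the draft definition contemplates. Every keyword, score, and cutoff here is an illustrative assumption rather than any actual vendor’s criteria; the point is simply that such a screen “makes or facilitates” a hiring decision on its own, and that non-job-related proxy terms can cause it to filter out otherwise qualified candidates before a human ever reviews them.

```python
# Hypothetical sketch of a keyword-based resume screener, the simplest kind of
# "automated decision system" the draft regulations describe. All terms, weights,
# and the cutoff below are illustrative assumptions, not any real vendor's criteria.

REQUIRED_TERMS = {"python", "sql"}                 # job-related skills (weight 2 each)
PREFERRED_TERMS = {"ncaa athlete", "fraternity"}   # proxy terms unrelated to the job (weight 1 each)

def score_resume(text: str) -> int:
    """Return a crude keyword score for a single resume."""
    words = text.lower()
    score = sum(2 for term in REQUIRED_TERMS if term in words)
    score += sum(1 for term in PREFERRED_TERMS if term in words)
    return score

def screen(resumes: list[str], cutoff: int = 5) -> list[str]:
    # Resumes scoring below the cutoff never reach a human reviewer, which is
    # why a screen like this "makes or facilitates" the employment decision
    # and why its data would fall under the proposed record-keeping rules.
    return [r for r in resumes if score_resume(r) >= cutoff]

if __name__ == "__main__":
    candidates = [
        "Experienced analyst: Python, SQL, ten years in reporting.",
        "Python and SQL developer, former NCAA athlete and fraternity president.",
    ]
    # Only the second candidate clears the cutoff, even though both list the
    # required skills: the non-job-related proxy terms decide the outcome.
    print(screen(candidates))
```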
Under these draft regulations, it would become unlawful for an employer to use selection criteria that screen out applicants or employees on the basis of characteristics protected by the Fair Employment and Housing Act (FEHA), unless the criteria are job-related and consistent with business necessity or the employer can show that “there is no less discriminatory policy or practice that serves the employer’s goals as effectively as the challenged policy or practice.” The draft regulations also extend the current record-keeping requirements to cover machine-learning data and increase the retention period from two to four years. Covered records would include “[a]ny personnel or other employment records created or received by any employer or other covered entity dealing with any employment practice and affecting any employment benefit or any applicant or employee (including all applications, personnel, membership or employment referral records or files and all automated-decision system data).” Importantly, these record-keeping requirements extend to third-party vendors who provide these technologies, requiring them to retain records of the assessment criteria used by the ADS for each employer or entity they serve.
California is also considering the Automated Decision Systems Accountability Act (AB 331), which would require companies to disclose their use of automated decision tools and the purposes for which they are used, require annual impact assessments to identify and mitigate potential biases in their AI systems, and allow suits against employers over the discriminatory impact of AI tools.
Though not explicitly stated, liability could plausibly extend to third parties who act on behalf of an employer by providing services related to various facets of employment, if their activities adversely affect the terms or conditions of employment. This follows from the broad interpretation of the terms “employer” and “employment” in these contexts and from the fact that third-party vendors are explicitly mentioned in the record-keeping requirements.
The proposed regulations and bills represent a significant step forward in California’s legal recognition of the risks and challenges posed by AI and ADS in employment decisions, and they signal an emerging era of accountability in the use of these technologies.