June 17, 2024

TrafficMouse


California Proposes New Anti-Discrimination Rules When Artificial Intelligence Impacts Hiring

In response to growing concerns about algorithmic bias in employment practices, the California Civil Rights Council (the Council) has proposed amendments to the Fair Employment and Housing Act (FEHA) regulations to address discrimination arising from the use of automated decision systems. The proposed amendments align with broader efforts, including the White House’s Blueprint for an AI Bill of Rights and the Equal Employment Opportunity Commission’s (EEOC) guidelines on algorithmic fairness, to ensure that technological advancements do not perpetuate existing biases or create new forms of discrimination in the employment lifecycle.

Definition and Scope of AI Under the Act

The proposed amendments define an “automated decision system” as a computational process that screens, evaluates, categorizes, recommends, makes, or facilitates decisions impacting applicants or employees. This includes systems using machine learning, algorithms, statistics, or other data processing or artificial intelligence techniques. This definition is generally consistent with other AI laws, such as the Colorado Artificial Intelligence Act and New York City’s Local Law 144.

The proposed amendments cover various activities, processes, tools, and solutions performed by automated decision systems, including:

  • Using computer-based tests to make predictive assessments about an applicant or employee; measure skills, dexterity, reaction time, and other abilities or characteristics; or measure personality traits, aptitude, attitude, and cultural fit.
  • Directing job advertisements or other recruiting materials to targeted groups.
  • Screening resumes for specific terms or patterns before any human review of applicant materials.
  • Analyzing facial expressions, word choice, and/or voice in online interviews.
  • Ranking or prioritizing applicants based on their work schedule availability.

The examples provided by the Council are illustrative and non-exhaustive. They aim to ensure that all forms of automated decision systems are scrutinized for fairness and compliance with anti-discrimination standards, reflecting a robust approach to modernizing employment practices.

Who Is Affected?

The proposed amendments apply to any organization that regularly pays five or more individuals for work or services. An employer’s “agent” and “employment agencies” are also covered.

The proposed amendments define “agent” as any person acting on behalf of an employer, directly or indirectly. This includes third parties providing services related to hiring or employment decisions, such as recruiting, applicant screening, hiring, payroll, benefits administration, evaluations, decision-making about workplace leaves or accommodations, or administering automated decision systems for these purposes.

The proposed amendments revise the definition of “employment agency” to mean any person providing compensated services to identify, screen, or procure job applicants, employees, and work opportunities, including those who offer these services through automated decision systems. This clarification specifies the range of activities covered by an employment agency and acknowledges the increasingly common use of automated systems in providing these services.

Employer Impact

The proposed amendments clarify that it is unlawful for employers to use selection criteria, including automated decision systems, if such use results in adverse impacts or disparate treatment based on characteristics protected under FEHA. Employers may be liable for discrimination stemming from these systems, just as they would be for decisions made without them. However, employers can defend their use of automated decision systems by demonstrating that the criteria were job-related and necessary for the business, and that no less discriminatory alternatives were available. Additionally, evidence that employers conducted anti-bias testing or took similar measures to prevent discrimination will be considered in their defense.

Employers are accountable for the actions of their agents, while agents themselves are liable for assisting or enabling any discriminatory employment practices resulting from the use of automated decision systems.

Consideration of Criminal History in Employment Decisions

The proposed amendments revise the FEHA regulations to clarify the role of automated decision systems in the consideration of an applicant’s criminal history. In particular:

  • § 11017.1(a): The Council proposes adding that employers using automated decision systems to consider criminal history must comply with the same requirements as human-based inquiries. Specifically, employers are prohibited from inquiring into or assessing an applicant’s criminal history until after a conditional job offer has been extended. This ensures that automated systems are not used to conduct such inquiries before a conditional offer unless a specific exception applies.
  • § 11017.1(d)(2)(C): If an employer uses an automated decision system in its initial assessment and may withdraw a job offer based on criminal history, it must provide the applicant with the report or data generated by the system, along with information on the assessment criteria used.
  • § 11017.1(d)(4): Using an automated decision system alone does not qualify as an individualized assessment of an applicant’s criminal history. Employers must conduct additional human-based processes to determine if a conviction directly relates to the job.

Other Clarifications of Law and Additional Requirements

The proposed provisions aim to prohibit the use of automated decision systems that result in disparate treatment based on various protected characteristics. Specifically, these amendments seek to prevent discrimination arising from the use of such systems concerning sex, pregnancy, childbirth or related medical conditions, marital status, religious creed, disability, and age.

Notable are the Council’s proposed amendments concerning automated decision systems used to conduct medical and psychological examinations. The Council identifies that administering personality-based questions through automated decision systems, such as inquiries into optimism, emotional stability, or extroversion, can constitute prohibited medical inquiries. Similarly, using gamified screens within these systems to evaluate physical or mental abilities, such as tests requiring rapid clicking or measuring reaction time, may also run afoul of FEHA. Such examinations, if not directly related to job requirements or if they lack reasonable accommodations for individuals with disabilities, could be deemed unlawful.

Record Keeping Obligations

The proposed recordkeeping requirements extend beyond traditional documentation to encompass the data used in the training and operation of automated decision systems, as well as their outputs. By explicitly stating that these records must be retained for a minimum of four years following the last use of the automated decision system, the amendment establishes a clear and enforceable standard for recordkeeping practices.

Additionally, the amendment addresses the accountability of entities involved in the provision, sale, or use of automated decision systems on behalf of covered entities. The proposal promotes transparency and accountability of all covered entities throughout the employment process by stipulating that these parties must maintain relevant records, including but not limited to automated decision system data.

Next Steps

The Civil Rights Council urges the public to engage in the regulatory process by submitting written comments by 5:00 PM PT on July 18, 2024. Comments can be sent via email to [email protected]. A public hearing on the proposed regulations is scheduled for 10:00 AM PT on July 18, 2024. For additional information on how to contribute to the discussion and participate in the hearing, the public is encouraged to review the Council’s Notice of Proposed Rulemaking.

Parting Thoughts

The proposed amendments extend the reach of FEHA to encompass automated decision systems. By aligning these technologies with existing legal obligations, California expands the scope of protections against discrimination in employment, adapting its regulations to the digital era. Employers are encouraged to submit public comments and actively monitor the Council’s website for developments in this regulatory process.