EEOC Issues Update Relating to Artificial Intelligence

By Alexandra “Sasha” Chepov

In recent years, employers have adopted a wide variety of algorithmic decision-making tools to assist them with employment decisions involving recruitment, hiring, retention, promotion, transfer, performance monitoring, demotion, dismissal and referral. Employers have increasingly turned to these tools in an attempt to save time and effort, increase objectivity, optimize employee performance and decrease bias.

On May 18, 2023, the Equal Employment Opportunity Commission (“EEOC”) issued guidance clarifying the potential risks employers may face if an artificial intelligence tool they use results in an adverse discriminatory impact under Title VII of the Civil Rights Act of 1964 (“Title VII”). The purpose of the EEOC’s publication is to ensure that the use of new technologies complies with federal EEO law by educating employers, employees and other stakeholders about how these laws apply to the use of software and automated systems in employment decisions.

Definitions:

  • Software: Refers to information technology programs or procedures that provide instructions to a computer on how to perform a given task or function.
    • Many different types of software and applications are used in employment, including automatic resume-screening software, hiring software, chatbot software for hiring and workflow, video interviewing software, analytics software, employee monitoring software, and worker management software.
  • Algorithm: An “algorithm” is generally a set of instructions that can be followed by a computer to accomplish some end. Human resources software and applications use algorithms to allow employers to process data to evaluate, rate, and make other decisions about job applicants and employees. Software or applications that include algorithmic decision-making tools are used at various stages of employment, including hiring, performance evaluation, promotion and termination.
  • Artificial Intelligence (“AI”): Some employers and software vendors use AI when developing algorithms that help employers evaluate, rate and make other decisions about job applicants and employees. Congress has defined “AI” to mean a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”

Employers sometimes rely on software platforms that incorporate algorithmic decision-making at a number of stages throughout the employment process. For example, resume scanners may be used to prioritize applicants using certain keywords; employee monitoring software may rate employees on the basis of their keystrokes or other factors; “virtual assistants” or “chatbots” may ask candidates about their qualifications and reject those who do not meet pre-defined requirements; video interviewing software may evaluate candidates based on their facial expressions and speech patterns; and testing software may provide “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit” based on their performance on a game or on a more traditional test.

Title VII:

Title VII generally prohibits employers from using neutral tests or selection procedures that have the effect of disproportionately excluding persons based on race, color, religion, sex or national origin if the test or selection procedure is not “job related for the position in question and consistent with business necessity.” This is called “disparate impact” or “adverse impact” discrimination.

If the use of an algorithmic decision-making tool has an adverse impact on individuals of a particular race, color, religion, sex, or national origin, or on individuals with a particular combination of such characteristics, then the use of the tool will violate Title VII unless the employer can show that such use is “job related and consistent with business necessity” pursuant to Title VII.

Employers deciding whether to rely on a software vendor to develop or administer an algorithmic decision-making tool should determine whether steps have been taken to evaluate whether use of the tool causes a substantially lower selection rate for individuals with a characteristic protected by Title VII. A “selection rate” refers to the proportion of applicants or candidates who are hired, promoted or otherwise selected. A selection rate for a group of applicants or candidates is calculated by dividing the number of individuals hired, promoted or otherwise selected by the total number of candidates in the group.
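To make the arithmetic concrete, the calculation can be sketched in a few lines of Python. The applicant counts below are hypothetical and are used only to illustrate how a selection rate is computed:

```python
# Hypothetical illustration of the selection-rate calculation described
# above. All counts are invented for the example.

def selection_rate(selected: int, total: int) -> float:
    """Proportion of a group's applicants who were hired,
    promoted or otherwise selected."""
    return selected / total

# Example: 48 of 80 applicants in one group were selected.
rate_a = selection_rate(48, 80)
print(f"Selection rate: {rate_a:.0%}")  # prints "Selection rate: 60%"
```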

As a general rule of thumb, the four-fifths rule is used to determine whether the selection rate for one group is “substantially” different from the selection rate of another group. The rule states that one rate is substantially different from another if their ratio is less than four-fifths (or 80%). However, the four-fifths rule may not be appropriate in all circumstances. For example, smaller differences in selection rates may indicate adverse impact where a procedure is used to make a large number of selections, or where an employer’s actions have discouraged individuals from applying disproportionately on the basis of a Title VII-protected characteristic. In any event, the four-fifths rule may be used to draw an initial inference that the selection rates for two groups are substantially different, prompting the employer to seek additional information about the procedure or algorithm in question.
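As a sketch of how the four-fifths comparison operates (again using invented rates, and bearing in mind that the rule is only a rough initial screen, not a legal conclusion):

```python
# Hypothetical sketch of the four-fifths (80%) rule: compare the ratio of
# two groups' selection rates against the 0.8 threshold. Rates are invented.

def four_fifths_flag(rate_one: float, rate_two: float) -> bool:
    """Return True if the lower selection rate is less than four-fifths
    of the higher one, suggesting a substantial difference."""
    ratio = min(rate_one, rate_two) / max(rate_one, rate_two)
    return ratio < 0.8

# Example: one group is selected at 60%, another at 42%.
# 0.42 / 0.60 = 0.70, which falls below 0.80, so the rule flags the gap.
print(four_fifths_flag(0.60, 0.42))  # True
```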

If an employer developing a selection tool discovers that its use may result in an adverse impact on individuals with a characteristic protected by Title VII, the employer can take steps to reduce that impact, or select a different tool, in order to avoid a practice that violates Title VII. Failure to adopt a less discriminatory algorithm that was considered during the development process may give rise to liability.

ADA:

Although not discussed in the EEOC’s May 18, 2023 publication, the EEOC has previously issued technical guidance on the use of AI and discrimination in the workplace under the Americans with Disabilities Act (“ADA”). The most common ways that an employer’s use of algorithmic decision-making tools could violate the ADA are:

  • Failing to provide a “reasonable accommodation” that is necessary for a job applicant or employee to be rated fairly and accurately by the algorithm.
  • Relying on an algorithmic decision-making tool that intentionally or unintentionally “screens out” an individual with a disability, even though the individual is able to do the job with a reasonable accommodation.
  • Adopting an algorithmic decision-making tool for use with job applicants or employees that violates the ADA’s restrictions on disability-related inquiries and medical examinations.

Why does this matter? 

Where an employer administers a pre-employment test, it may be liable for any resulting Title VII or ADA discrimination even if the test was developed by an outside vendor. Similarly, an employer may be held liable for the actions of its agents, which may include entities such as software vendors, if the employer has given them authority to act on its behalf.

If the vendor states that the tool should be expected to result in a substantially lower selection rate for individuals with a characteristic protected by Title VII or the ADA, the employer should consider whether use of the tool is job related and consistent with business necessity and whether there are alternatives that would reduce the disparate impact yet still satisfy the employer’s needs. Even if the vendor’s assessment is incorrect and the tool results in either disparate impact or disparate treatment discrimination, the employer could still be liable.

For that reason, employers are encouraged to conduct ongoing self-analyses to determine whether the technologies they use could result in discrimination, whether their employment practices have disproportionately large negative effects on a basis prohibited under Title VII, or whether they treat protected groups differently.
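Purely as an illustrative sketch of what such a self-audit could look like in code (the group labels and counts are invented, and any real audit would involve counsel and validated statistical methods), the selection-rate and four-fifths checks above can be run across each group an employer tracks:

```python
# Hypothetical self-audit sketch: compute each group's selection rate and
# flag any group whose rate falls below four-fifths of the highest rate.
# Group labels and counts are invented for illustration only.

groups = {
    "Group A": {"selected": 48, "total": 80},  # 60% selection rate
    "Group B": {"selected": 21, "total": 50},  # 42% selection rate
}

rates = {name: g["selected"] / g["total"] for name, g in groups.items()}
highest = max(rates.values())

for name, rate in rates.items():
    ratio = rate / highest
    status = "flag for review" if ratio < 0.8 else "ok"
    print(f"{name}: rate {rate:.0%}, ratio to highest {ratio:.2f} -> {status}")
```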