Artificial Intelligence (AI), including machine learning and other AI-based tools, can be an effective way to sort large amounts of data and make uniform decisions. Some employers have embraced such tools as an efficient means of meeting increased hiring needs in the current job market. The use of AI as an aid to employers in making employment decisions (e.g., recruitment, resume screening, or promotions) has been on the radar of lawmakers and regulators in recent years, particularly out of concern that these tools may mask or entrench existing discriminatory hiring practices or create new ones. For example, some workers have filed charges with the Equal Employment Opportunity Commission (EEOC) based on alleged discrimination resulting from employers’ use of AI tools, leading the EEOC to establish an internal working group in October 2021 to study the use of AI in employment decisions. Elsewhere, a bill addressing the discriminatory use of AI was proposed in Washington, DC in late 2021, and Illinois enacted one of the first U.S. laws directly regulating the use of AI in employment-related video interviews in 2019. In contrast, a bill proposed in California in 2020 suggested that AI could be used in employment to help prevent bias and discrimination.

On November 10, 2021, the New York City Council passed the latest such bill, which places new restrictions on New York City employers’ use of AI and other automated tools in making hiring and promotion decisions. The measure, which takes effect on January 1, 2023, regulates the use of “automated employment decision tools” (AEDTs), which it defines as computational processes “derived from machine learning, statistical modeling, data analytics, or artificial intelligence” that issue a “simplified output” to “substantially assist or replace” decision-making on employment decisions (i.e., hiring new candidates or promoting employees). Under the new law, employers and employment agencies are barred from using an AEDT to screen candidates unless certain prerequisites are met. First, the AEDT must have undergone a bias audit within the year preceding its use. Second, a summary of the results of the most recent audit, along with the distribution date of the AEDT, must be made publicly available on the employer’s or employment agency’s website. The law describes this “bias audit” as “an impartial evaluation by an independent auditor,” which “shall include, but not be limited to,” an assessment of the AEDT’s “disparate impact on persons” based on race, ethnicity, and sex.

The law further provides that any employer (or employment agency) that uses an AEDT to screen candidates or employees residing in New York City must (a) notify such individuals, at least ten business days before the tool is used, that an AEDT will be used in connection with their evaluation, (b) allow the individual to request an alternative selection process or accommodation, and (c) disclose, also at least ten business days in advance, the job qualifications and characteristics that the tool will use in its assessment. Additionally, the employer must make available information about the types of data collected by the AEDT, the sources of such data, and the entity’s data retention policy, either on its website or directly to a candidate or employee within thirty days of a written request.

The law restricts enforcement to prosecution by the New York City Law Department, excluding a private right of action by any candidates or employees. It sets penalties of up to $500 for a first violation and $500 to $1,500 for each subsequent violation, specifically noting that each day in which the tool is used in violation of the law, as well as any failure to provide notice of the types set forth above, constitutes a separate violation.

At a high level, while the new law imposes meaningful and potentially burdensome requirements on New York City employers that use AI and other automated tools in their hiring and promotion decisions, it leaves substantial ambiguity about how to comply. For example, beyond the limited description of the bias audit noted above, the law provides no further guidance on what an audit should incorporate or who is qualified to conduct one. Nor does the law specify a test for measuring “disparate impact,” though auditors and regulators may look to the framework the EEOC applies in Title VII jurisprudence, under which a facially neutral practice (here, the algorithms or data sets an AEDT uses) has a disparate impact if it disproportionately screens out members of a protected group, conventionally measured by comparing selection rates across groups. Similarly, although the law specifically excludes standard office and IT tools like junk email filters, firewalls, calculators, and spreadsheets, it fails to fully explicate the types of tools it is intended to cover. In sum, the law’s requirements are not fully fleshed out and may, as a result, prove relatively easy to meet, a concern raised by commentators and stakeholders.
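By way of illustration only: one plausible (though not legally mandated) way an auditor might quantify disparate impact is the selection-rate comparison in the EEOC’s Uniform Guidelines, under which a group’s selection rate below four-fifths (80%) of the highest group’s rate is generally regarded as evidence of adverse impact. The sketch below applies that test to invented screening counts; the NYC law itself prescribes no particular test, and the function names here are hypothetical.

```python
# Hypothetical sketch: the four-fifths (80%) rule from the EEOC's Uniform
# Guidelines, one metric a bias auditor *might* apply. The NYC law does not
# prescribe this test, and the counts below are invented for illustration.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its selection rate (candidates advanced / total screened)."""
    return {group: advanced / total for group, (advanced, total) in outcomes.items()}

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate.

    Ratios below 0.80 are conventionally treated as evidence of adverse impact.
    """
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Invented counts: (candidates the AEDT advanced, total candidates screened)
example = {"Group A": (48, 100), "Group B": (30, 100)}
for group, ratio in impact_ratios(example).items():
    flag = "below 0.80 threshold" if ratio < 0.80 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

In this invented example, Group B’s impact ratio (0.30 / 0.48 ≈ 0.63) falls below the 0.80 threshold, which is the kind of result a bias audit summary would presumably need to disclose.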

The law does, however, make a few things clear. First, the new requirements apply only to decisions on hiring and promotions; other employment-related decisions, such as compensation or scheduling, are not covered. Second, the law is concerned only with disparate impact as to race, ethnicity, and sex, so the bias audits are not required to examine other protected characteristics such as disability or age. Finally, the penalties involved could be substantial. Although the law does not explicitly say whether multiple applications processed in the same day count as separate violations, penalties may well amount to hundreds of thousands of dollars or more if AEDTs are used on multiple applications over a period of days.
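To make the potential exposure concrete, here is a back-of-the-envelope sketch. The per-violation amounts come from the law; the counts are invented, and the assumption that each day of noncompliant use and each missed notice accrues separately tracks the law’s text as described above.

```python
# Hypothetical penalty accrual under the law's schedule: $500 for a first
# violation and up to $1,500 for each subsequent violation. Assumes, per the
# law's text, that each day of noncompliant AEDT use and each failure to give
# required notice is a separate violation; the counts below are invented.

FIRST_VIOLATION = 500
SUBSEQUENT_MAX = 1_500

def max_exposure(violations: int) -> int:
    """Upper-bound penalty for a given number of separate violations."""
    if violations <= 0:
        return 0
    return FIRST_VIOLATION + (violations - 1) * SUBSEQUENT_MAX

days_of_noncompliant_use = 90   # hypothetical: one quarter of daily use
missed_notices = 200            # hypothetical: 200 candidates never notified
total = days_of_noncompliant_use + missed_notices
print(f"{total} violations -> up to ${max_exposure(total):,}")  # up to $434,000
```

Under those invented assumptions, maximum exposure already exceeds $400,000, consistent with the hundreds-of-thousands-of-dollars scale noted above.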

Even given these ambiguities, employers and employment agencies can follow certain best practices to position themselves for compliance, including the following:

  • Use caution when deploying tools that may be covered: the law targets tools meant to replace or largely subsume the human role in discretionary employment decisions, but it otherwise provides no specific guidance on coverage; given the law’s expansive approach, any automated, AI-driven processing could be within scope.
  • Document the datasets to which AEDTs have access (especially for sophisticated machine learning/AI tools that may incorporate new, unanticipated datasets over time as the tools develop through iterative processes) and any criteria/benchmarks/scoring systems an AEDT uses, as requests for information on the AEDTs may be made any time a promotion or job opportunity is posted.
  • Minimize AEDTs’ access to information from which the tools could infer a candidate’s or employee’s race, ethnicity, or sex (a simple technical control along these lines is sketched after this list).
  • Maintain alternative evaluation methods (including review by a human being), so that they can be deployed quickly in the event of a request for such alternatives by a candidate or employee.
  • Arrange for bias audits by an independent auditor in advance of (a) any tool’s implementation in an employment decision (given the ten-business-day notice requirement) and (b) the annual audit deadline, particularly so that any correctives can be implemented if bias is found.
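On the data-minimization bullet above, a minimal technical control, sketched below under assumed field names, is to strip attributes that state (or could proxy for) race, ethnicity, or sex before candidate records ever reach the screening tool. Everything here, including the downstream `screen_candidate` call, is hypothetical.

```python
# Hypothetical pre-processing control: remove fields that state, or could
# serve as proxies for, race, ethnicity, or sex before a record reaches the
# AEDT. Field names and screen_candidate() are invented for illustration.

EXCLUDED_FIELDS = {
    "race", "ethnicity", "sex",          # the protected characteristics themselves
    "name", "photo_url", "birthplace",   # common potential proxies
}

def minimize(record: dict) -> dict:
    """Return a copy of a candidate record with excluded fields removed."""
    return {key: value for key, value in record.items() if key not in EXCLUDED_FIELDS}

candidate = {
    "name": "A. Example",
    "years_experience": 7,
    "certifications": ["PMP"],
    "photo_url": "https://example.com/a.jpg",
}
print(minimize(candidate))
# {'years_experience': 7, 'certifications': ['PMP']}
# screen_candidate(minimize(candidate))  # hypothetical AEDT entry point
```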

With these measures in place, AI-driven tools have the potential to benefit employers facing the difficult task of sorting large amounts of data, while also helping to avoid introducing bias and discrimination into the workplace.