Artificial intelligence-enabled technology tools can dissect large quantities of data faster than ever before, in some cases in real time. However, the increasingly widespread use of AI challenges regulators to balance the benefits of innovation against the need to protect patient safety, health, and privacy rights.
NYC Law Aims To Reduce Bias Introduced by AI in Employment Decisions
Artificial intelligence (AI), including machine learning and other AI-based tools, can be an effective way to sort large amounts of data and make uniform decisions. Some employers have embraced such tools as an efficient way to address increased hiring needs in the current job market. The use of AI as an aid to employers in making employment decisions—e.g., recruitment, resume screening, or promotions—has been on the radar of lawmakers and regulators in recent years, particularly out of concern that these tools may mask or entrench existing discriminatory hiring practices or create new ones. For example, some workers have filed charges with the Equal Employment Opportunity Commission (EEOC) based on alleged discrimination resulting from employers’ use of AI tools, leading the EEOC to establish an internal working group in October 2021 to study the use of AI in employment decisions. Elsewhere, a bill addressing the discriminatory use of AI was proposed in Washington, DC in late 2021, and Illinois enacted one of the first U.S. laws directly regulating the use of AI in employment-related video interviews in 2019. In contrast, a bill proposed in California in 2020 suggested that AI could be used in employment to help prevent bias and discrimination.
On November 10, 2021, the New York City Council passed the latest such bill, which places new restrictions on New York City employers’ use of AI and other automated tools in making hiring and promotion decisions. The measure—which takes effect on January 2, 2023—regulates the use of “automated employment decision tools” (AEDTs), which it defines as computational processes “derived from machine learning, statistical modeling, data analytics, or artificial intelligence” that issue a “simplified output” to “substantially assist or replace” decision-making on employment decisions (i.e., hiring new candidates or promoting employees). Under the new law, employers and employment agencies are barred from using AEDTs to screen candidates unless certain prerequisites are met. First, the AEDT must have been subject to a bias audit within the past year. Second, a summary of the results of the most recent audit, as well as the distribution date of the AEDT, must be made publicly available on the employer’s or employment agency’s website. The law describes this “bias audit” as “an impartial evaluation by an independent auditor” which “shall include, but not be limited to” an assessment of the AEDT’s “disparate impact on persons” based on race, ethnicity, and sex.…
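The law itself does not prescribe a particular methodology for measuring “disparate impact.” One common approach in employment analytics, drawn from the EEOC’s four-fifths rule of thumb, compares each group’s selection rate to the highest group’s selection rate. The sketch below is purely illustrative (the group names, counts, and 0.8 threshold are assumptions, not requirements of the NYC law):

```python
# Illustrative sketch only: selection-rate "impact ratios" of the kind a
# bias audit might examine. Groups and counts here are hypothetical, and
# the 0.8 cutoff reflects the EEOC's four-fifths rule of thumb, not any
# metric mandated by the NYC law.

def impact_ratios(outcomes):
    """outcomes: dict mapping group -> (selected, total).
    Returns each group's selection rate divided by the highest group's rate."""
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical screening outcomes for two demographic groups.
screened = {
    "group_a": (48, 100),  # 48% selection rate
    "group_b": (30, 100),  # 30% selection rate
}

ratios = impact_ratios(screened)
# A ratio below 0.8 may indicate adverse impact under the four-fifths rule.
flagged = [g for g, r in ratios.items() if r < 0.8]
```

In this hypothetical, group_b’s selection rate is 30%/48% ≈ 0.625 of the top group’s, which would fall below the four-fifths threshold and warrant closer review.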
Continue Reading NYC Law Aims To Reduce Bias Introduced by AI in Employment Decisions
Closing out the 12 Days of Data: What to Expect in 2022
As 2021 comes to a close, so does our 12 Days of Data series, but we will see you on the other side in 2022 with more posts on the top privacy and data protection issues. 2021 was an interesting year. While vaccinations spread and some sense of normalcy started to return, new strains of COVID-19 led to additional waves of shutdowns that stalled many of these policy debates. In 2022, we anticipate that the move toward a new normal will continue, and we will once again start to see traction on some of these data, privacy, and cybersecurity issues. As a preview, here are some of the key areas where we expect to see potential developments in 2022.
Continue Reading Closing out the 12 Days of Data: What to Expect in 2022
FTC Signals Increased Focus on Privacy and Data Misuse
If 2021 is any indication, the Federal Trade Commission (FTC) shows no signs of slowing its pursuit of enforcement actions addressing a wide variety of alleged privacy and cybersecurity issues. Under its new chair, Lina Khan, the FTC has engaged over the past year in a variety of new and expanded enforcement actions, exhibiting an increasing interest in regulating data privacy and security, as well as other consumer protection areas.
While the FTC has become the de facto regulator for entities that are not subject to other sector-specific regulations, the Commission’s assertion of authority over privacy and cybersecurity matters is limited by its statutory powers under Section 5 of the FTC Act, which prohibits “unfair or deceptive acts or practices” that injure consumers. The FTC’s expansion of that authority to cover privacy and cybersecurity matters has only grown more aggressive in recent years but has also become the subject of close judicial review. Notably, in 2018, the Eleventh Circuit ruled, in LabMD, Inc. v. FTC, that the FTC did not have unlimited authority to dictate the details of companies’ privacy and cybersecurity protections. Earlier this year, the Supreme Court, in AMG Capital Mgmt., LLC v. FTC, held that Section 13(b) of the FTC Act does not allow the FTC to obtain monetary relief in federal court. The FTC has asked Congress to restore this authority, claiming that the ruling deprives it of its “best and most efficient tool for returning money to consumers who suffered losses as a result of deceptive, unfair, or anticompetitive conduct.”
The FTC has pushed for a more expansive view of its authority for several years, and this push has only intensified over the last year. Even before the AMG decision, the FTC had been advocating for Congress to address the gap in Section 13(b), which explicitly provides only for the FTC’s ability to order injunctive relief and is silent on monetary relief. While waiting on Congress to address the issue, we expect the FTC to continue to bring enforcement actions and to order restitution and disgorgement under its Section 19 authority, which provides for these types of relief, but only after a final cease-and-desist order, which can be challenged and is subject to appellate review.…
Continue Reading FTC Signals Increased Focus on Privacy and Data Misuse
EU Proposals May Limit the Use of Artificial Intelligence
The European Commission (EC) may be set to propose extensive new legislation – potentially later this week – which, among other things, would ban the use of facial recognition technology for surveillance purposes and the use of algorithms that influence human behavior, according to recently leaked draft documents. The proposals would also introduce new rules regarding high-risk artificial intelligence (AI).
Although the use of AI systems is regarded as beneficial in many areas of society, use of AI in some contexts can be controversial. For example, the use of algorithms in the context of employment-related decision-making, allegedly based solely on automated personal data processing, including profiling, has recently been challenged under the GDPR in the Dutch courts, although this decision is likely to be contested.
Continue Reading EU Proposals May Limit the Use of Artificial Intelligence