Artificial Intelligence (AI), including machine learning and other AI-based tools, can be an effective way to sort large amounts of data and make uniform decisions. Some employers have embraced such tools as an efficient means of addressing increased hiring needs in the current job market. The use of AI as an aid to employers in making employment decisions—e.g., recruitment, resume screening, or promotions—has been on the radar of lawmakers and regulators in recent years, particularly out of concern that these tools may mask or entrench existing discriminatory hiring practices or create new ones. For example, some workers have filed charges with the Equal Employment Opportunity Commission (EEOC) based on alleged discrimination resulting from employers’ use of AI tools, leading the EEOC to establish an internal working group in October 2021 to study the use of AI in employment decisions. Elsewhere, a bill addressing the discriminatory use of AI was proposed in Washington, DC in late 2021, and Illinois enacted one of the first U.S. laws directly regulating the use of AI in employment-related video interviews in 2019. In contrast, a bill proposed in California in 2020 suggested that AI could be used in employment to help prevent bias and discrimination.

On November 10, 2021, the New York City Council passed the latest such bill, which places new restrictions on New York City employers’ use of AI and other automated tools in making decisions on hiring and promotions. The measure—which takes effect on January 2, 2023—regulates the use of “automated employment decision tools” (AEDTs), which it defines as computational processes “derived from machine learning, statistical modeling, data analytics, or artificial intelligence” that issue a “simplified output” to “substantially assist or replace” decision-making on employment decisions (i.e., hiring new candidates or promoting employees). Under the new law, employers and employment agencies are barred from using AEDTs to screen candidates unless certain prerequisites are met. First, the AEDT must have undergone a bias audit within the past year. Second, a summary of the results of the most recent audit, as well as the distribution date of the AEDT, must be made publicly available on the employer’s or employment agency’s website. The law describes this “bias audit” as “an impartial evaluation by an independent auditor” which “shall include, but not be limited to” assessing the AEDT’s “disparate impact on persons” based on race, ethnicity, and sex.
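The law does not prescribe how a bias audit must measure “disparate impact,” but one common approach auditors use is the selection-rate comparison behind the EEOC’s “four-fifths rule” heuristic. The sketch below is purely illustrative of that approach, not the statute’s required methodology; all group names and figures are hypothetical.

```python
# Illustrative disparate-impact check of the kind an AEDT bias audit
# might include. The NYC law does not mandate this calculation; the
# selection-rate / impact-ratio approach follows the EEOC's
# "four-fifths rule" heuristic. All data below are hypothetical.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a group that the tool advanced."""
    return selected / total

def impact_ratios(group_stats: dict) -> dict:
    """Ratio of each group's selection rate to the highest group's rate.

    group_stats maps group name -> (selected, total). A ratio below 0.8
    is a common flag for potential disparate impact.
    """
    rates = {g: selection_rate(s, t) for g, (s, t) in group_stats.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical audit data: candidates screened by an AEDT, by group.
stats = {"group_a": (48, 120), "group_b": (30, 100)}
ratios = impact_ratios(stats)                      # group_a: 1.0, group_b: 0.75
flagged = [g for g, r in ratios.items() if r < 0.8]  # ["group_b"]
```

An actual audit summary published under the law would need far more than this ratio—e.g., the audit date and the independent auditor’s methodology—but the calculation shows why per-group selection data is central to the exercise.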

Continue Reading NYC Law Aims To Reduce Bias Introduced by AI in Employment Decisions

Private funds that are excluded from the definition of “investment company” under sections 3(c)(1) or 3(c)(7) of the Investment Company Act of 1940 (“ICA”) will face significantly stricter cybersecurity requirements under the FTC’s revised Safeguards Rule, which comes into full effect as of December 9, 2022. The updated Safeguards Rule breaks new ground for the FTC by requiring specific security controls and accountability measures for consumer information, expressly modeled on the New York Department of Financial Services’ (“NY DFS”) cybersecurity rule. For private fund entities covered by the Safeguards Rule, these changes will require prompt review, since many of the newly required controls will take time to implement. Among other things, the Safeguards Rule will now require multifactor authentication (or compensating controls) for any individual accessing information systems that store customer information, encryption of all customer information both in transit and at rest (again with the option of alternative compensating controls), and updates to record retention procedures for customer information.

Continue Reading

As 2021 comes to a close, so does our 12 Days of Data series, but we will see you on the other side in 2022 with more posts on the top privacy and data protection issues. 2021 was an interesting year. While vaccinations spread and some sense of normalcy started to return, new strains of COVID-19 led to additional waves of shutdowns that stalled many of the debates. In 2022, we anticipate that the move toward a new normal will continue, and we will once again start to see traction on some of these data, privacy, and cybersecurity issues. As a preview, here are some of the key areas where we expect to see potential developments in 2022.

Continue Reading Closing out the 12 Days of Data: What to Expect in 2022

The onset of the COVID-19 pandemic in 2020 shuttered daycare centers, shifted schools to virtual settings, and fueled the rapid growth of children’s applications and educational technology (“ed-tech”) to facilitate the shelter-in-place childcare and remote learning paradigms. The federal Children’s Online Privacy Protection Act (COPPA) and Family Educational Rights and Privacy Act (FERPA), as well as numerous state laws, protect children’s and students’ privacy when using these platforms. In 2021, increased scrutiny of the data collection practices of these platforms followed their rapid deployment, as new variants led to renewed restrictions on in-person education and childcare. That scrutiny is likely to continue in the new year, as the use of such platforms persists even as the pandemic subsides. In this post, we survey the developments during 2021 and assess the future of child and student privacy in 2022.

Continue Reading Trends in Child and Student Privacy

If 2021 is any indication, the Federal Trade Commission (FTC) shows no signs of slowing its pursuit of enforcement actions addressing a wide variety of alleged privacy and cybersecurity issues. Under the leadership of its new chair, Lina Khan, the FTC has engaged over the past year in a variety of new and expanded enforcement actions, exhibiting an increasing interest in regulating data privacy and security, as well as other consumer protection areas.

While the FTC has become the de facto regulator for entities that are not subject to other sector-specific regulations, the Commission’s assertion of authority over privacy and cybersecurity matters is limited by its statutory powers under Section 5 of the FTC Act, which prohibits “unfair or deceptive acts or practices” that injure consumers. The FTC’s expansion of that authority to cover privacy and cybersecurity matters has grown more aggressive in recent years, but it has also become the subject of close judicial review. Notably, in 2018, the Eleventh Circuit ruled, in LabMD, Inc. v. FTC, that the FTC did not have unlimited authority to dictate the details of companies’ privacy and cybersecurity protections. Earlier this year, the Supreme Court, in AMG Capital Mgmt., LLC v. FTC, held that Section 13(b) of the FTC Act does not allow the FTC to obtain monetary relief in federal court. The FTC has asked Congress to restore this remedy legislatively, describing the ruling as the loss of its “best and most efficient tool for returning money to consumers who suffered losses as a result of deceptive, unfair, or anticompetitive conduct.”

The FTC has pushed for a more expansive view of its authority for several years, and this push has only intensified over the last year. Even before the AMG decision, the FTC had been advocating for Congress to address the gap in Section 13(b), which explicitly provides only for the FTC’s ability to order injunctive relief and is silent on monetary relief. While waiting for Congress to address the issue, we expect the FTC to continue to bring enforcement actions and to order restitution and disgorgement under its Section 19 authority, which provides for these types of relief, but only after a final cease-and-desist order, which can be challenged and is subject to appellate review.

Continue Reading FTC Signals Increased Focus on Privacy and Data Misuse

As ransomware attacks continue to proliferate, organizations are facing increasingly complex practical and legal considerations. Ransomware threats can range from simple Ransomware-as-a-Service models to sophisticated attacks with network-wide impacts. In many cases, ransomware attacks involve not only encryption but also data exfiltration with accompanying regulatory and contractual notification obligations. Ransomware attacks are now so pervasive that they were deemed “a direct threat to our economy” by a Treasury Department Press Release. The resulting governmental focus on ransomware will create new and evolving regulatory challenges for organizations experiencing an attack.

Ransomware in 2021

If 2020 initiated a new era of ransomware threat due to pandemic-related shifts to remote work and the associated security risks, 2021 proved that this threat is only likely to increase in 2022, as the toxic mix that fuels it persists: host nations accommodating ransomware gangs, the widespread ability of businesses to pay ransoms under insurance policies, decreasing technical barriers to entry for attackers, and the ready availability of often-untraceable cryptocurrency. High-profile ransomware attacks in 2021 included the Colonial Pipeline attack, which interrupted gas supplies along the East Coast of the United States, and the attack on JBS Food, one of the world’s largest meat producers, which caused panic buying by some consumers. As with other cybersecurity threats, supply chains were also exploited, with the REvil ransomware gang leveraging unauthorized access to Kaseya’s IT administrator software infrastructure to push out a fake software update containing ransomware. In that instance, the FBI was able to provide some assistance by obtaining encryption keys, but victims of future attacks may not be so fortunate.

Continue Reading Ransomware Threat Continues to Explode with New Legal and Regulatory Risks

A pair of government contract-related initiatives may mark a new path for federal cybersecurity efforts.  Past federal initiatives have attempted to use the enormous leverage of federal contract spending to incentivize contractors to protect governmental data, but 2021 saw the Biden Administration launch a significant two-pronged attack on the issue through a new Executive Order and a new civil fraud initiative at the Department of Justice.

Significantly, the Biden Administration’s approach of using an Executive Order to mandate cybersecurity requirements for government contractors and their vendors will affect a large portion of the U.S. economy, without the need for congressional action.  While an Executive Order cannot dictate cybersecurity measures for private companies, the Order does require stricter software security standards for vendors and publication of enhanced National Institute of Standards and Technology (NIST) guidelines that address supply chain security. In effect, these provisions require vendors to meet the new standards before they can contract with federal agencies.

Continue Reading How FAR Can Raise the Cybersecurity Bar

In 2021, the U.S. Securities and Exchange Commission (SEC) continued to stake its claim as a lead regulator for cybersecurity. Going into 2022, we expect the SEC will continue to aggressively scrutinize and pursue enforcement actions related to cybersecurity disclosures by public companies and cybersecurity practices of SEC-regulated entities like broker-dealers and investment advisers.  Moreover, Chair Gensler has announced that the SEC is currently working on a proposal for clearer cybersecurity governance rules, including topics such as “cyber hygiene and incident reporting.”

In many cases, the alleged faults that the SEC has found in these entities’ cybersecurity disclosures and practices go beyond the requirements of any other state or federal cybersecurity regulation. By setting expectations for regulated businesses that exceed those of other regulators, the SEC may become the agency that effectively sets industry-standard guidance for cybersecurity risk through the mandates that emerge from its investigations and enforcement actions.

Continue Reading The Future of SEC Cybersecurity Enforcement

It has been eight months since the Supreme Court of the United States decided, in Facebook v. Duguid, that the federal Telephone Consumer Protection Act’s (TCPA) outdated definition of an automatic telephone dialing system (ATDS or autodialer) does not cover devices—like most modern phones—that merely store and dial numbers without using a random or sequential number generator. This decision resolved a long-standing circuit split over how to interpret the TCPA, but it has not led to the clarity that many companies desired.

While courts have started applying the narrowed ATDS definition under Duguid, companies engaged in telemarketing are not yet in the clear as many had initially thought in the immediate aftermath of Duguid. A number of trends have emerged that give new teeth to TCPA-like claims, including a spike in cases at the state level, novel legal theories, and a focus on other aspects of the TCPA. Moving into 2022, we expect a continued evolution in complaints brought under state telemarketing laws, and we might also see legislation or FCC guidance intended to update the TCPA so that it applies to modern dialing technologies.

Continue Reading The TCPA, State Analogues, and the Future of Telemarketing Litigation

As 2021 comes to a close, it is a great time to take stock of the present state of affairs with respect to U.S. privacy laws. With the relatively recent passage of comprehensive privacy laws in California, and additional countries adopting laws that closely follow the principles of the EU’s General Data Protection Regulation (GDPR), along with increasing public concern over how companies manage customers’ personal data, legal practitioners entered 2021 with high hopes that comprehensive federal privacy legislation might finally be on the horizon. Nevertheless, in a trend that is likely to continue in the year ahead, it was the states, rather than Congress, that successfully added to the ranks of privacy laws with which businesses will soon need to comply.

Continue Reading Momentum Builds for State Privacy Laws but the Possibility of a Federal Law Remains Remote