
At a meeting of the California Privacy Protection Agency (“CPPA”) on June 8, we learned additional information about the initial batch of proposed regulations (“Proposed Regulations”) to the California Privacy Rights Act (“CPRA”) that were published on May 27. The Proposed Regulations keep much of the pre-existing California Consumer Privacy Act (“CCPA”) regulations but modify and add some key provisions. Because the CPRA was drafted as an amendment to the CCPA, the Proposed Regulations reference the CCPA (as amended by the CPRA). The Proposed Regulations focus on data subject rights, contractual requirements, and obligations related to disclosures, notices, and consents. Additional proposals will cover cybersecurity audits, privacy risk assessments, and automated decision making, among other areas. While we expect significant changes as the Proposed Regulations proceed through the formal rulemaking process, which the CPPA has not yet officially started, we provide our key takeaways below:

Continue Reading Recent Activity from the California Privacy Protection Agency

In a unanimous decision issued on February 3, 2022, the Illinois Supreme Court held in McDonald v. Symphony Bronzeville Park that the Illinois Workers’ Compensation Act (“WCA”) did not bar claims under the Illinois Biometric Information Privacy Act (“BIPA”). In doing so, the court eliminated one significant defense commonly raised in such cases, since many BIPA class actions are brought in the employment context (many of which were stayed pending the decision in McDonald). Critically, though, the decision does not preclude other potential defenses, including claims of federal preemption.

BIPA is one of the most actively litigated privacy statutes in the United States. Among other things, it requires that businesses obtain consent prior to collecting biometric information (fingerprints, facial geometry information, iris scans, and the like), issue a publicly available data retention policy, and refrain from certain data sales and disclosures. Because BIPA provides for a private right of action along with statutory damages of $1,000 to $5,000 per violation, it has proved fertile ground for the plaintiffs’ bar.

Continue Reading Illinois Supreme Court Finds Illinois Biometric Information Privacy Act Not Preempted By State Workers’ Compensation Law

Artificial Intelligence (AI), including machine learning and other AI-based tools, can be an effective way to sort large amounts of data and make uniform decisions. Some employers have embraced such tools as an efficient way to address increased hiring needs in the current job market. The use of AI as an aid to employers in making employment decisions—e.g., recruitment, resume screening, or promotions—has been on the radar of lawmakers and regulators in recent years, particularly out of concern that these tools may mask or entrench existing discriminatory hiring practices or create new ones. For example, some workers have filed charges with the Equal Employment Opportunity Commission (EEOC) based on alleged discrimination resulting from employers’ use of AI tools, leading the EEOC to establish an internal working group in October 2021 to study the use of AI for employment decisions. Elsewhere, a bill addressing the discriminatory use of AI was proposed in Washington, DC in late 2021, and Illinois enacted one of the first U.S. laws directly regulating the use of AI in employment-related video interviews in 2019. In contrast, a bill proposed in California in 2020 suggested that AI could be used in employment to help prevent bias and discrimination.

On November 10, 2021, the New York City Council passed the latest such bill, which places new restrictions on New York City employers’ use of AI and other automated tools in making decisions on hiring and promotions. The measure—which takes effect on January 1, 2023—regulates the use of “automated employment decision tools” (AEDTs), which it defines as computational processes “derived from machine learning, statistical modeling, data analytics, or artificial intelligence” that issue a “simplified output” to “substantially assist or replace” decision-making on employment decisions (i.e., hiring new candidates or promoting employees). Under the new law, employers and employment agencies are barred from using AEDTs to screen candidates unless certain prerequisites are met. First, the AEDT must have been the subject of a bias audit conducted within the past year. Second, a summary of the results of the most recent audit, as well as the distribution date of the AEDT, must be made publicly available on the employer’s or employment agency’s website. The law describes this “bias audit” as “an impartial evaluation by an independent auditor” which “shall include, but not be limited to” assessing the AEDT’s “disparate impact on persons” based on race, ethnicity, and sex.

Continue Reading NYC Law Aims To Reduce Bias Introduced by AI in Employment Decisions

It has been eight months since the Supreme Court of the United States decided, in Facebook v. Duguid, that the federal Telephone Consumer Protection Act’s (TCPA) outdated definition of an automated telephone dialing system (ATDS or autodialer) did not cover devices—like most modern phones—that merely store and dial numbers without using a random or sequential number generator. This decision resolved a long-standing circuit split over how to interpret the TCPA, but it has not led to the clarity that many companies desired.

While courts have started applying the narrowed ATDS definition under Duguid, companies engaged in telemarketing are not yet in the clear as many had initially thought in the immediate aftermath of Duguid. A number of trends have emerged that give new teeth to TCPA-like claims, including a spike in cases at the state level, novel legal theories, and a focus on other aspects of the TCPA. Moving into 2022, we expect a continued evolution in complaints brought under state telemarketing laws, and we might also see legislation or FCC guidance intended to update the TCPA so that it applies to modern dialing technologies.

Continue Reading The TCPA, State Analogues, and the Future of Telemarketing Litigation

Recognizing the persistent and increasingly sophisticated nature of cyber incidents threatening the safety and security of the U.S., the Biden administration is launching a new bureau focused on cybersecurity and digital policy. On October 27, 2021, Secretary of State Antony Blinken formally announced a plan to establish a Bureau of Cyberspace and Digital Policy, which includes appointing a special envoy to address critical and emerging technologies. The new bureau and special envoy will address issues such as cyber threats, digital freedom, and surveillance risks, and will coordinate with the U.S.’s allies to establish international standards on emerging technologies.

Continue Reading State Department Makes Cybersecurity a Priority

Modern smartphones, wearables, and internet-enabled devices are capable of monitoring heart rate, blood oxygen levels, steps taken, prescription adherence, and other vital health-related activities. Contrary to popular belief, HIPAA does not cover many of these applications and devices. On September 15, 2021, the Federal Trade Commission issued a Policy Statement asserting authority to police that gap. The Policy Statement explains the FTC’s view that the Health Breach Notification Rule applies to mobile health applications. This Policy Statement signals increasing FTC scrutiny designed to safeguard sensitive health data on a variety of modern technologies that consumers use to monitor and improve their health.

Continue Reading FTC Warns Health Apps and Connected Device Companies to Comply With Health Breach Notification Rule