An increasingly aggressive plaintiffs’ bar has brought putative class action suits based on the nearly ubiquitous use of tracking technologies for website analytics. Although any actual harm to the plaintiffs is difficult to articulate, the health care industry has been plagued by a series of these cases. Now plaintiffs may be moving on to financial services, where statutory penalties of hundreds of dollars per user are possible when a duty of confidentiality can be credibly implicated.

Tracking tags, pixels and similar website analytics technologies are nothing new. The technologies at issue in these complaints are widely used on websites and mobile applications across industries, including by government entities, to collect information about user behaviors and interactions with the online platform where they are embedded. That information is then sent to a third party for analytics used to enhance the user experience on the platform. Many of these technologies are integral to an organization’s ability to ensure its websites and applications are functioning properly, among other things by providing crash reports when users encounter issues. Additionally, many consumer-facing businesses contract with third parties to provide session replay scripts, software that monitors and records web-user activity such as keystrokes, clicks, and scrolling. Despite the pervasiveness of these technologies, plaintiffs have seized on ambiguities in the California state wiretap act, known as the California Invasion of Privacy Act, as well as federal wiretap law, as the basis for exceptionally large damage demands.
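To make concrete what these complaints target, the sketch below shows, in highly simplified form, how a session replay script might buffer user interactions and ship them to a third-party analytics vendor. The class, event shape, and endpoint are all hypothetical illustrations, not the code of any actual vendor; a real script would attach DOM listeners and transmit via `navigator.sendBeacon` or a POST request.

```typescript
// Hypothetical sketch of a session replay recorder's core loop:
// buffer interaction events, then flush them in batches to a
// third-party analytics endpoint. All names here are illustrative.

type ReplayEvent = {
  kind: "click" | "keydown" | "scroll";
  target: string;    // e.g., a CSS selector for the element involved
  timestamp: number; // ms since the session started
};

class SessionRecorder {
  private buffer: ReplayEvent[] = [];

  constructor(private endpoint: string, private batchSize = 3) {}

  // In a browser, DOM listeners (document.addEventListener("click", ...))
  // would call this for each user interaction.
  record(event: ReplayEvent): void {
    this.buffer.push(event);
    if (this.buffer.length >= this.batchSize) this.flush();
  }

  // A real script would POST the batch to the vendor; this sketch
  // just drains the buffer and returns it.
  flush(): ReplayEvent[] {
    const batch = this.buffer;
    this.buffer = [];
    console.log(`sending ${batch.length} events to ${this.endpoint}`);
    return batch;
  }
}

// Synthetic usage: the third event triggers an automatic flush.
const recorder = new SessionRecorder("https://analytics.example.com/replay");
recorder.record({ kind: "click", target: "#signup", timestamp: 120 });
recorder.record({ kind: "keydown", target: "input[name=email]", timestamp: 450 });
recorder.record({ kind: "scroll", target: "window", timestamp: 900 });
```

The legally salient point is visible in the data shape itself: keystrokes, clicks, and scroll events tied to page elements are captured and routed to a third party, which is the "interception" theory plaintiffs advance under wiretap statutes.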

Continue Reading Pixel Litigation Risk at Financial Institutions

On this episode of the R&G Tech Studio podcast, Ropes & Gray partners and co-leaders of the firm’s AI initiative, Megan Baca and Ed McNicholas, delve into the key implications of President Trump’s new AI Executive Order 14179, contrasting it with the previous Biden administration’s approach to AI regulation. They explore the nuances of AI innovation versus AI safety, the potential conflicts between federal and state regulations, and the global landscape of AI governance. Tune in for an insightful conversation on how companies can navigate the evolving regulatory environment while balancing innovation and compliance.
Click here to listen.

The Artificial Intelligence and Machine Learning (“AI/ML”) risk environment is in flux. One reason is that regulators are shifting from AI safety to AI innovation approaches, as a recent DataPhiles post examined. Another is that the privacy and cybersecurity risks such technologies pose, which this post refers to as adversarial machine learning (“AML”) risk, differ from those posed by pre-AI/ML technologies, especially considering advances in agentic AI. That newness means that courts, legislatures, and regulators are unlikely to have experience with such risk, creating the type of unknown unknowns that keep compliance departments up at night.

This post addresses that uncertainty by examining illustrative adversarial machine learning attacks from the National Institute of Standards and Technology AML taxonomy and explaining why known attacks create novel legal risk. It further explains why existing technical solutions need to be supplemented by legal risk reduction strategies. Such strategies include asking targeted questions in diligence contexts, negotiating risk-shifting contractual provisions, and ensuring that AI policies address AML. Each can help organizations clarify and reduce the legal uncertainty AML threats create.
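For readers unfamiliar with what an AML attack looks like in practice, the toy example below illustrates one category from the NIST taxonomy, an evasion attack: small, deliberate perturbations to an input flip a model's decision. The linear "classifier," its weights, and the inputs are entirely made up for illustration; real attacks target far more complex models, but the gradient-sign idea is the same.

```typescript
// Toy evasion attack on a made-up linear classifier.
// score > 0 means the input is "flagged" (e.g., as fraudulent).
const weights = [0.9, -0.4, 0.7];
const bias = -0.5;

const score = (x: number[]): number =>
  x.reduce((s, xi, i) => s + xi * weights[i], bias);

const flagged = (x: number[]): boolean => score(x) > 0;

// Fast-gradient-sign-style perturbation: for a linear model, the
// gradient of the score with respect to the input is just the weight
// vector, so stepping each feature by epsilon against sign(w) lowers
// the score as quickly as possible.
const evade = (x: number[], epsilon: number): number[] =>
  x.map((xi, i) => xi - epsilon * Math.sign(weights[i]));

const input = [0.8, 0.1, 0.6]; // flagged by the model
const adversarial = evade(input, 0.4);

console.log(flagged(input));       // true: the original input is caught
console.log(flagged(adversarial)); // false: a small perturbation evades detection
```

The legal novelty follows from this mechanism: the attacker never breaches a system in the traditional sense, yet causes the model to misbehave, a fact pattern that existing computer-misuse and liability frameworks were not drafted with in mind.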

Continue Reading Adversarial Machine Learning in Focus: Novel Risks, Straightforward Legal Approaches

On March 7, 2025, the Department of Homeland Security (“DHS,” “the agency”) disbanded the Critical Infrastructure Partnership Advisory Council (“CIPAC,” “the Council”), originally established in 2006 to facilitate communication between the public and private sectors on critical infrastructure issues. CIPAC’s termination comes against the backdrop of the 2015 Cybersecurity Information Sharing Act’s (“CISA 2015,” “the Act”) upcoming expiration on September 30, 2025. CIPAC and CISA 2015 have jointly provided a valuable legal and operational framework for sharing information between the public and private sectors in the U.S. for the past decade. Financial services industry stakeholders and members of Congress have expressed concern in recent months that cyber threats to the industry will increase should the current public-private information sharing framework deteriorate. These recent developments are poised to significantly impact the financial services industry’s cybersecurity landscape – absent steps by Congress and the Administration to provide continuity for the current framework.

Continue Reading CIPAC Disbandment and CISA 2015 Reauthorization: Recent Developments in the U.S. Cybersecurity Landscape

On this episode of the R&G Tech Studio podcast, Rohan Massey, a leader of Ropes & Gray’s data, privacy and cybersecurity practice, is joined by data, privacy and cybersecurity counsel Edward Machin to discuss the AI literacy measures of the EU AI Act and how companies can meet its requirements to ensure their teams are adequately AI literate. The conversation delves into the broad definition of AI systems under the EU AI Act, the importance of AI literacy for providers and deployers of AI systems, and the context-specific nature of AI literacy requirements. They also provide insights into the steps organizations should take to understand their roles under the AI Act, develop training modules, and implement policies and procedures to comply with AI literacy principles.

Click here to listen.

Ropes & Gray health care partner David Peloquin spoke with Bloomberg Law about the additional DOJ instructions regarding the Biden-era Executive Order 14117. DOJ has provided clarity surrounding the effective date for enforcement, with a promise to delay any enforcement efforts until July 8 for companies that show “good faith efforts to comply.” David noted that this guidance “allows compliance officers, privacy officers, those within companies working on this, to really get the resources they need.” To read the full article, click here, and to read the Ropes & Gray client alert detailing the additional guidance, click here.

On April 11, 2025, the Department of Justice (“DOJ”) released additional detail regarding the Final Rule implementing former President Biden’s Executive Order 14117, “Preventing Access to Americans’ Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern” (the “Final Rule”), which went into effect on April 8, 2025. The release included additional guidance, frequently asked questions, and an enforcement policy for the first 90 days. Much of the material restated language from the Final Rule, but the release did include some notable new information for organizations assessing their compliance, key points of which we summarize below.

Earlier this year, Ropes & Gray published an Alert providing an overview of the Final Rule, material changes from the DOJ’s Notice of Proposed Rulemaking (“NPRM”), and guidance on steps organizations should take to come into compliance. (Ropes & Gray also published Alerts on the NPRM and the Advance Notice of Proposed Rulemaking).

To read the full Ropes & Gray client alert, click here.

Today, the Department of Justice’s (“DOJ”) Final Rule implementing former President Biden’s Executive Order 14117, “Preventing Access to Americans’ Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern” (the “Final Rule”) took effect.

Earlier this year, Ropes & Gray published an alert providing an overview of the Final Rule, material changes from the DOJ’s Notice of Proposed Rulemaking (“NPRM”), and guidance on steps organizations should take to come into compliance. (Ropes & Gray also published alerts on the NPRM and the Advance Notice of Proposed Rulemaking).

If they haven’t already, organizations should evaluate their obligations under the Final Rule and make compliance changes accordingly.

In an International Association of Privacy Professionals (IAPP) article, health care partner David Peloquin and data, privacy and cybersecurity associate Jake Barr, along with Legend Biotech Chief Privacy Officer and Assistant General Counsel Corey Dennis, discuss the landmark rule limiting sensitive data transfers to “countries of concern.” The article reviews key aspects for health care and life sciences companies, key exemptions, and best practices to ensure compliance. To read the full IAPP article, click here.

The Trump Administration’s recent AI pronouncements decry “ideological bias or engineered social agendas” as antithetical to continued American AI leadership. Executive Order 14179, repealing prior Biden Administration Executive Order 14110 on AI safety, reflects that theme and so does Vice President Vance’s speech at the February 11 Paris AI summit. “We feel very strongly,” Vance remarked, “that AI must remain free from ideological bias.” The Trump Administration’s view appears to be that overzealous regulation, likely including nondiscrimination, safety, and transparency regulation, puts American AI development at a disadvantage. The release of DeepSeek undoubtedly reinforces such concerns. As White House Press Secretary Karoline Leavitt put it, “[DeepSeek] is a wake-up call to the American AI industry.”

Continue Reading Trump’s New AI Executive Order: Navigating the Conflicting Poles of AI Regulation