On this episode of the R&G Tech Studio podcast, Ropes & Gray partners and co-leaders of the firm’s AI initiative, Megan Baca and Ed McNicholas, delve into the key implications of President Trump’s new AI Executive Order 14179, contrasting it with the Biden administration’s approach to AI regulation.

The Artificial Intelligence and Machine Learning (“AI/ML”) risk environment is in flux. One reason is that regulators are shifting from safety-focused to innovation-focused approaches to AI, as a recent DataPhiles post examined. Another is that the privacy and cybersecurity risks such technologies pose, which this post refers to as adversarial machine learning (“AML”) risk, differ from those posed by pre-AI/ML technologies, especially given advances in agentic AI. That newness means that courts, legislatures, and regulators are unlikely to have experience with such risk, creating the type of unknown unknowns that keep compliance departments up at night.

This post addresses that uncertainty by examining illustrative adversarial machine learning attacks from the National Institute of Standards and Technology AML taxonomy and explaining why known attacks create novel legal risk. It further explains why existing technical solutions need to be supplemented by legal risk reduction strategies, such as asking targeted questions in diligence contexts, negotiating risk-shifting contractual provisions, and ensuring that AI policies address AML. Each can help organizations clarify and reduce the legal uncertainty that AML threats create.
Continue Reading Adversarial Machine Learning in Focus: Novel Risks, Straightforward Legal Approaches

The Trump Administration’s recent AI pronouncements decry “ideological bias or engineered social agendas” as antithetical to continued American AI leadership. Executive Order 14179, repealing the prior Biden Administration Executive Order 14110 on AI safety, reflects that theme, as does Vice President Vance’s speech at the February 11 Paris AI summit. “We feel very strongly,” Vance remarked, “that AI must remain free from ideological bias.” The Trump Administration’s view appears to be that overzealous regulation, likely including nondiscrimination, safety, and transparency regulation, puts American AI development at a disadvantage. The release of DeepSeek undoubtedly reinforces such concerns. As White House Press Secretary Karoline Leavitt put it, “[DeepSeek] is a wake-up call to the American AI industry.”
Continue Reading Trump’s New AI Executive Order: Navigating the Conflicting Poles of AI Regulation

While students are about to embark on their holiday break, there is no such luck for educational technology (“EdTech”) providers. Privacy, cybersecurity, and artificial intelligence compliance obligations have proliferated over the past year, with no signs of slowing down. While it is hard to keep track of the numerous regulations and proposals at the state and federal levels, below I have highlighted a few issues for EdTech providers to monitor in the coming year.
Continue Reading No Holiday Break for EdTech Compliance

On 30 November 2022, OpenAI made its ChatGPT generative artificial intelligence chatbot publicly available. In the two years since, its unprecedented growth has fostered a dramatic shift in public attention to and interest in all forms of AI. Now, the possibilities and risks presented by the continued development of AI are also firmly top of mind for businesses and regulators across the world.
Continue Reading New Year’s Resolutions: What 2025 Holds for AI Regulation

The National Institute of Standards and Technology (NIST) has been a leading voice in cybersecurity standards since 2013, when President Obama’s Executive Order on Improving Critical Infrastructure Cybersecurity tasked NIST, which is embedded within the Department of Commerce, with developing and updating a cybersecurity framework for reducing cyber risks to critical infrastructure. The first iteration of that framework was released in 2014, and Versions 1.1 and 2.0 followed in 2018 and 2024. NIST guidance has since expanded to include a privacy framework, released in 2020, and an AI risk management framework, released in 2023. This year, NIST updated both its cybersecurity and AI risk management frameworks. It also created a holistic data governance model that aims to give entities a comprehensive approach to issues like data quality, privacy, security, and compliance, leveraging the various NIST frameworks under a unified data governance structure to help framework users address broader organizational risks. A retrospective of these developments and predictions for 2025 are detailed in this post.
Continue Reading A Very Merry NISTmas: 2024 Updates to the Cybersecurity and AI Framework

Rohan Massey and Edward Machin, partner and counsel, respectively, in Ropes & Gray’s data, privacy & cybersecurity practice, will host a webinar on The EU AI Act – The Road to Compliance. The EU AI Act entered into force on 1 August 2024 and is the first piece of comprehensive legislation to regulate artificial intelligence.

On 12 July 2024, the EU AI Act (“AI Act”) was published in the Official Journal of the European Union. The AI Act will enter into force 20 days from the date of its publication (1 August 2024), starting the clock for organisations within its scope to prepare for compliance.

The exact amount of time organisations have to comply with the relevant provisions of the AI Act will depend on the role they play under the Act, as well as the risk level and capabilities of their AI systems. For example, providers[1] of general-purpose AI systems will be required to comply with the requirements of the AI Act before providers of high-risk AI systems.
Continue Reading EU AI Act Published in the Official Journal of the European Union; Clock Starts for Compliance