In 1950, reflecting on the future of machine intelligence, Alan Turing observed: “We can only see a short distance ahead, but we can see plenty there that needs to be done.” With several large language models, most notably OpenAI’s GPT-4.5, passing the Turing Test in 2025, some governments have taken steps towards stricter regulation this year, with others still working to determine what “needs to be done” for AI regulation in the year ahead.

Most notably, this year saw key provisions of the EU AI Act—the world’s first comprehensive AI-dedicated law—take effect. Going into 2026, however, rather than a “Brussels effect” in AI regulation, the global approach appears to be leaning towards that of the UK and U.S., which have led the charge for a looser regulatory environment in recent years.

Continue Reading On the Eighth Day of Data… AI Regulation – A 2025 Recap and a Look Ahead to 2026

As 2025 draws to a close and some organizations slip into a quieter holiday rhythm, their AI systems continue humming in the background—summarizing customer inquiries, triaging security alerts, generating code, and synchronizing records across critical systems. Within that uninterrupted activity, however, lies a less festive truth: agentic AI introduces cyber risks of unprecedented complexity and novelty, beyond what conventional architectures were designed to manage.

Agentic AI—the class of systems that can reason, plan, act, and adapt toward goals with reduced human oversight—promises measurable gains across legal services, finance, healthcare, and supply chain operations. But the same autonomy that drives new efficiencies also creates a distinctly complex cybersecurity risk profile. By initiating actions, calling tools, exchanging data with other agents, and escalating privileges to meet objectives, autonomous systems expand the attack surface and introduce “digital insiders” that can err at scale, leak data silently, and even be co-opted by threat actors. For those advising on governance, cyber preparedness, and emerging-tech strategy, the takeaway is clear: companies need a practical, defensible program tailored to agentic environments—one that reduces the likelihood and blast radius of failures before a single misaligned step turns out all the lights.

Continue Reading On the Fourth Day of Data… All is Calm, All is Bright? Securing Agentic AI Before the Lights Go Out

The EU Digital Omnibus Proposal (“Omnibus”), published on 19 November, sets out a two-part package of simplifications to the EU’s data protection rulebook. Pitched as a means to reduce regulatory friction and foster innovation, the initiative reflects the EU’s ambition to reap the benefits of the digital revolution.

Following the Draghi report’s warning that the EU was trailing behind the US and Chinese markets due to overregulation, the EU has course-corrected its approach to digital regulation, overhauling its flagship data legislation to strengthen its position in the global market. The Omnibus thus forms part of the Commission’s wider promise to reduce administrative burdens by at least 25% for all businesses—and at least 35% for small and medium-sized enterprises (“SMEs”)—by 2029.

Continue Reading On the Third Day of Data… This Omnibus Is on a Diversion: Highlights of the EU’s Digital Omnibus Proposal

Following several unsuccessful attempts to secure federal preemption of state artificial intelligence regulations through Congress, President Trump turned to executive action, signing a sweeping executive order last Thursday night entitled “Ensuring a National Policy Framework for Artificial Intelligence.” The Executive Order directs federal agencies to challenge state laws regulating AI, with the stated

As firms face rising data volumes, competitive pressure, and regulatory scrutiny, asset managers are increasingly turning to tools driven by artificial intelligence for everything from investment research and portfolio construction to risk modeling and operational efficiency.

In a recent whitepaper, Ropes & Gray partners Melissa Bender, Amy Jane Longo, Fran Faircloth, Megan

The Artificial Intelligence and Machine Learning (“AI/ML”) risk environment is in flux. One reason is that regulators are shifting from AI safety to AI innovation approaches, as a recent DataPhiles post examined. Another is that the privacy and cybersecurity risks such technologies pose, which this post refers to as adversarial machine learning (“AML”) risk, differ from those posed by pre-AI/ML technologies, especially considering advances in agentic AI. That newness means that courts, legislatures, and regulators are unlikely to have experience with such risk, creating the type of unknown unknowns that keep compliance departments up at night.

This post addresses that uncertainty by examining illustrative adversarial machine learning attacks from the National Institute of Standards and Technology AML taxonomy and explaining why known attacks create novel legal risk. It further explains why existing technical solutions need to be supplemented by legal risk reduction strategies. Such strategies include asking targeted questions in diligence contexts, negotiating risk-shifting contractual provisions, and ensuring that AI policies address AML. Each can help organizations clarify and reduce the legal uncertainty AML threats create.

Continue Reading Adversarial Machine Learning in Focus: Novel Risks, Straightforward Legal Approaches

The Trump Administration’s recent AI pronouncements decry “ideological bias or engineered social agendas” as antithetical to continued American AI leadership. Executive Order 14179, repealing prior Biden Administration Executive Order 14110 on AI safety, reflects that theme, as does Vice President Vance’s speech at the February 11 Paris AI summit. “We feel very strongly,” Vance remarked, “that AI must remain free from ideological bias.” The Trump Administration’s view appears to be that overzealous regulation, likely including nondiscrimination, safety, and transparency regulation, puts American AI development at a disadvantage. The release of DeepSeek undoubtedly reinforces such concerns. As White House Press Secretary Karoline Leavitt put it, “[DeepSeek] is a wake-up call to the American AI industry.”

Continue Reading Trump’s New AI Executive Order: Navigating the Conflicting Poles of AI Regulation

While students are about to embark on their holiday break, there is no such luck for educational technology (“EdTech”) providers. Privacy, cybersecurity, and artificial intelligence compliance obligations have proliferated over the past year, with no signs of slowing down. While it is hard to keep track of the numerous regulations and proposals at the state and federal levels, below I have highlighted a few issues for EdTech providers to monitor in the coming year.

Continue Reading No Holiday Break for EdTech Compliance