
The Artificial Intelligence and Machine Learning (“AI/ML”) risk environment is in flux. One reason is that regulators are shifting from AI safety to AI innovation approaches, as a recent DataPhiles post examined. Another is that the privacy and cybersecurity risks such technologies pose, which this post refers to as adversarial machine learning (“AML”) risk, differ from those posed by pre-AI/ML technologies, especially given advances in agentic AI. Because these risks are new, courts, legislatures, and regulators are unlikely to have experience with them, creating the type of unknown unknowns that keep compliance departments up at night.
This post addresses that uncertainty by examining illustrative adversarial machine learning attacks from the National Institute of Standards and Technology AML taxonomy and explaining why known attacks create novel legal risk. It further explains why existing technical solutions need to be supplemented by legal risk reduction strategies. Such strategies include asking targeted questions in diligence contexts, including risk-shifting provisions in contracts, and ensuring that AI policies address AML. Each can help organizations clarify and reduce the legal uncertainty AML threats create.