Artificial Intelligence (AI)

While students are about to embark on their holiday break, there is no such luck for educational technology (“EdTech”) providers. Privacy, cybersecurity, and artificial intelligence compliance obligations have proliferated over the past year, with no signs of slowing down. While it is hard to keep track of the numerous regulations and proposals at the state and federal levels, I have highlighted below a few issues for EdTech providers to monitor in the coming year. Continue Reading No Holiday Break for EdTech Compliance

On 30 November 2022, OpenAI made its ChatGPT generative artificial intelligence chatbot publicly available. In the two years since, its unprecedented growth has fostered a dramatic shift in public attention to and interest in all forms of AI. Now, the possibilities and risks presented by the continued development of AI are also firmly top of mind for businesses and regulators across the world. Continue Reading New Year’s Resolutions: What 2025 Holds for AI Regulation

The National Institute of Standards and Technology (NIST) has been a leading voice in cybersecurity standards since 2013, when President Obama’s Executive Order on Improving Critical Infrastructure Cybersecurity tasked NIST, which is embedded within the Department of Commerce, with developing and updating a cybersecurity framework for reducing cyber risks to critical infrastructure. The first iteration of that framework was released in 2014, and Versions 1.1 and 2.0 followed in 2018 and 2024. NIST guidance has also expanded to include a privacy framework, released in 2020, and an AI risk management framework, released in 2023. This year, NIST updated both its cybersecurity and AI risk management frameworks and created a holistic data governance model that aims to give entities a comprehensive approach to issues like data quality, privacy, security, and compliance. The model leverages the various NIST frameworks under a unified data governance structure to help framework users address broader organizational risks. A retrospective of these developments and predictions for 2025 are detailed in this post. Continue Reading A Very Merry NISTmas: 2024 Updates to the Cybersecurity and AI Framework

Rohan Massey and Edward Machin, partner and counsel in Ropes & Gray’s data, privacy & cybersecurity practice, will be hosting a webinar on The EU AI Act – The Road to Compliance. The EU AI Act entered into force on 1 August 2024. The Act is the first piece of comprehensive legislation to

On 12 July 2024, the EU AI Act (“AI Act”) was published in the Official Journal of the European Union. As the AI Act will enter into force 20 days from the date of its publication (1 August 2024), this starts the clock for organisations within the scope of the AI Act to prepare for compliance.

The exact amount of time organisations have to comply with the relevant provisions of the AI Act will depend on the role they play under the AI Act, as well as the risk and capabilities of their AI systems. For example, providers[1] of general-purpose AI systems will be required to comply with the requirements of the AI Act sooner than providers of high-risk AI systems. Continue Reading EU AI Act Published in the Official Journal of the European Union; Clock Starts for Compliance

On May 21, 2024, with a vote of 25-12, the California Senate passed SB-1446, a bill that would significantly restrict grocery and retail drug stores from providing self-checkout services and adopting new technologies. The bill, introduced on February 16 by Sen. Smallwood-Cuevas, rapidly moved through the California Senate Committee process and has now been sent to the California Assembly for consideration. Retailers who provide self-checkout for their consumers or are looking to adopt new technologies should review the strict requirements in this bill and prepare to adjust their policies accordingly if the bill moves as swiftly through the California Assembly. Continue Reading California Legislature Looks to Restrict Self-Checkout Technology

On this episode of the R&G Tech Studio podcast, managing principal and global head of advanced e-discovery and AI strategy Shannon Capone Kirk sits down with data, privacy & cybersecurity partner Fran Faircloth to discuss how new and ever-evolving technology, particularly generative AI, is impacting her clients, and the challenges that arise in litigation and

The FCC has issued a declaratory ruling relying on the Telephone Consumer Protection Act (TCPA) to outlaw robocalls that use AI-generated voices. The Commission’s unanimous decision was spurred by public fallout from a doctored audio message purporting to be from President Biden that urged voters in New Hampshire not to vote in the state’s Democratic primary last month. The announcement makes clear that the potential for malicious actors to use AI to deceive voters and subvert democratic processes is top of mind for the government this election year. This is not the first time that the TCPA has been used to protect the public from election interference, but rather than go after individual actors for individual instances of election interference as it has in the past, this decision creates a much wider blanket ban on AI-generated voices in robocalls, which will cover election-related AI-generated calls among others. Continue Reading 2024 Is Set To Be Democracy and Deepfakes’ Biggest Year. Is U.S. Legislation …Ready For It?

Megan Baca moderated Ropes & Gray’s annual “From the Boardroom” panel – held in San Francisco during the 2024 J.P. Morgan Healthcare Conference – which this year looked at the role of artificial intelligence and big data in the context of dealmaking. It can feel hard to escape AI at the moment, with ongoing debate as to whether AI is currently over-hyped or in fact at a transformational tipping point. Continue Reading Dealmaking with AI and Big Data – Charting the new frontier in life sciences