On February 26, 2024, the National Institute of Standards and Technology (“NIST”) released version 2.0 of its Cybersecurity Framework (“CSF 2.0”)—the first significant update to the cybersecurity guidance since its initial publication a decade ago.[1] While the original guidance was tailored to critical infrastructure entities, the new version has a broader scope and applies to organizations of all sizes across industries, from large corporations with robust data protection infrastructure to small schools and nonprofits that may lack cybersecurity sophistication.[2] CSF 2.0 notably incorporates new sections on corporate governance responsibilities and supply chain risks; additionally, NIST has released supplemental implementation guides and reference tools that can help organizations measure cybersecurity practices and hone data protection priorities.[3]
Continue Reading NIST Publishes Long-Awaited Cybersecurity Framework 2.0

Employee Monitoring Technologies – Key Takeaways from Recent UK and EU Enforcement Decisions
Employee monitoring isn’t new, but its extent and the ways in which it is conducted have changed significantly in recent decades; we have come a long way from the punch cards of the 1900s to the current use of video surveillance, e-comms monitoring and AI, among other monitoring tools.
Part of this comes from the usual progress of technology, the drive to make more data-centric decisions and organisational pressures to maximise efficiency. However, part of it also stems from the COVID-19 pandemic in particular, as the rise of hybrid working meant that organisations had to adopt new technologies to monitor employees who were working remotely.
Recent decisions by the CNIL and the ICO in the last few months, however, highlight the flipside of using such technologies: in each case, the use of video surveillance and biometric data (including facial recognition) for employee monitoring purposes led to enforcement action, including a €32 million fine. With the EU’s AI Act looming on the horizon, the use of AI for employee monitoring purposes may also add the obligations of the AI Act to an organisation’s already extensive list of compliance considerations.
Click here to read our article exploring the takeaways from these decisions, as well as practical considerations for organisations when conducting employee monitoring using certain technologies.
New Executive Order Would Restrict Transfer of Certain Bulk Sensitive Personal Data and United States Government-Related Data to China and Other Countries of Concern
On February 28, 2024, President Biden announced an Executive Order directing the Department of Justice to promulgate regulations that restrict or prohibit transactions involving certain bulk sensitive personal data or United States Government-related data with countries of concern or covered persons. The DOJ’s initially identified countries are China (including Hong Kong and Macau), Russia, Iran, North Korea, Cuba, and Venezuela. The restrictions would also apply to any entity owned by, controlled by, or subject to the jurisdiction or direction of a country of concern, as well as to any person “knowingly causing or directing, directly or indirectly, a violation” of the regulations.
Click here to read Ropes & Gray’s Client Alert detailing the new EO.
DoorDash and California Attorney General Reach Settlement Over Privacy Allegations
Following up on announcements of sweeps from late January, last week California Attorney General Rob Bonta announced a settlement with the popular food delivery service DoorDash related to allegations that DoorDash breached the California Consumer Privacy Act (CCPA) and the California Online Privacy Protection Act (CalOPPA). The announcement reinforces the Attorney General’s message that privacy will remain a priority for his office while the new California Privacy Protection Agency (CPPA) gets up to speed.
Continue Reading DoorDash and California Attorney General Reach Settlement Over Privacy Allegations

California Court of Appeal Restores CPPA Authority to Enforce Privacy Regulations
On February 9, 2024, a California state court of appeal unanimously vacated a lower court ruling, green-lighting the California Privacy Protection Agency’s authority to commence enforcement of the Agency’s first set of regulations. Until now, the Agency’s authority to enforce regulations it has promulgated under the California Consumer Privacy Act (“CCPA”) had been delayed. The Agency had been poised to begin enforcing its latest batch of completed privacy regulations on July 1, 2023, but a trial court’s ruling put this work on hold until March 29, 2024. That hold has now been lifted, and the Agency may commence enforcement activities with immediate effect. The decision also affects future Agency rulemaking, such as the Agency’s draft regulations on cybersecurity audits, privacy impact assessments, and automated decision-making, which will no longer be subject to the 12-month stay of enforcement.
Continue Reading California Court of Appeal Restores CPPA Authority to Enforce Privacy Regulations

2024 Is Set To Be Democracy and Deepfakes’ Biggest Year. Is U.S. Legislation …Ready For It?
The FCC has issued a declaratory ruling, invoking the protections of the Telephone Consumer Protection Act (TCPA) to outlaw robocalls that use AI-generated voices. The Commission’s unanimous decision was spurred by public fallout from a doctored audio message purporting to be President Biden urging voters in New Hampshire not to vote in the state’s Democratic primary last month. The announcement makes clear that the potential for malicious actors to use AI to deceive voters and subvert democratic processes is top of mind for the government this election year. This is not the first time that the TCPA has been used to protect the public from election interference, but rather than go after individual actors for individual instances of interference as it has in the past, the Commission has now created a much broader blanket ban on AI-generated voices in robocalls, which will cover election-related AI-generated calls among others.
Continue Reading 2024 Is Set To Be Democracy and Deepfakes’ Biggest Year. Is U.S. Legislation …Ready For It?

The Data Day: Protecting Your Company and Your Data in the Wake of a Cyber Incident
Tune in to Ropes & Gray’s podcast series, The Data Day, brought to you by the firm’s data, privacy & cybersecurity practice. This series focuses on the day-to-day effects that data has on all of our lives as well as other exciting and interesting legal and regulatory developments in the world of data, and features a range of guests, including clients, regulators and colleagues. On this special episode, in honor of World Data Privacy Day coming up on January 28, hosts Fran Faircloth, a partner in Ropes & Gray’s Washington, D.C. office, and Edward Machin, counsel in the London office, discuss the most important steps they advise clients to take to protect their business and their data from a cybersecurity attack.
States Move Forward with Automated Privacy Opt-Out Signals; Colorado Approves First Universal Opt-Out Mechanism
States have recently taken important steps toward implementing so-called “Universal Opt-Out Mechanisms” (“UOOMs”), which will provide consumers with a method for automatically exercising privacy rights. UOOMs, sometimes referred to as opt-out preference signals, are user-enabled features, typically within the user’s browser or through a browser add-on, that send a signal to each website the user visits communicating the user’s preference to opt out of certain targeted advertising (and potentially other uses of data discussed below). Several states have adopted a requirement to honor UOOMs as part of their “comprehensive” privacy laws. New Jersey, which recently enacted a comprehensive privacy law, includes a UOOM requirement that, uniquely among state legislation, would extend the right to opt out through UOOMs to include opting out of the use of automated decision-making technologies. Businesses may struggle to implement technical solutions for responding to UOOMs, particularly if the specifications for UOOMs vary between states. Businesses should work with their IT teams or website providers to ensure they have developed solutions to comply, if they have not done so already.
Continue Reading States Move Forward with Automated Privacy Opt-Out Signals; Colorado Approves First Universal Opt-Out Mechanism

Merck Insurance Settlement Leaves Debate over Cyberwar and Cyberinsurance Unsettled
Merck’s settlement last week of its $1.4 billion claim tied to the 2017 Russian-linked “NotPetya” cyberattack leaves a major question in cybersecurity and international law anything but settled – can a “cyberattack” ever be considered an “attack” under the international laws of war? The insurance dispute is hardly the first time cybersecurity has been linked to nation-state security – as far back as 2014, China’s now-President Xi Jinping declared that “without cybersecurity there is no national security” – but how did a major pharmaceutical company’s insurance claim become a potential battleground for litigating the definition of war in the 21st century?
Continue Reading Merck Insurance Settlement Leaves Debate over Cyberwar and Cyberinsurance Unsettled

Dealmaking with AI and Big Data – Charting the new frontier in life sciences
Megan Baca moderated Ropes & Gray’s annual “From the Boardroom” panel – held in San Francisco during the 2024 J.P. Morgan Healthcare Conference – which this year examined the role of artificial intelligence and big data in the context of dealmaking. AI can feel hard to escape at the moment, with ongoing debate as to whether it is currently over-hyped or in fact at a transformational tipping point.
Continue Reading Dealmaking with AI and Big Data – Charting the new frontier in life sciences