Artificial Intelligence

The FCC has issued a declaratory ruling invoking the Telephone Consumer Protection Act (TCPA) to outlaw robocalls that use AI-generated voices. The Commission’s unanimous decision was spurred by public fallout from a doctored audio message, purporting to be President Biden, that urged voters in New Hampshire not to vote in the state’s Democratic primary last month. The announcement makes clear that the potential for malicious actors to use AI to deceive voters and subvert democratic processes is top of mind for the government this election year. This is not the first time that the TCPA has been used to protect the public from election interference, but rather than go after individual actors for individual instances of interference as it has in the past, this decision creates a much wider blanket ban on AI-generated voices in robocalls, which will cover election-related AI-generated calls among others. Continue Reading 2024 Is Set To Be Democracy and Deepfakes’ Biggest Year. Is U.S. Legislation …Ready For It?

Megan Baca moderated Ropes & Gray’s annual “From the Boardroom” panel – held in San Francisco during the 2024 J.P. Morgan Healthcare Conference – which this year looked at the role of artificial intelligence and big data in the context of dealmaking. It can feel hard to escape AI at the moment, with some debate as to whether AI is currently over-hyped or in fact at a transformational tipping point. Continue Reading Dealmaking with AI and Big Data – Charting the new frontier in life sciences

In a Law360 article, IP transactions and technology partner Regina Sam Penti, IP transactions counsel Georgina Jones Suzuki and IP transactions associate Derek Mubiru analyzed the recent trend of artificial intelligence (AI) providers offering indemnity shields and urged businesses to exercise caution in relying on these indemnities.

In response to a number of

In a Law360 article co-authored by data, privacy & cybersecurity partner Fran Faircloth and associate May Yang, the team reflects on 2023’s global AI highlights, noting that “2023 stands out as a landmark year for artificial intelligence and for generative AI in particular.”

“The launch of OpenAI’s ChatGPT in late 2022 marked a turning point, igniting a global race among tech companies and investors to harness and evolve this burgeoning technology,” said Fran and May. This development brings a myriad of legal implications, touching on intellectual property challenges, data privacy and cybersecurity risks, and ethical considerations in AI deployment. Continue Reading Reviewing 2023’s Global AI Landscape Across Practice Areas

2023 was the year of artificial intelligence — and 2024 is already shaping up to be more (much more) of the same.  The European Union’s legislative bodies passed the AI Act earlier this month, and although the text of the world’s first comprehensive AI law has yet to be finalised, the hype around it already feels unstoppable.  That hype will turn into hard work over the next 12 months, as organisations grapple with understanding their obligations under the Act and putting in place a governance framework that meets those obligations.  Needless to say, it will not be an easy task. Continue Reading The Three European Union Laws That Need Your Attention in 2024

The past year has seen unprecedented growth and development of artificial intelligence (“AI”) tools, which have been significantly propelled by the rapid deployment of generative AI (“GenAI”) tools.  The health care and life sciences industries have increasingly sought to use AI and GenAI tools to promote innovation, efficiency and precision in the delivery of treatment and care, as well as in the production of biologics and medical devices.  For example, AI tools may predict and analyze diagnostic test results and develop personalized treatments more accurately than traditional tools; may improve clinical trial design, eligibility screening and data analysis; may be used as a diagnostic tool in a clinical trial designed to assess the safety or efficacy of a medical device; and may be used to accelerate the drug development timeline.  While such uses raise inherent concerns regarding, among other things, the improper use and/or disclosure of personal information, the introduction and/or perpetuation of bias and discrimination, as well as data security, reliability, transparency and accuracy, there is currently no developed federal or cohesive state regulatory framework designed to minimize such risks.  Continue Reading The 2023 AI Boom Calls for Further Regulation of the Use of AI Tools in the Health Care and Life Sciences Industries

Earlier this year, the UK government released an AI white paper outlining its light-touch, pro-business approach to AI regulation. Eight months on, the UK appears to be sticking firmly to this approach, with Jonathan Camrose (the UK’s first Minister for AI and Intellectual Property) stating in a speech on 16 November 2023 that there will be no UK law on AI ‘in the short term’.

This stance has been taken in spite of developments around the world in this area. The EU, by contrast, continues to make significant steps towards finalization and implementation of its landmark AI Act, with policy-makers announcing on 8 December 2023 that they had reached a final agreement on the Act. Progress has also been made across the pond, with President Biden issuing the executive order on Safe, Secure and Trustworthy Artificial Intelligence on 30 October 2023 with the intention of cementing the US as a world leader in the field. The UK’s reluctance to regulate in this area has been criticised by some as failing to address consumer concerns – but will this approach continue into 2024? Continue Reading AI Regulation in 2024 – Will The UK Continue to Remain The Outlier?

On October 30, 2023, President Biden issued an executive order (“EO”) on the safe, secure, and trustworthy development and deployment of artificial intelligence (“AI”) that has the potential to set far-reaching standards governing the use and development of AI across industries. Although the EO does not directly regulate private industry, apart from certain large-scale models

On this episode of the R&G Tech Studio, mergers & acquisitions partner Sarah Young sits down with data, privacy & cybersecurity partner Fran Faircloth to discuss how she advises clients on all aspects of corporate strategy, and whether she thinks artificial intelligence and machine learning will impact her clients in the months and years to come.

The UK Information Commissioner’s Office (ICO) was reportedly set to sound a note of caution recently, at Politico’s Global Tech Day, regarding the potential privacy risks that can arise in the context of generative artificial intelligence (AI).

Privacy risks of generative AI

While acknowledging the potentially significant advantages and benefits that generative AI can bring, both to organisations and to society more generally, the ICO’s Executive Director of Regulatory Risk, Stephen Almond, was expected to reiterate to businesses the need to consider the potential data protection issues around generative AI, noting that such technologies’ compliance with applicable data protection laws needs to be robustly scrutinised. Continue Reading UK Information Commissioner Warns of Privacy Risks Around Generative AI