The past year has seen unprecedented growth in artificial intelligence (“AI”) tools, propelled in large part by the rapid deployment of generative AI (“GenAI”).  The health care and life sciences industries have increasingly adopted AI and GenAI tools to promote innovation, efficiency and precision in the delivery of treatment and care, as well as in the production of biologics and medical devices.  For example, AI tools may predict and analyze diagnostic test results and develop personalized treatments more accurately than traditional tools; may improve clinical trial design, eligibility screening and data analysis; may serve as diagnostic tools in clinical trials designed to assess the safety or efficacy of a medical device; and may accelerate the drug development timeline.  While such uses raise inherent concerns regarding, among other things, the improper use or disclosure of personal information, the introduction or perpetuation of bias and discrimination, and data security, reliability, transparency and accuracy, there is currently no developed federal or cohesive state regulatory framework designed to minimize these risks.  

In the absence of federal regulation governing the use of AI tools generally, and within the health care and life sciences industries specifically, health care industry stakeholders have tried to inform the regulatory landscape by issuing a patchwork of influential yet non-binding guidance.  

  • In January 2023, the National Institute of Standards and Technology (“NIST”) published its AI Risk Management Framework (“RMF”) pursuant to the National AI Initiative Act of 2020.  The RMF offers guidelines and best practices for managing AI-related risks to ensure ethical and transparent systems and is intended to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services and systems. 
  • In April 2023, the Coalition for Health AI published the Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare (“Blueprint”), which outlines several key elements of trustworthy use of AI tools in health care.  The Blueprint also identifies next steps to facilitate the development and use of such tools, with a particular emphasis on health system preparedness and assessment, trustworthiness and transparency throughout an AI tool’s lifecycle and integrated data infrastructure to support the discovery, evaluation and assurance of AI-enabled health care tools.
  • In May 2023, Health AI Partnership released a collection of curated guides for health care professionals using AI.  The guides set forth a legal risk framework that (1) identifies relevant laws and regulations; (2) assesses areas of potential legal risk; (3) creates an action plan for managing and mitigating risk; (4) determines responsibility for risk and mitigation; and (5) feeds the results into a cross-functional team.
  • In July 2023, the Health Sector Cybersecurity Coordination Center (“HC3”) published a brief on AI that describes the threat that AI-powered tools pose to the health care sector and sets forth mitigation efforts health care entities should consider to better ensure security strategies address the threats posed by AI.
  • In October 2023, President Biden issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“EO”), which establishes new standards for AI safety and security.  As relevant to the health care and life sciences industries, the EO calls for the advancement of the responsible use of AI technologies in health care and the development of affordable and life-saving drugs. More specifically, the EO directs the U.S. Department of Health and Human Services (“HHS”), in consultation with relevant agencies, to (1) create an “HHS AI Task Force” to develop a strategic plan on the responsible distribution and use of AI; (2) develop a strategy to determine whether AI-enabled technologies are sufficiently high quality, including for research and discovery, drug and device safety, health care delivery and financing, and public health; (3) consider actions to promote understanding and compliance with federal non-discrimination and privacy laws as they relate to AI; (4) establish an “AI safety program” for capturing data on issues related to AI deployed in health care settings, including those caused by bias or discrimination, and to develop recommendations, best practices or other informal guidelines for appropriate stakeholders based on assessment of such data; and (5) develop a strategy for regulating the use of AI or AI-enabled tools in drug development.  The EO signals further federal oversight, guidance and possible regulation of the use of AI tools in health care and life sciences in 2024 and beyond.   

While federal regulation of the use of AI tools in health care and life sciences remains elusive and evolving, some states have begun to implement their own regulatory frameworks.  In 2023, at least 11 states – California, Georgia, Illinois, Maine, Massachusetts, Nevada, New Jersey, North Dakota, Pennsylvania, Rhode Island and Texas – introduced legislation regulating the use of AI tools in health care; two of those measures have been enacted, while the others either remain pending or have failed.  The state legislation has generally been concerned with how the use of AI tools may affect independent medical judgment, promote bias and discrimination (via automated decision-making technologies), and impact the provision of mental health treatment (to prevent the dehumanization of health care).  The increased state legislative attention to the use of AI in health care and life sciences, coupled with the federal government’s initiatives set forth in the EO, makes this topic well positioned for further state regulation in 2024.

Although 2023 saw tremendous innovation, we are still in the early stages of AI-driven health care and life sciences.  Given the large and growing share of the AI market held by the health care and life sciences industries, there is a pressing need for concrete federal, and further state, regulation to ensure that the use of AI in these industries not only continues to assist with the delivery of health care and the creation of medical products, but also does so in a way that protects individual privacy, minimizes bias and discrimination, and maintains data integrity.