26.09.2024
7 min read

Healthcare AI Regulations in EU, UK, and US: Comparative Analysis

Artificial intelligence is driving major advancements in healthcare, unlocking innovations that could significantly improve patient care. To ensure compliance and patient safety, a clear understanding of the regulatory frameworks that govern the application of these technologies is crucial. This article provides a comparative analysis of the AI regulatory landscape in the European Union, the United Kingdom, and the United States.

AI has the potential to drive significant advancements in healthcare, but patients need protection from defective diagnoses, misuse of personal data, and biases built into algorithms. Regulating AI in healthcare is a complex challenge that requires a delicate balance between protecting patients' rights and unlocking AI’s full capabilities.

Current healthcare regulations lack the flexibility to keep pace with rapid advances in AI and machine learning (ML), and healthcare regulators around the world are reassessing and updating their frameworks accordingly.

This article delves into the regulatory frameworks governing these technologies in the European Union (EU), United Kingdom (UK), and the United States (USA).

European Union

The European Union (EU) has been at the forefront of AI regulation, with the European Commission proposing the AI Act in April 2021. The Act came into force on August 1, 2024, and will be fully effective from August 2, 2026, with specific provisions enforced earlier. The Act classifies AI applications into four risk categories: minimal, limited, high, and unacceptable. Healthcare AI systems often fall into the "high-risk" category due to their significant impact on human health and safety.

The AI Act applies to all entities that develop, use, import or distribute AI systems in the EU, regardless of where they’re based. Exemptions in healthcare are limited to AI systems used exclusively for research and scientific studies, or for individual, non-professional purposes.

The EU's AI Act covers a wide range of AI healthcare applications across the product lifecycle, such as:

  • Pre-clinical phase: AI modeling superseding animal testing
  • Clinical trials: support in patient selection
  • Regulatory submission: support in recording and analyzing data for submissions, including drafting, compiling, or reviewing data to be included in the product information
  • Post-authorization phase: support for pharmacovigilance activities, including adverse event report management and signal detection

It’s important to understand that while class I medical devices and non-medical healthcare systems (generally considered lower risk) are covered by the Act, they are not subject to its most stringent requirements. They must, however, adhere to specific standards, including:

  • Clearly indicating when a human is interacting with a machine and when content is generated by an AI system to ensure transparency.
  • Providing a documented justification for the classification of the device or system.
  • Registering the company and the AI system in an EU database.

Higher-class medical devices, which undergo more rigorous conformity assessments by notified bodies (organizations designated by EU member states to assess product compliance), must meet additional requirements under the AI Act, including:

  • Using high-quality training, validation, and testing data
  • Implementing automatic logs
  • Incorporating human oversight measures directly into the system or ensuring they are implemented by users
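
To make the last two requirements concrete, the sketch below shows one possible way a device maker might wrap a prediction function with an automatic, timestamped audit log and a human-review gate for low-confidence outputs. This is purely illustrative: the class, field names, and confidence threshold are our own assumptions, not terminology or criteria from the AI Act.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("inference-audit")


class AuditedClassifier:
    """Illustrative wrapper: automatic logging plus a human-oversight gate.

    All names and thresholds here are hypothetical, chosen only to show the
    general pattern of logging each inference and flagging uncertain cases.
    """

    def __init__(self, predict_fn, review_threshold=0.8):
        self.predict_fn = predict_fn
        self.review_threshold = review_threshold
        # In a real system this would be append-only, tamper-evident storage.
        self.audit_trail = []

    def predict(self, case_id, features):
        label, confidence = self.predict_fn(features)
        needs_review = confidence < self.review_threshold
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "case_id": case_id,
            "input": features,
            "output": label,
            "confidence": confidence,
            "flagged_for_human_review": needs_review,
        }
        self.audit_trail.append(record)          # automatic log entry
        log.info(json.dumps(record))             # emitted to the log stream
        return label, needs_review


# Toy rule-based "model" standing in for a real ML model.
def toy_model(features):
    score = features.get("risk_score", 0.0)
    return ("refer", 0.95) if score > 0.5 else ("monitor", 0.6)


clf = AuditedClassifier(toy_model, review_threshold=0.8)
label, needs_review = clf.predict("case-001", {"risk_score": 0.3})
# Low-confidence output, so the record is flagged for human review.
```

The point of the pattern is that logging and oversight are not bolt-ons: every inference produces an audit record, and the human-review flag is part of the system's output rather than a separate process.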

Providers of low-risk AI systems should consider adhering to stricter guidelines for data selection, risk management, and human oversight. This recommendation arises from the European Medicines Agency's (EMA) recently released Reflection Paper on the use of artificial intelligence in the lifecycle of medicines. The Reflection Paper does not differentiate between the risk levels of AI systems, and although it is not yet legally binding, it outlines standards that could become mandatory. Given the significant effort, time, and cost of implementing new systems, it is advisable to proactively follow the recommendations of both the Reflection Paper and the AI Act. By doing so, providers can avoid future modifications or replacements to comply with upcoming legal standards.

Some of the AI Act's requirements, such as those relating to quality management systems, technical documentation, and conformity assessments, are already covered by the Medical Devices Regulation (MDR) and the In Vitro Diagnostic Medical Devices Regulation (IVDR), which remain the default source of requirements for medical devices. These regulations demand extensive conformity assessments for higher-class devices, including a detailed review of technical documentation and robust quality assurance processes before market entry.

United Kingdom

Post-Brexit, the United Kingdom (UK) has charted its own path in AI regulation. In 2021, the UK government released the National AI Strategy, outlining its approach to AI governance. The UK emphasizes a pro-innovation regulatory framework that encourages AI development while ensuring safety and efficacy. Additionally, the Medicines and Healthcare products Regulatory Agency (MHRA) is collaborating closely with the FDA and Health Canada to establish common principles for AI in medical devices. This collaboration is evident in the recently published Transparency for Machine Learning-Enabled Medical Devices: Guiding Principles. While the UK is shaping its own regulatory framework, it will most likely share common elements with future AI regulations in the United States.

United States

The U.S. has a more decentralized approach to AI regulation, with multiple agencies playing a role. The Food and Drug Administration (FDA) is the primary body overseeing AI in healthcare, particularly for AI-driven medical devices and software as a medical device (SaMD).

The FDA is actively working to establish a regulatory framework for AI-driven software systems used in healthcare. This framework development involves collaboration among various departments, including the Center for Drug Evaluation and Research (CDER), the Center for Devices and Radiological Health (CDRH), and the Center for Biologics Evaluation and Research (CBER).

Currently, the FDA has not finalized a regulatory framework but has made significant strides in setting guidelines specific to AI in medical devices.

Key documents published by the FDA include Good Machine Learning Practice for Medical Device Development: Guiding Principles and Predetermined Change Control Plans for Machine Learning-Enabled Medical Devices: Guiding Principles.

These publications outline best practices and frameworks intended to guide the development and modification of AI-driven medical devices, focusing on human oversight and change control. The paper on Predetermined Change Control Plans, in particular, focuses on accommodating the iterative nature of machine learning, allowing for pre-approved modifications to AI algorithms under specific conditions.

The International Medical Device Regulators Forum (IMDRF)

The International Medical Device Regulators Forum (IMDRF) is a global coalition of medical device regulators working to harmonize and improve regulatory approaches for medical devices, including those enabled by artificial intelligence and machine learning. Their initial efforts in this field are documented in the publication Machine Learning-enabled Medical Devices: Key Terms and Definitions, which establishes a common vocabulary and foundational concepts for regulating AI-driven medical technologies. By providing clear definitions and guidance, the IMDRF aims to facilitate global collaboration and consistency in the regulatory landscape, ultimately enhancing the safety and effectiveness of AI technologies in healthcare. Given the trends in global regulatory harmonization, we may expect that IMDRF will continue to release additional guidance.

Comparative Analysis and Future Directions

Comparing the regulatory landscapes, it's clear that the EU, UK, and USA are all shaping their policies to address the rapid advancements in AI and software systems within healthcare. The EU presents a structured regulatory approach with comprehensive coverage through the AI Act and MDR/IVDR, while the UK is still defining its framework, aiming for flexibility and responsiveness to technological progress while seeking harmonization with US and Canadian regulations. In the United States, the FDA is actively developing a regulatory framework for AI in healthcare, working closely with various stakeholders to ensure patient safety while fostering innovation.

Timelines and business impact:

The EU’s AI Act entered into force on August 1, 2024, initiating a 24-month transitional period for businesses to achieve compliance. Following its publication, the European Commission is expected to issue supporting guidelines to assist businesses in doing so.

The status of the UK’s and US frameworks remains fluid and undefined, with ongoing developments expected to further shape the regulatory landscape. It’s important to monitor these developments closely.

Challenges for business:

Businesses must navigate a complex and evolving regulatory environment, which could involve substantial compliance costs and adjustments to product development cycles, especially when operating in multiple markets.

Adapting to new requirements such as the use of high-quality data, implementation of automatic logs, and ensuring human oversight may require significant investment. In some cases, operational change may be necessary, such as the implementation of a quality management system for businesses developing AI systems classified as high-risk.

Key Takeaways

Protecting patients while nurturing innovation is no simple task, and the healthcare sector is already one of the most heavily regulated.

The EU, UK, and USA are all shaping their policies to address the rapid advancements in AI and software systems within healthcare. However, their progress and approaches toward establishing a regulatory framework differ. The EU adopts a prescriptive and precautionary stance with comprehensive coverage through the AI Act already in force, focusing on stringent risk assessment and compliance measures. Given the pioneering nature of the AI Act as the first legislation of its kind in the European Union, it is poised to influence future regulations not only within this region but globally.

Meanwhile, the UK is in the process of defining its framework, aiming for flexibility and responsiveness to technological progress while fostering innovation. The USA is also establishing its regulatory parameters, guided by foundational principles rather than a finalized framework.

As AI continues to mature and evolve, the importance of adaptive and responsive regulatory frameworks will become increasingly critical, necessitating continuous dialogue and collaboration among regulators, industry stakeholders, and healthcare professionals to effectively integrate these technologies into clinical settings.
