EU regulation of artificial intelligence in healthcare

November 2023  |  SPECIAL REPORT: HEALTHCARE & LIFE SCIENCES SECTOR

Financier Worldwide Magazine

November 2023 Issue


Artificial intelligence (AI) has increasingly been introduced in numerous areas, including healthcare. It has revolutionised healthcare not only through AI-supported medical devices (e.g., AI-powered imaging systems for medical diagnostics such as AI algorithms to analyse X-rays, CT scans and MRIs or patient-monitoring wearables) but also in drug discovery, clinical trials (e.g., transformation or analysis of clinical data and subject selection), manufacturing (e.g., process design and scale up, in-process quality control, batch release), post-authorisation studies and pharmacovigilance (e.g., adverse event report management, signal detection).

The increasing use of AI requires adequate regulation to protect against or mitigate the risks it carries. This, however, is challenging in healthcare, one of the most heavily regulated sectors in the European Union (EU): the regulation of AI will either have to be reconciled with the existing sets of rules or risk impeding the development of AI-related health products.

Currently, many EU initiatives are ongoing that will impact the use of AI in healthcare, such as the forthcoming EU Artificial Intelligence Act (AI Act), the European Medicines Agency’s (EMA’s) reflection paper for the use of AI in the medicinal products lifecycle, the revision of the EU general pharmaceutical legislation, the EU Coordinated Plan on Artificial Intelligence and the EU Cybersecurity Strategy.

Artificial Intelligence Act

In April 2021, the European Commission (EC) released a legislative proposal for an EU AI Act, a first-of-its-kind regulation intended to establish harmonised AI rules in the EU. The legislative proposal is in the last phase of the EU ordinary legislative procedure: the three EU institutions (the EC, the European Parliament and the Council of Ministers) have started negotiations to agree on the final text.

Considering the heated debates on a few issues (e.g., certain definitions, criteria for high risk classification and fundamental rights impact assessments), long negotiations and significant changes to the current draft are expected. The goal is to adopt the final text before the EU elections in May 2024, but a later adoption cannot be excluded. Following the adoption of the AI Act, a two- or three-year transition period will apply; hence, non-compliant products will not be excluded from the EU market before mid-2026 at the earliest.

The AI Act defines mandatory requirements applicable to the design and development of all AI systems. AI systems are classified into four risk categories – unacceptable risk, high risk, limited risk and minimal risk – and regulated according to the risk they carry. High risk AI systems will have to bear a CE marking to indicate their conformity with the AI Act before they can move freely within the EU market. Limited risk AI systems will be subject to transparency requirements, minimal risk AI systems will generally remain unregulated, and unacceptable risk AI systems will generally be prohibited. The AI Act also regulates market surveillance and proposes the creation of a new European AI agency.

The future legislation will apply to all sectors and thus to medicinal products and medical devices. The AI Act should not trigger more issues for the pharmaceutical sector than for any other sector, as the use of AI in the lifecycle of medicinal products is already regulated by other, more specific legal frameworks. It could, however, have a significant impact on manufacturers of certain medical devices.

Indeed, the AI Act will apply in addition to other applicable EU legislation, i.e., the Medical Devices Regulation (MDR) for medical devices and the In-Vitro Diagnostics Regulation (IVDR) for in-vitro diagnostic medical devices. Moreover, AI medical devices that require a notified body conformity assessment are considered high risk AI systems under the AI Act. As a result, software medical devices and medical devices that include an AI system as a component (collectively, software-related devices) will be subject to two sets of rules: the MDR/IVDR and the AI Act.

The concomitant application of two sets of rules is not per se an issue, unless those rules either duplicate manufacturers’ tasks and obligations or conflict with one another, and no conflict mechanism determines which set of rules prevails – which is the case under the current legislative proposal. The issue is aggravated by the fact that the ‘approval’ system set up by the AI Act for high risk AI systems is similar to the ‘approval’ system set up by the MDR for software-related devices.

Hopefully, EU legislators will resolve the conflicting overlaps between the MDR/IVDR and the AI Act or add a conflict mechanism to the AI Act during the last round of negotiations. Otherwise, the new AI Act will impede the development and marketing of software-related devices in the EU.

EMA reflection paper on AI and medicinal products

On 13 July 2023, the EMA published a draft reflection paper on the use of AI in the lifecycle of medicines (Reflection Paper). A public consultation is open until 31 December 2023.

The Reflection Paper maps the key issues associated with the use of AI in the medicinal product lifecycle, and stresses the new risks to patient safety and data integrity that arise when massive datasets are processed by trained systems. The level of risk may differ depending on several elements, such as the type of AI technology used, the context of use, or the degree of influence exercised by the technology. This warrants a detailed risk management analysis by the marketing authorisation applicant or holder, which remains responsible for AI compliance – “algorithms, models, datasets, and data processing pipelines used [must be] fit for purpose and in line with ethical, technical, scientific, and regulatory standards as described in GxP standards and current EMA scientific guidelines”.

The Reflection Paper also outlines where AI can be involved through the lifecycle of a medicinal product and thereby accepts that it can be present at every stage, from drug discovery and clinical trials to product information and post-authorisation phase. As for the technical aspects of the use of AI, the Reflection Paper focuses on the method of data acquisition, use of data, and algorithm model.

One of the main challenges with the use of AI models is the risk of incorporating human bias. A balanced training dataset should therefore be created, giving consideration to elements such as oversampling rare populations and taking all relevant bases of discrimination into account. In any event, the data sources and data acquisition process should be documented and traceable.

The Reflection Paper also addresses the issues of governance, data privacy, data integrity and ethical aspects of AI used in healthcare, and recommends using closely monitored transparent models. For instance, if personal data are used for model training, the risk of potential extraction should be determined and the risk of reidentification should be mitigated. It is the applicant’s duty to ensure that personal data and AI models are stored and processed in accordance with EU data privacy legislation in order to ensure lawfulness, fairness, transparency, purpose limitation, accuracy and accountability.

Other EU legislation

The legislative proposals for the revision of the EU general pharmaceutical legislation make some direct or indirect references to the use of AI. The most important references concern the use of AI and real-world evidence for: (i) regulatory decision making on the development, authorisation and supervision of medicinal products; (ii) pharmacovigilance and post market monitoring (as mentioned in the Reflection Paper, incremental learning can continuously enhance models for classification and reporting of issues arising with medicinal products); and (iii) the introduction of the so-called ‘regulatory sandbox’ as an alternative experimental regulatory pathway for innovative technologies, products, services or approaches.

Manufacturers using AI must also take other relevant EU legislation and guidelines into consideration, such as the EU Clinical Trials Regulation and guidelines on clinical trials, when AI is used for the selection of clinical trial participants or the analysis of clinical trial data. For instance, and as indicated in the EMA’s Reflection Paper, “when AI models are used for transformation or analysis of data within a clinical trial of a medicinal product, they are considered a part of the statistical analysis and should follow applicable guidelines on statistical principles for clinical trials and include analysis of the impact on downstream statistical inference”.

Also, during late-stage pivotal clinical trials, risk of overfitting and data leakage must be mitigated by testing the AI model with prospectively generated data acquired in a setting or population representative of the intended context of use. At this stage, incremental learning approaches are not accepted, and regulators must be consulted in case of any AI model modification.

The EU General Data Protection Regulation (GDPR) applies as well. It regulates the processing of personal data in the EU and imposes stricter requirements when ‘special’ categories of personal data such as health data are processed.

Conclusion

The rapidly growing use of AI has already revolutionised the healthcare sector, providing huge opportunities for life sciences industries and great promise for patients. The life sciences sector, however, is highly regulated, and AI should be regulated taking into account the legal and regulatory principles and best practices already applicable to medicinal products and medical devices. Otherwise, the regulation of AI will deter rather than facilitate the use of AI in healthcare, to the detriment of EU patients.

 

Geneviève Michaux is a partner and Georgios Symeonidis is an associate at King & Spalding. Ms Michaux can be contacted on +32 (2) 898 0202 or by email: gmichaux@kslaw.com. Mr Symeonidis can be contacted on +32 (2) 898 0215 or by email: gsymeonidis@kslaw.com.

© Financier Worldwide

