The European Federation of Pharmaceutical Industries and Associations (EFPIA) believes in the potential of applying artificial intelligence (AI) to deliver benefits for patients, life sciences companies, and society. AI will play an increasingly critical role in the research, development, and manufacturing of medicinal products, enabling the discovery, development, and delivery of new, safer, and more effective treatments to patients faster than ever.
It is critical that the regulatory frameworks governing the use of AI in research, development, and manufacturing be fit-for-purpose, risk-based, non-duplicative, globally aligned, and adequately tailored. This will ensure that the rules enable, rather than hinder, the development of safe and effective treatments that reach patients faster and more efficiently.
Drug development may be facilitated by a range of methodologies and drug development tools, including those incorporating AI. When used solely for the purpose of medicinal R&D, such AI-enabled tools are exempt from the requirements of the EU AI Act. Even if this exemption did not apply, the majority of these tools would not be considered high risk under the AI Act and would therefore not be subject to CE marking.
The development and authorisation of safe and effective medicines is already governed by a well-established regulatory framework, which includes laws, guidance documents, and other policies.
The EFPIA supports the European Medicines Regulatory Network’s (EMRN) considered approach to AI, which builds on existing methods, good research practices, and requirements applied to other drug development tools (such as traditional statistical methods and approaches including model-informed drug development). The EFPIA looks forward to collaborating with the EMRN on upcoming guidance for the use of AI in the development of medicines.
The EFPIA believes the following five considerations are critical for the use and governance of AI across the drug development lifecycle:
1. The EU AI Act exemption for AI dedicated to scientific research
The EU AI Act supports innovation and the freedom of science, and should not undermine research and development activity. This is why AI systems and models specifically developed and put into service for the sole purpose of scientific research and development are excluded from its scope (as described in Recital 25 and Articles 2.6 and 2.8). EFPIA considers that this exemption applies to AI-based drug development tools used in the research and development of medicines, because these tools are used solely for the R&D of medicinal products.
2. The majority of AI uses in the development of medicines cannot qualify as high-risk AI under the current EU AI Act
If the exemption were not to apply, it is important to note that the majority of AI uses in medicines research and development involve AI-enabled software that is neither regulated under any of the legal frameworks listed in Annex I (including those for medical devices) nor listed among the high-risk uses in Annex III. Such uses therefore cannot legally qualify as high risk under the AI Act.
3. Medicines development is already a highly regulated space in Europe
Medicines development in Europe is a highly regulated space, which ensures the development and approval of safe and effective medicines, including many that employ innovative technologies. We believe that these existing EU legal frameworks, together with other regulatory frameworks and policies for medicines, set standards that ensure a high level of public health protection. They facilitate access to and use of medicines while encouraging innovation, and they are sufficiently flexible to provide the right foundation for AI uses in the development of medicines.
4. Upcoming EMA guidance on the use of AI in the medicines development lifecycle will provide a new layer of AI oversight to complement the existing regulatory and legislative landscape for medicines
The EFPIA welcomes the European Medicines Agency’s (EMA) proactive, risk-based approach to assessing the use of AI in medicines through its consultation on a reflection paper on AI and its multi-year AI workplan. This includes plans to draft guidance on the use of AI in the medicines lifecycle in 2024. We believe that this upcoming AI guidance, which factors in the potential risks associated with the use of AI, will, in conjunction with established, well-functioning legislative and regulatory frameworks for medicines, ensure appropriate governance of the use of AI in the development of medicines.
5. The ultimate goal for governance of AI should be fit-for-purpose, risk-based guidance for oversight which is calibrated to the regulatory status and context of use
Traditional policy instruments, such as legislation and guidance, may struggle to keep pace with rapid advances in highly innovative technologies such as AI. For this reason, the pharmaceutical industry needs dynamic, flexible, and future-proof guidance that takes into account the specifics of intended uses and context and includes appropriate human oversight. Distinctions must be made based on the role the AI plays, the stage of development in which it is used, its impact on the benefit-risk evaluation of a medicine or associated regulatory decision-making, and the level of human oversight and control over decision-making processes.
The AI policy landscape in Europe is evolving, including through the finalisation of the EU AI Act and the work of the EMRN, and is adapting to the increasing use of AI-based drug development tools by developing guidance and provisions for oversight. EFPIA members look forward to continuing to work with the European Commission, the EMA, the broader EMRN, patient groups, and other stakeholders in the healthcare space to unlock the potential of AI while ensuring its adherence to fundamental rights, safety, and ethical principles.