EMA and FDA present guidelines for AI in drug development
EMA and FDA define ten principles for AI use in medicines – in response to growing pressure from the industry.
The European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) have agreed for the first time on ten common guiding principles for the use of artificial intelligence (AI) throughout the lifecycle of medicines. The principles, published in January 2026, are intended to serve as a framework for the development, approval, manufacturing, and post-authorization monitoring of medicines.
The aim of the initiative is to harness the potential of AI for faster, more efficient, and safer drug development without compromising patient safety, efficacy, or regulatory control. The principles are aimed not only at authorities but also at pharmaceutical companies, developers, and marketing authorization holders. They are also intended to form the basis for future, more detailed guidelines in their respective regulatory areas.
According to the guidelines (PDF), AI systems should be “human-centric by design,” meaning they should adhere to ethical and human-centered values. Their use should also be risk-based, with a clearly defined application context, transparent documentation, a robust data foundation, and continuous monitoring throughout the entire lifecycle. Further focal points include multidisciplinary expertise, adherence to existing standards, robust model and software development, and clear communication about the functionality, limitations, and risks of the AI used.
Cooperation between the EU and the US
Olivér Várhelyi, EU Commissioner for Health and Food Safety, described the guidelines as “a first step in a renewed EU-US cooperation in the field of novel medical technologies.” He stated that they demonstrate how regulatory cooperation can foster innovation without endangering patient safety. In the EU, the principles are already being incorporated into ongoing work, such as the further development of pharmaceutical legislation and the implementation of the European biotech and digitalization framework.
The background is the significantly increasing use of AI in drug development – for example, in the analysis of large amounts of data, the prediction of efficacy and toxicity, or the monitoring of side effects after market authorization. Used correctly, AI can shorten development times, reduce animal testing, and provide better support for regulatory decisions. At the same time, the EMA and FDA emphasize that these benefits can only be realized if risks such as data bias, lack of transparency, or model failures are systematically addressed.
The joint initiative builds on previous consultations between the two agencies, including a meeting in April 2024. It is part of a broader strategy to promote international standards for the responsible use of AI in healthcare and to further harmonize regulatory approaches. In parallel, the EMA is expanding its digital infrastructures to better connect research, approval, and patient information.
Many companies are already using AI strategically and hoping for an AI revolution that promises faster development, lower costs, and competitive advantages. From the perspective of supervisory authorities, this very dynamic increases the need for clear rules: biased or incomplete data, models that are difficult to understand, and excessive automation of decisions pose risks, especially where efficacy, side effects, and patient safety are concerned.
For an overview of approved clinical trials in the EU, the EMA has also updated its interactive “Clinical Trials Map,” which is now available in all EU official languages. The aim is to facilitate access to trial information for patients, researchers, and companies and to promote cross-border collaboration.
(mack)