The pharmaceutical industry stands at a pivotal moment in regulatory science. As artificial intelligence transforms drug discovery and development, regulatory agencies worldwide are grappling with how to harness these powerful technologies while maintaining the rigorous safety and efficacy standards that protect public health. The U.S. Food and Drug Administration (FDA) has taken a decisive step forward in this evolution.
In January 2025, the FDA issued a draft guidance titled “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products.” This landmark document is the agency’s first guidance to provide recommendations on the use of AI in drug and biological product development, signaling a new era of regulatory modernization that could reshape how medicines reach patients.
A Risk-Based Framework for AI in Regulatory Decision-Making
The FDA’s draft guidance provides a risk-based credibility assessment framework for evaluating AI models intended to support regulatory decisions regarding the safety, effectiveness, or quality of drugs and biological products. This framework emphasizes the importance of defining the context of use (COU) for AI applications and tailoring the level of scrutiny based on the potential impact of the AI model on regulatory decisions.
The guidance outlines a seven-step process for ensuring AI models are suitable for regulatory decision-making:
- Define the question of interest that the AI model will address
- Establish the context of use (COU)
- Assess the AI model’s risk level
- Develop a credibility assessment plan
- Execute the plan with necessary validation studies
- Document the results of the credibility assessment and any deviations from the plan
- Determine whether the AI model is adequate for its COU
This structured approach ensures that AI models influencing high-stakes decisions undergo rigorous validation, while those with lower impact are assessed with appropriate flexibility.
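For sponsors thinking about how to operationalize this workflow internally, a rough sketch of what a credibility-assessment record might look like is shown below. The `CredibilityAssessment` class, the field names, the risk tiers, and the validation-scope mapping are purely illustrative assumptions for this post; they are not structures or terminology defined in the FDA draft guidance.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk tiers; the draft guidance describes model risk qualitatively."""
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class CredibilityAssessment:
    """Hypothetical record tracking the seven steps outlined in the draft guidance."""
    question_of_interest: str        # Step 1: the question the model output informs
    context_of_use: str              # Step 2: the model's specific role and scope
    risk_tier: RiskTier              # Step 3: assessed risk of the model within the COU
    plan: list = field(default_factory=list)      # Step 4: planned credibility activities
    executed: list = field(default_factory=list)  # Step 5: activities actually completed
    findings: str = ""               # Step 6: documented results and deviations
    adequate_for_cou: bool = False   # Step 7: adequacy determination for the COU

    def validation_scope(self) -> str:
        """Toy illustration of risk-based scrutiny: higher risk, deeper validation."""
        return {
            RiskTier.LOW: "internal verification and documented rationale",
            RiskTier.MEDIUM: "independent test data plus sensitivity analyses",
            RiskTier.HIGH: "prospective validation and early FDA engagement",
        }[self.risk_tier]


if __name__ == "__main__":
    assessment = CredibilityAssessment(
        question_of_interest="Can the model flag likely serious adverse events for reviewer triage?",
        context_of_use="Prioritize case reports for human review; no autonomous regulatory action.",
        risk_tier=RiskTier.MEDIUM,
        plan=["define test dataset", "measure sensitivity/specificity", "stress-test edge cases"],
    )
    print(assessment.validation_scope())
```

The specific fields matter less than the discipline they represent: each step leaves an auditable artifact, and the depth of validation scales with the model’s influence on the regulatory decision.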
Elsa: The FDA’s Internal AI Tool
In parallel with the draft guidance, the FDA has launched a generative AI tool named “Elsa” to enhance efficiency in its operations, particularly in scientific reviews. Elsa assists in summarizing adverse events, reviewing clinical protocols, generating database code, and expediting scientific evaluations. Built within Amazon Web Services’ GovCloud environment, Elsa is designed to handle sensitive government data securely.
The deployment of Elsa reflects the FDA’s commitment to integrating AI technologies to streamline regulatory processes and improve decision-making efficiency.
Industry Implications and Future Outlook
The FDA’s draft guidance and the implementation of tools like Elsa signal a significant shift toward embracing AI in regulatory science. By providing a clear framework for the use of AI in regulatory decision-making, the FDA aims to foster innovation while maintaining rigorous standards for safety and efficacy.
Stakeholders in the pharmaceutical and biotechnology industries are encouraged to engage with the FDA early in the development of AI models intended for regulatory use. This proactive collaboration can facilitate the integration of AI technologies into drug development pipelines, potentially accelerating the availability of safe and effective therapies to patients.
As the FDA continues to refine its approach to AI integration, ongoing dialogue with industry, academia, and other stakeholders will be crucial in shaping a regulatory environment that balances innovation with public health protection.