
Navigating AI in Drug Regulation: A New Era of FDA Guidance

Explore the FDA's groundbreaking draft guidance on AI in drug regulation, focusing on safety, effectiveness, and quality. Learn about the seven-step risk-based framework and its implications for the biopharmaceutical industry.


Introduction

On January 6, 2025, the FDA unveiled a pioneering draft guidance on the use of artificial intelligence (AI) in regulatory decision-making for drugs and biological products. This landmark document aims to enhance the efficiency and accuracy of the drug approval process, ensuring that applications incorporating AI meet stringent safety and effectiveness standards. The FDA is accepting public comments on the draft until April 7, 2025.

The draft guidance offers recommendations for building and maintaining trust in AI systems throughout the drug product lifecycle, with a focus on safety, effectiveness, and quality. The FDA developed the document with input from a diverse community, including sponsors, manufacturers, technology developers, suppliers, and academics, as well as insights from an FDA-sponsored expert workshop hosted by the Duke-Margolis Institute for Health Policy in December 2022.

Key Highlights from the Guidance

Although largely procedural, the proposed guidance is consequential: the intersection of AI and biopharmaceuticals promises significant change, and the draft will be closely scrutinized as AI-generated data becomes more integrated into the biopharma regulatory framework.

The guidance specifically targets AI models used to produce data supporting regulatory decisions on drug safety, effectiveness, and quality. It does not cover AI models used in drug discovery or for operational efficiencies that do not impact patient safety, drug quality, or study reliability. The FDA encourages sponsors who are uncertain whether the guidance applies to their use case to engage with the agency early.

A critical component of the guidance is its seven-step risk-based credibility assessment framework, designed to evaluate AI systems based on their context of use (COU) and regulatory impact:

  1. Define the question of interest for the AI model.
  2. Define the COU for the AI model.
  3. Assess the AI model risk.
  4. Develop a plan to establish the credibility of AI model output within the COU.
  5. Execute the plan.
  6. Document the results of the credibility assessment plan and discuss deviations.
  7. Determine the adequacy of the AI model for the COU.

The framework is explicitly risk-based and iterative: the rigor of the credibility assessment should be commensurate with model risk, and shortfalls identified during execution and documentation can feed back into refinements of the model, the plan, or the COU itself. A minimal sketch of how a sponsor might track progress through the seven steps follows.
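
To make the workflow concrete, here is a purely illustrative Python sketch of how a sponsor's internal tooling might track an AI model through the seven steps. The step names paraphrase the guidance; everything else (class, field, and file names) is a hypothetical assumption, not FDA-defined terminology.

```python
from dataclasses import dataclass, field

# Illustrative only: the step names below paraphrase the draft guidance,
# while the class, field, and file names are hypothetical assumptions.
FRAMEWORK_STEPS = (
    "define question of interest",
    "define context of use (COU)",
    "assess model risk",
    "develop credibility assessment plan",
    "execute plan",
    "document results and deviations",
    "determine adequacy for the COU",
)

@dataclass
class CredibilityAssessment:
    """Tracks one AI model's progress through the seven-step framework."""
    model_name: str
    question_of_interest: str
    context_of_use: str
    completed: list = field(default_factory=list)  # (step, evidence) pairs

    def complete_step(self, step: str, evidence: str) -> None:
        """Record a finished step with a pointer to its documentation."""
        if step not in FRAMEWORK_STEPS:
            raise ValueError(f"unknown step: {step!r}")
        self.completed.append((step, evidence))

    @property
    def adequate_for_cou(self) -> bool:
        """True only once all seven steps have been completed and documented."""
        done = {step for step, _ in self.completed}
        return all(step in done for step in FRAMEWORK_STEPS)

# Hypothetical usage for a manufacturing visual-inspection model:
assessment = CredibilityAssessment(
    model_name="tablet-defect-classifier",
    question_of_interest="Are defective tablets reliably flagged?",
    context_of_use="AI flags units for human re-inspection before release",
)
assessment.complete_step("define question of interest", "protocol-v1.pdf")
print(assessment.adequate_for_cou)  # False until all seven steps are logged
```

The point of such a structure is auditability: each step carries a pointer to its documentation, mirroring the guidance's emphasis on documenting results and deviations.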

Lifecycle Management and Ongoing Challenges

The guidance emphasizes AI model lifecycle maintenance, addressing challenges like data drift, where a model's performance degrades over time as new input data diverges from the data on which the model was developed. Continuous monitoring and updating of AI models are recommended to ensure ongoing effectiveness and reliability; a simple drift-monitoring sketch follows the list below. The guidance also tackles broader AI-related concerns:

  • Dataset quality and integrity: High-quality, representative data is crucial for reliable AI outcomes.
  • Algorithmic bias: The FDA stresses the importance of bias mitigation strategies.
  • Transparency and explainability: AI models must provide clear, understandable justifications for their outputs.
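
To make the data-drift concern concrete, here is a minimal sketch of one widely used monitoring statistic, the Population Stability Index (PSI), which compares the distribution of a model input at development time against newly observed data. The guidance does not prescribe PSI or any specific metric; the thresholds in the comments are industry rules of thumb, and all names and numbers here are illustrative assumptions.

```python
import numpy as np

def population_stability_index(reference: np.ndarray,
                               current: np.ndarray,
                               n_bins: int = 10) -> float:
    """PSI between a reference distribution (e.g., development data)
    and newly observed production data for a single model input."""
    # Bin edges come from the reference data so both samples are
    # compared on the same scale; outliers are clipped into edge bins.
    edges = np.quantile(reference, np.linspace(0.0, 1.0, n_bins + 1))
    current = np.clip(current, edges[0], edges[-1])

    ref_counts, _ = np.histogram(reference, bins=edges)
    cur_counts, _ = np.histogram(current, bins=edges)

    eps = 1e-6  # avoids division by zero / log(0) for empty bins
    ref_pct = ref_counts / ref_counts.sum() + eps
    cur_pct = cur_counts / cur_counts.sum() + eps
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

# Simulated example: production inputs drift upward relative to development data.
rng = np.random.default_rng(seed=0)
development = rng.normal(0.0, 1.0, 10_000)
production = rng.normal(0.4, 1.0, 10_000)
psi = population_stability_index(development, production)
# Common rule of thumb (not from the FDA guidance): <0.1 stable,
# 0.1-0.25 moderate shift, >0.25 major shift worth investigating.
print(f"PSI = {psi:.3f}")
```

A monitoring statistic like this only flags that input distributions have shifted; deciding whether the shift degrades model performance, and whether retraining or re-validation is warranted, remains part of the lifecycle plan the guidance asks sponsors to maintain.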

Potential Questions and Issues to Address

While the draft guidance signals the FDA's commitment to integrating technological change into biopharmaceutical regulation, challenges remain. Stakeholders will need to apply the guidance's model risk matrix consistently across diverse use cases, a nontrivial task given the limited precedent and the rapidly evolving AI landscape. Key questions include the following (a toy sketch of such a matrix appears after this list):

  • Consistency in AI risk assessment: How will stakeholders interpret and apply the AI risk matrix across various scenarios?
  • Regulation of self-evolving AI models: The extent of post-approval oversight for adaptive AI systems is unclear.
  • Impact on smaller companies: Documentation and validation requirements may challenge startups and smaller biopharma companies.
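
The draft guidance frames model risk as a combination of two factors: model influence (how much the AI output contributes to the decision) and decision consequence (the harm if the decision is wrong). The toy lookup below shows one way a sponsor might encode such a matrix internally; the three-level scale and the specific cell values are illustrative assumptions, not the guidance's own assignments.

```python
# Hypothetical three-by-three risk matrix combining model influence with
# decision consequence. The cell values are illustrative assumptions; the
# consistency question above is precisely whether different sponsors would
# fill these cells in the same way for similar use cases.
LEVELS = ("low", "medium", "high")

RISK_MATRIX = {
    # (model influence, decision consequence): model risk
    ("low",    "low"):    "low",
    ("low",    "medium"): "low",
    ("low",    "high"):   "medium",
    ("medium", "low"):    "low",
    ("medium", "medium"): "medium",
    ("medium", "high"):   "high",
    ("high",   "low"):    "medium",
    ("high",   "medium"): "high",
    ("high",   "high"):   "high",
}

def model_risk(influence: str, consequence: str) -> str:
    """Look up model risk; higher risk implies more rigorous credibility work."""
    if influence not in LEVELS or consequence not in LEVELS:
        raise ValueError("influence and consequence must be low/medium/high")
    return RISK_MATRIX[(influence, consequence)]

# e.g., an AI model whose output is the sole evidence for a release decision:
print(model_risk("high", "high"))  # -> "high"
```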

Industry Engagement and Next Steps

The FDA positions this guidance as the start of an ongoing dialogue, encouraging early engagement to discuss AI model risk and credibility assessment plans. Various engagement options are available, including INTERACT meetings for early-stage discussions, Pre-IND meetings for investigational new drug applications, and programs for AI and digital tool integration.

Conclusion

The publication of this draft guidance marks the beginning of a process, not its end. The FDA encourages early engagement to discuss AI models in drug and biologic products, emphasizing the importance of setting expectations for credibility assessments and surfacing potential challenges. While public feedback will shape the framework's future, the guidance reflects the FDA's commitment to incorporating AI into regulatory processes without relaxing safety and reliability standards.

Summary

  • The FDA released draft guidance on AI in drug regulation, focusing on safety, effectiveness, and quality.
  • A seven-step risk-based framework evaluates AI systems based on their context of use and regulatory impact.
  • Continuous monitoring and updating of AI models are recommended to address challenges like data drift.
  • The guidance encourages early engagement with the FDA to discuss AI model risk and credibility assessment plans.
  • The FDA's commitment to integrating AI into regulatory processes is clear, with ongoing dialogue and public feedback shaping future developments.