The FDA’s role in regulating AI-based medical devices has evolved significantly since the agency first authorized such a device in 1995. With more than 1,000 AI-based medical devices now authorized, submissions for these technologies have surged, reflecting a ten-fold increase since 2020. This rapid growth underscores the accelerating integration of AI into healthcare.
To address the complexities of AI in medical products, the agency has sharpened its focus in recent years. In 2022, it released a final guidance document on clinical decision support (CDS) software that included provisions for AI-based applications. While the guidance aimed to clarify regulatory expectations, it also sparked industry debate over its implications for innovation and compliance.
In March 2024, the FDA further advanced its regulatory framework by publishing a paper detailing how the agency and its centers are working to harmonize their approaches to regulating AI in medical products. This effort is crucial in ensuring that AI technologies meet safety and efficacy standards while fostering innovation to improve patient outcomes.
These developments highlight the FDA’s commitment to balancing the potential of AI with robust oversight to protect public health.
In a special communication published in JAMA, Haider Warraich and colleagues from the FDA emphasized the need for ongoing efforts to ensure that AI technologies in biomedicine and healthcare are evaluated rigorously in the real-world settings where they are deployed. They wrote:
“Historic advances in AI applied to biomedicine and health care must be matched by continuous complementary efforts to better understand how AI performs in the settings in which it is deployed.”
This perspective highlights that responsibility for ensuring the success of AI in healthcare extends beyond the FDA alone. Warraich and his co-authors advocate a comprehensive approach in which the entire consumer and healthcare ecosystem adapts to the rapid pace of AI innovation.
Such a broad-based strategy is critical for addressing the complexities of AI performance in diverse clinical environments, ensuring that AI systems are not only effective but also equitable and safe. This vision underscores the importance of collaboration among regulators, healthcare providers, technologists, and consumers to fully realize AI’s transformative potential in healthcare.
FDA officials have highlighted several key areas requiring greater focus and understanding to effectively regulate and integrate AI technologies in healthcare. These priorities aim to address the rapidly evolving landscape of AI and ensure its responsible and impactful use. The outlined areas include:
- Global Regulation: Establishing harmonized frameworks across nations to address the global nature of AI development and deployment.
- Keeping Pace with AI Changes: Continuously updating regulatory standards to reflect the fast-paced advancements in AI technologies.
- Flexible Approaches: Implementing adaptable regulatory practices that account for the iterative nature of AI, particularly in machine learning models.
- Use in Medical Product Development: Encouraging the integration of AI into the development life cycle of medical products to improve efficiency and innovation.
- Preparing for Unknowns: Building systems and policies that can address unforeseen challenges posed by future AI capabilities.
- Life Cycle Management: Monitoring AI-based devices throughout their entire life cycle, from initial deployment to updates and maintenance.
- Responsibilities of Regulated Industries: Ensuring that companies deploying AI systems adhere to clear, enforceable standards for safety, efficacy, and accountability.
- Robust Supply Chains: Supporting stable and secure supply chains to mitigate risks associated with disruptions, particularly for AI-reliant medical technologies.
- Incorporating Ideas from Startups and Academia: Leveraging the innovative contributions of startups and academic research to drive forward-thinking solutions in AI.
- Prioritizing Health Outcomes Over Financial Returns: Shifting the focus of AI development and deployment from profit motives to delivering measurable improvements in patient health and well-being.
FDA officials cautioned that without sustained efforts to address the challenges of AI in healthcare, the technology risks falling short of its transformative potential. Like other general-purpose technologies in healthcare, AI could fail to meet expectations or, worse, cause significant harm. The risk is particularly pronounced if models are not continuously monitored: left untended, a model’s performance can degrade over time as the data it encounters in practice drifts away from the data it was trained on, as the sketch below illustrates.
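To make that monitoring idea concrete, here is a minimal sketch of post-deployment performance tracking for a binary risk classifier, written in Python. It assumes adjudicated outcomes eventually become available for scored cases; the baseline AUC, window size, and alert threshold are illustrative values, not figures drawn from FDA guidance.

```python
# Minimal sketch of post-deployment performance monitoring for a binary
# classifier (e.g., a risk-prediction model). All names and thresholds
# here are illustrative assumptions, not an FDA-prescribed method.
from collections import deque

from sklearn.metrics import roc_auc_score

BASELINE_AUC = 0.85  # hypothetical AUC measured at premarket validation
ALERT_DROP = 0.05    # alert if rolling AUC falls this far below baseline
WINDOW = 500         # number of recent cases in the rolling window

scores = deque(maxlen=WINDOW)  # model risk scores for recent cases
labels = deque(maxlen=WINDOW)  # adjudicated outcomes, once available

def record_case(score: float, outcome: int) -> None:
    """Append one scored case and its eventual ground-truth outcome."""
    scores.append(score)
    labels.append(outcome)

def performance_degraded() -> bool:
    """True if the rolling AUC has dropped past the alert threshold.
    AUC is only defined once both outcome classes appear in the window."""
    if len(set(labels)) < 2:
        return False
    rolling_auc = roc_auc_score(list(labels), list(scores))
    return rolling_auc < BASELINE_AUC - ALERT_DROP
```

In practice such a monitor would feed an alerting pipeline rather than return a boolean, but even this simple loop captures the core requirement: degradation cannot be detected unless deployed predictions are continuously compared against real outcomes.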
Additionally, a focus on financial returns rather than clinical outcomes could divert attention from patient care, leading to suboptimal or even detrimental effects on health. This underscores the critical need for ongoing vigilance, robust life cycle management, and a commitment to aligning AI development with healthcare priorities so that products remain safe and effective for patients.
The regulation of AI in healthcare requires alignment with global standards, as the interconnected nature of technology spans international boundaries. The FDA is actively participating in global initiatives, including working groups within the International Medical Device Regulators Forum (IMDRF) and the International Council for Harmonisation (ICH). These collaborations aim to establish harmonized regulatory approaches to address the unique challenges of AI in healthcare.
The authors emphasized the need for a flexible, risk-based regulatory approach to address the diversity of AI models used in healthcare. These models are integrated into medical devices with varying degrees of FDA involvement, requiring tailored oversight. To keep pace with AI’s rapid evolution, the FDA supports an adaptive, science-based regulatory framework through initiatives like the Software Precertification Pilot Program and the total product life cycle approach for medical devices.
However, the sheer volume of AI-driven changes means that industry stakeholders must enhance their assessment and quality management processes. The regulatory ecosystem must extend beyond the FDA to include collaborative efforts across the entire healthcare and technology landscape.
AI holds transformative potential in drug development and clinical research, as well as in premarket evaluation, postmarket surveillance, and real-world evidence analysis. By analyzing vast amounts of data, AI can detect patterns and anomalies, improving the ability to:
- Identify safety issues or unexpected benefits.
- Address performance inefficiencies in real time.
- Provide a comprehensive life cycle analysis of medical products, synthesizing data from clinical trials, postmarket activities, and patient feedback.
This proactive approach could lead to faster identification of adverse events, enabling timelier interventions and enhancing patient safety.
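As a toy illustration of the anomaly-detection idea, the sketch below flags an unusual spike in weekly adverse-event report counts using a simple Poisson tail test. It is a stand-in for, not an implementation of, the validated disproportionality methods used in real pharmacovigilance (such as MGPS or BCPNN), and every number in it is made up.

```python
# Toy spike detection over weekly adverse-event report counts, using a
# Poisson surprise test against a historical baseline. Illustrative only.
import math

def weekly_spike(history: list[int], this_week: int,
                 p_cutoff: float = 0.001) -> bool:
    """Flag this_week as anomalous if its upper-tail Poisson probability
    under the historical mean rate falls below p_cutoff."""
    lam = sum(history) / len(history)  # baseline expected weekly count
    # P(X >= this_week) for X ~ Poisson(lam), by direct summation
    tail = 1.0 - sum(
        math.exp(-lam) * lam**k / math.factorial(k)
        for k in range(this_week)
    )
    return tail < p_cutoff

# Example: a stable baseline of roughly 4 reports per week, then a jump to 15
print(weekly_spike([3, 5, 4, 4, 6, 3, 4, 5], 15))  # True for these numbers
```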
One of the unique challenges in AI regulation is the oversight of generative AI models, such as large language models (LLMs). These technologies introduce unknown risks that require specialized tools and methods for evaluation. The authors stressed that AI’s performance must be assessed in the specific environments where it is deployed. This could necessitate an information ecosystem similar to intensive care unit monitoring, ensuring real-time evaluation and adaptability.
The evolving nature of AI presents a regulatory dilemma. Unlike traditional medical devices, which remain consistent throughout their life cycle, AI-enabled products change dynamically as they learn and adapt. This variability requires recurrent evaluation of AI models, a task that may exceed the capacity of current regulatory frameworks; one simple form such evaluation could take is sketched below.
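One building block for such recurrent evaluation might be a predeployment gate that compares a candidate model update against the incumbent on a locked reference test set. The sketch below assumes scikit-learn-style classifiers; the interface and the tolerance margin are hypothetical design choices, not a regulatory requirement.

```python
# Sketch of a recurrent-evaluation gate: an updated model may replace the
# deployed one only if it matches or beats it on a locked reference set.
# Interfaces and thresholds are hypothetical.
from sklearn.metrics import roc_auc_score

def approve_update(incumbent, candidate, X_ref, y_ref,
                   margin: float = 0.01) -> bool:
    """Allow deployment only if the candidate's AUC on the reference set
    is within `margin` of the incumbent's, or better."""
    incumbent_auc = roc_auc_score(y_ref, incumbent.predict_proba(X_ref)[:, 1])
    candidate_auc = roc_auc_score(y_ref, candidate.predict_proba(X_ref)[:, 1])
    return candidate_auc >= incumbent_auc - margin
```

A fixed reference set guards against silent regressions, though it cannot by itself detect drift in the deployment environment, which is why the rolling monitor sketched earlier remains necessary.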
The authors concluded that regulatory success depends on a system-wide commitment involving regulators, the healthcare industry, academia, and external stakeholders. By adopting innovative tools and frameworks, ensuring cross-sector collaboration, and maintaining a focus on patient-centered outcomes, the healthcare ecosystem can address the challenges and opportunities posed by AI-enabled technologies.
AI-based medical products hold significant potential for addressing shortages in areas like generic drugs and low-cost medical devices by enabling anticipatory planning and quick response mechanisms. However, the reliability of AI models is critical; their vulnerability to technology outages poses a significant risk. The authors stress that minimizing these vulnerabilities must be a foundational element of AI technology design and implementation.
The task of ensuring AI safety and effectiveness across the total product life cycle is complex, particularly in diverse healthcare settings. The authors highlight the need to balance support for newcomers, such as startups, entrepreneurs, and academic institutions, who bring fresh ideas, with the resources and expertise of large technology companies. This balance is vital to fostering innovation while ensuring that all AI products meet rigorous safety and performance standards.
The FDA recognizes the tension between maximizing financial returns on AI technologies and prioritizing patient health outcomes. While AI offers solutions for underserved areas, such as healthcare deserts and regions with primary care shortages, it may disrupt traditional revenue streams and employment models. The authors emphasize that many of the preventive services AI could enable are not currently profitable, underscoring the need for an intentional focus on health outcomes over financial gains.
This shift will require collaboration across sectors to address challenges such as workforce disruptions while ensuring AI’s positive societal impact.
Strong oversight by the FDA and other regulatory bodies remains essential to maintaining public trust in AI-enabled medical products. The authors stress that responsible advancement of AI technologies requires:
- Identifying and addressing irresponsible actors in the industry.
- Avoiding misleading hyperbole that overpromises AI’s capabilities.
- Developing tools for ongoing evaluation of AI’s safety and effectiveness.
While the FDA will play a central role, collaboration across regulated industries, academia, and healthcare organizations is critical to ensuring that AI achieves its transformative potential responsibly.
The evolution of AI in healthcare demands careful attention and rigorous efforts across all sectors. By prioritizing health outcomes, fostering innovation, and maintaining stringent oversight, stakeholders can navigate the complexities of AI adoption. Success will hinge on a broad coalition of expertise and resources, creating a framework that upholds safety, equity, and trust while driving meaningful improvements in patient care.