FDA Seeks Public Input on Evaluating AI-Enabled Medical Devices
On September 30, 2025, the U.S. Food and Drug Administration (FDA) issued a new Request for Comments (RFC) seeking public input on how to effectively measure and evaluate AI-enabled medical devices across their total product lifecycle. This marks a key step in the FDA’s efforts to modernize its regulatory frameworks to keep pace with rapidly evolving AI technologies in healthcare.
This RFC builds on the FDA’s longstanding work through its Digital Health Center of Excellence (DHCoE) and its broader framework for Good Machine Learning Practice (GMLP). As AI-powered devices become more adaptive, autonomous, and complex, the FDA is looking to the public, especially medical device manufacturers, health systems, patients, and AI developers, for real-world insights on how to evaluate safety, effectiveness, and performance over time.
This initiative reflects a growing trend among federal agencies to move from static regulatory reviews to dynamic, lifecycle-based oversight for AI technologies. It also complements other cross-agency efforts under the Administration’s AI Action Plan and Executive Order 14179, which emphasize conditional approvals, safe harbors, and modernized testing approaches for AI use in high-risk sectors like healthcare.
Public Comments are due by December 4, 2025.
Why This Matters
Unlike traditional medical devices that are relatively static, AI-enabled devices often learn and evolve in real-time, adapting to new data and contexts. FDA’s legacy review processes, geared toward fixed-function tools, are not well-suited to evaluate continuously updating algorithms that power, for example, clinical decision support systems or imaging diagnostics.
As FDA notes, “traditional approaches to performance evaluation and monitoring may not fully apply to AI-enabled medical devices.” Without modernized tools for measurement, developers and regulators alike face uncertainty in how to validate these systems across their lifecycle, from development to deployment, retraining, and beyond.
For healthcare companies, generative AI opens the door to game-changing advances in patient care, public health, and medical innovation. But as these technologies evolve, so do the expectations around proving that AI-enabled medical devices stay safe, effective, and reliable, not just at launch, but as they learn and operate in real-world settings over time.
FDA is explicitly asking how it can better define and measure:
Performance of continuously learning models in real-world settings;
Reliability across diverse patient populations and clinical conditions;
Comparability across different models and deployment environments;
Drift detection, post-market surveillance, and change control protocols.
These questions are core to building trustworthy AI in healthcare and could lay the groundwork for future pathways like conditional or tiered approvals, real-world performance monitoring, and adaptive AI regulatory frameworks.
What FDA Is Asking For
The RFC outlines several areas where public feedback is urgently needed:
Evaluation Across the Lifecycle
FDA seeks insights on how AI-enabled devices can be measured at all stages: during development and pre-market review; after deployment in real-world settings; and as they are updated, retrained, or modified over time.
Performance Metrics and Benchmarks
What kinds of quantitative and qualitative measures can appropriately assess safety and effectiveness? How should these metrics account for evolving data, edge cases, and shifting clinical workflows?
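To make the subgroup-reliability question concrete, the sketch below stratifies two standard clinical metrics, sensitivity and specificity, by patient subgroup. This is purely an illustration of the kind of disaggregated performance reporting commenters might propose; the function names and data shape are hypothetical, and nothing here reflects an FDA-endorsed methodology.

```python
def sensitivity_specificity(records):
    """records: (y_true, y_pred) pairs, where 1 = condition present.
    Returns (sensitivity, specificity); None if a class is absent."""
    tp = sum(1 for t, p in records if t == 1 and p == 1)
    fn = sum(1 for t, p in records if t == 1 and p == 0)
    tn = sum(1 for t, p in records if t == 0 and p == 0)
    fp = sum(1 for t, p in records if t == 0 and p == 1)
    sens = tp / (tp + fn) if (tp + fn) else None
    spec = tn / (tn + fp) if (tn + fp) else None
    return sens, spec

def by_subgroup(records):
    """records: (group, y_true, y_pred) triples.
    Reports metrics separately for each patient subgroup so that
    aggregate performance cannot mask a poorly served population."""
    groups = {}
    for g, t, p in records:
        groups.setdefault(g, []).append((t, p))
    return {g: sensitivity_specificity(rs) for g, rs in groups.items()}
```

A benchmark built this way surfaces the gap between overall accuracy and subgroup accuracy directly, which is the core of the “reliability across diverse patient populations” question the RFC raises.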
Transparency and Comparability
How can the performance of different AI-enabled devices be compared and communicated, especially when algorithms are updated frequently or tailored to specific populations?
Monitoring for Drift and Degradation
FDA is particularly interested in feedback on how to detect when a model’s performance declines due to “model drift” or external changes (e.g., new diseases, shifting demographics, or healthcare practices).
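As one illustration of what commenters might mean by drift monitoring, the sketch below computes the Population Stability Index (PSI), a widely used heuristic for comparing a live distribution of model scores against a development-time baseline. The 0.25 threshold mentioned in the comments is an industry rule of thumb, not an FDA requirement, and this is an assumed, simplified implementation rather than any agency-prescribed method.

```python
import math

def psi(expected, actual, bins=10, eps=1e-6):
    """Population Stability Index between a baseline ('expected') and a
    live ('actual') distribution of model scores. Values above roughly
    0.25 are commonly read as significant drift (a rule of thumb, not a
    regulatory threshold)."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a constant baseline

    def proportions(scores):
        counts = [0] * bins
        for x in scores:
            # clamp each score into a baseline-defined histogram bin
            i = min(max(int((x - lo) / width), 0), bins - 1)
            counts[i] += 1
        return [max(c / len(scores), eps) for c in counts]  # eps avoids log(0)

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

In a post-market setting, a sponsor might compute an index like this on a rolling window of incoming scores and flag the device for review whenever it crosses a pre-registered threshold, which is one concrete shape a change control protocol could take.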
Stakeholder Roles
The agency wants input on how patients, clinicians, device sponsors, and health IT developers can participate in ongoing oversight, especially when models evolve after market entry.
Call To Action
“Building risk mitigation directly into your AI development roadmap will be critical.”
Organizations across the AI and healthcare ecosystem, especially digital health companies, AI developers, medical device manufacturers, and hospital systems, should view this RFC as a critical opportunity to help craft the benchmarks, monitoring tools, and trust frameworks that will shape the FDA’s evolving regulatory approach to AI in healthcare.
Contributing thoughtful, evidence-backed, and practical recommendations can:
Influence future AI risk classification and approval pathways;
Encourage innovation-friendly oversight that supports continuous improvement;
Promote safe and equitable deployment of AI across patient populations.
As AI continues to redefine what’s possible in healthcare, from diagnostics to personalized treatment, regulation must evolve in step. The FDA’s call for input isn’t a formality; it’s a signal that the agency is actively listening to innovators, clinicians, and startups to help build a smarter, more agile oversight framework.
From Compliance to Influence
The FDA’s RFC offers industry stakeholders an opportunity to help set the ground rules for how adaptive, AI-enabled medical technologies will be reviewed, monitored, and trusted. To seize this moment, companies need more than technical expertise. They need strategic insight into how the FDA thinks about lifecycle oversight and risk mitigation, and into how to position their AI solutions as part of the path forward.
At Legislogiq, we work with startups and AI innovators to go beyond regulatory checklists and step into the driver’s seat of AI policy. Beyond filing, we help clients build coalitions with aligned innovators and facilitate follow-up engagement with FDA officials, ensuring your perspective continues to shape ongoing policy conversations well after the comment period closes.