AI at the Crossroads of Care: What the 2025 AI Action Plan Means for Healthcare and Public Health
The White House’s 2025 AI Action Plan marks a pivotal moment in U.S. AI policy, doubling down on innovation, infrastructure, and a redefined approach to oversight. While much of the national focus has centered on defense, industry, and emerging technologies, the implications for healthcare and public health are equally urgent. AI is already reshaping care delivery and population health. The policies that guide its adoption will help determine whether that transformation builds trust or deepens inequities.
AI Moves Into the Mainstream of Health
Across the healthcare and public health landscape, AI is no longer a distant concept. It’s becoming part of daily workflows, particularly in settings where time, accuracy, and efficiency matter most. In healthcare delivery, hospitals like HCA Healthcare are implementing AI-assisted charting tools to improve nurse handoffs and reduce preventable harm. Oracle’s AI-powered agents are supporting clinicians with ambient documentation and real-time decision support, helping reduce administrative burden and streamline patient care.
Public health departments are increasingly experimenting with AI tools, often with a focus on practical, resource-aware applications. The Association of State and Territorial Health Officials (ASTHO) fielded a rapid-response survey to its Informatics and Data Modernization Network (IDMN), formerly the Informatics Directors Peer Network (IDPN), made up of leaders from state and territorial health agencies. Of the jurisdictions that responded, more than a third reported using AI in some capacity. Most described using generative AI to draft reports and communications, analyze data anomalies, support workforce planning, or streamline administrative tasks like job descriptions and grant applications.
In a recent ASTHO podcast episode, Approachable AI for Public Health, Truc Taylor, Director of Health Data Analytics at Guidehouse, reflected on why public health leaders are drawn to this moment of possibility:
“AI isn’t about replacing expertise, but about amplifying it… AI can give us better, faster situational awareness—whether that’s in tracking outbreaks, predicting resource needs, or identifying vulnerable populations.”
Globally, AI is already reshaping the way health systems respond to high-priority challenges. The World Economic Forum and recent academic reviews have documented how AI is being used to improve maternal and neonatal outcomes, accelerate disease diagnosis, predict outbreaks, and optimize health resource distribution in low- and middle-income countries. In clinical medicine, AI is helping identify cancer, stratify risk among complex patient populations, and reduce diagnostic delays.
Both globally and locally, these use cases point to a growing recognition that AI can play a critical role in improving health outcomes. They also raise essential questions about our readiness to meet this moment. The success of these tools depends not just on their design, but on the infrastructure, governance, and capacity that surround them.
Health Leaders See Promise, but Caution Remains
As AI moves from concept to implementation, leaders across healthcare and public health are weighing its benefits with measured caution. The American Medical Association’s Augmented Intelligence Research Report reveals a profession that’s increasingly curious, but still navigating real concerns. Physician use of AI nearly doubled from 38% in 2023 to 66% in 2024. Yet only 36% of respondents report feeling more excited than concerned about its role in clinical care.
“Physicians still have key needs to build trust and advance adoption of AI.”
Many physicians are optimistic about how AI could help streamline care. Nearly 7 in 10 physicians say AI offers at least some advantage to their ability to care for patients, and 57% view reducing administrative burden through automation as its greatest opportunity. Still, trust remains a barrier. Nearly half (47%) of physicians ranked increased oversight as the top regulatory action needed to boost confidence in adopting AI tools.
In public health, similar themes are emerging. Leaders are calling for intentional adoption that prioritizes explainability, equity, and alignment with foundational public health values. In the same ASTHO podcast episode, agency leaders and data experts spoke about AI’s potential to enhance workflows, improve surveillance, and support planning, but emphasized that these tools must complement, not replace, human expertise. Across both sectors, one message is clear: successful AI implementation and adoption will depend as much on trust and accountability as on technical capability.
How the AI Action Plan Lays the Groundwork for Health Innovation
The 2025 America’s AI Action Plan lays out a bold federal vision for AI development and adoption, centered around three strategic pillars: accelerating innovation, building foundational infrastructure, and strengthening global leadership. While much of the narrative has focused on defense and economic growth, the plan also carves out an important role for healthcare and public health.
Healthcare is one of the few sectors explicitly named for targeted federal deployment efforts, signaling a shift from theoretical potential to applied implementation:
“Launch several domain-specific efforts (e.g., in healthcare, energy, and agriculture)... to accelerate the development and adoption of national standards for AI systems and to measure how much AI increases productivity at realistic tasks in those domains.”
By creating performance benchmarks and aligning standards, the plan lays the foundation for scalable, interoperable AI tools across health systems, supporting use cases such as clinical decision support, outbreak detection, and streamlined administrative operations.
The plan also places a strong emphasis on workforce readiness. Its approach to AI education goes beyond engineering and computer science to include domain-specific training for clinicians, public health professionals, health IT staff, and data stewards. As AI tools move from pilot to practice, this kind of cross-disciplinary capacity building will be critical to ensuring systems are not only functional but trusted and adopted.
Finally, the plan’s infrastructure investments in domestic semiconductor manufacturing, data center permitting, and next-generation compute capacity can offer longer-term value for health systems. With concerns like data privacy, secure storage, and system reliability top of mind, these investments can help ensure AI is deployed in a way that’s both sustainable and trusted.
When Innovation Outpaces Oversight
While the 2025 AI Action Plan strengthens the national AI infrastructure, it also rolls back several key protections established under Executive Order (EO) 14110, signed just two years prior. These changes have raised concerns, particularly in sectors like healthcare and public health, where algorithmic decisions directly influence diagnoses, treatment plans, and access to services.
The most notable shift is the revocation of EO 14110 itself, which had prioritized civil rights protections, algorithmic fairness, and safeguards against discriminatory outcomes. The new plan also prohibits federal agencies from requiring the use of diversity, equity, and inclusion (DEI) filters or misinformation classifiers in AI tools developed with federal funds. These measures, once seen as essential for reinforcing public trust, are now absent from the policy framework.
Additionally, the Action Plan introduces a funding penalty for states that enact AI regulations deemed too burdensome, an approach that could complicate state-level public health governance where local context often matters most.
“The Plan prohibits federal agencies from requiring AI developers to use DEI filters or misinformation classifiers… and it imposes a funding penalty on states that create their own AI rules deemed burdensome to business.”
Critics argue that such moves risk weakening critical guardrails, especially in health, where equity, safety, and transparency are foundational. As noted in Health Affairs, public health leaders emphasize the need for transparency in how AI tools are developed and used, warning that without clear governance, these technologies may erode public trust and widen existing disparities.
This concern is explored in more detail in LegisLogiq’s companion analysis by Farhan Chughtai, which examines how the plan’s deregulatory shifts create both momentum and ambiguity across sectors.
These changes signal a broader policy tension at the heart of AI governance: how do we accelerate innovation without undercutting the ethical safeguards that protect real people?
What Health Leaders Should Be Asking
As national and state AI strategies move forward, health leaders must be ready, not just technologically, but ethically and operationally. Proposed legislation like Pennsylvania’s House Bill 2394, which would create an AI registry and establish guidelines for transparent and equitable use, signals that governance is no longer optional. The same applies across healthcare delivery and public health systems.
The policies are shifting, the tools are evolving, and the pressure to act is rising. The real challenge? Knowing where to start.
Here are the questions leaders in these sectors should begin asking today:
Do we have a governance model for AI use in our system or agency?
Are clinicians, staff, and community partners meaningfully involved in AI decision-making?
How are we addressing potential bias in data or algorithms?
Is our infrastructure ready to support real-time AI applications?
Are we aligned with both state and federal policy changes, and prepared to adapt?
A Call for Balance: Innovation with Intention
AI is no longer a distant frontier in healthcare and public health. It’s here, shaping everything from documentation to diagnostics, outbreak surveillance to resource allocation. As AI becomes more embedded in decision-making, however, the stakes grow higher.
Success in this space will not be measured by speed of adoption alone. It will hinge on our ability to build trust, prioritize ethics, and design governance structures that keep pace with both innovation and impact. Getting AI right in health means staying curious, acting with intention, and remaining accountable to the people and populations we serve.
At LegisLogiq, we help organizations navigate the fast-moving world of AI regulation with clarity, creativity, and foresight. Whether you’re exploring policy compliance, advocacy, or looking to redefine your AI strategy, our team is here to help. From messaging guidance to partnership opportunities, contact us and someone from our team will connect with you.