AIM Media House

Have AI Regulations in Healthcare Fallen Behind?

AI is embedded across clinical workflows in US healthcare, but federal regulation has not caught up. States are stepping in — with fragmented results.

Artificial intelligence has moved from the edges of healthcare to its operational core. It is embedded in clinical decision support, diagnostics, administrative workflows, and increasingly in coverage determinations by health insurers. The regulatory infrastructure that governs it has not kept pace.

According to a legal alert published by Holland & Knight LLP, federal legislation has not caught up with the pace of AI deployment in healthcare. Congress has not passed any significant legislation directly addressing AI in healthcare, leaving state legislatures to act first.

More than 250 AI-related healthcare bills were introduced across state legislatures in 2025, with a consistent focus on patient disclosure, bias prevention, clinician accountability, and restrictions on AI use in insurance coverage decisions.

The risk of that state-level activity is fragmentation. Different bias and discrimination requirements across jurisdictions create a compliance environment that could constrain healthcare organisations operating nationally, forcing them to maintain different standards for the same technology depending on where it is deployed.

The Trump Administration released its National Policy Framework for Artificial Intelligence on March 20, 2026, asking Congress to establish a single federal regulatory approach with specific guardrails on child safety, free speech, intellectual property, workforce impacts, and national security.

The framework also seeks to codify elements of the December 11, 2025 executive order on AI, which sought to preempt state-level activity. Executive orders, however, do not carry the force of law needed to override state statutes.

Within the Department of Health and Human Services, agency-level action is advancing incrementally. The FDA is clarifying how existing authorities apply to AI-enabled technologies, expanding low-risk pathways for digital health products, and moving toward lifecycle oversight that includes post-deployment monitoring and performance tracking.

CMS is testing AI integration through payment models and demonstrations, including the WISeR model, which pilots AI and machine learning to support prior authorisation in traditional Medicare.

The CDC released guidance specifically on agentic research tools on March 12, 2026, addressing AI systems that autonomously plan and execute multi-step research tasks.

The guidance emphasises human oversight and clearly defined use cases. NIST continues to shape AI governance through voluntary standards and technical frameworks.

What Healthcare Organisations Are Watching

Across HHS agencies, four common priorities are emerging: auditability of AI outputs, traceability of data sources and model versioning, human oversight in clinical and public health contexts, and ongoing performance monitoring as models evolve.

The near-term trajectory points toward continued state-federal divergence, with lower-risk consumer tools facing lighter oversight and clinical decision-making applications facing greater scrutiny.

Generative AI remains a specific gap where existing frameworks do not fully address its newer use cases. For healthcare organisations already running AI across documentation and clinical support workflows, the compliance question is no longer hypothetical.

Key Takeaways

  • Federal regulation of healthcare AI is lagging behind the pace of technological deployment, with no significant legislation passed by Congress.
  • States introduced more than 250 AI-related healthcare bills in 2025, focused on patient disclosure, bias prevention, and clinician accountability.
  • Regulatory fragmentation across jurisdictions complicates compliance for healthcare organisations operating nationally.
  • The Trump Administration has called for a unified federal regulatory framework for AI, while HHS agencies continue incremental action.