Trends, Challenges, and Opportunities in Health Tech AI
- Aaron Hillman
- Mar 3

Why Operationalizing Governance Has Become the Industry’s Central Conversation
By Charlotte Kalafut, CEO & Co-Founder, Asher Informatics PBC
Published by Asher Informatics • asherinformatics.com
The pace of change in health tech AI over the last 18 months has been staggering. Over 1,250 AI-enabled medical devices now hold FDA authorization. Ambient AI scribes have become the fastest technology adoption in healthcare history. Agentic AI systems are entering clinical workflows. And a regulatory wave spanning federal, state, and international rules is reshaping the obligations of every organization that deploys these tools.
For executives navigating this landscape, the question is no longer whether to adopt AI. It is how to manage, monitor, and govern what has already been adopted. That shift, from adoption to operationalization, is the defining strategic challenge in health tech today.
This article surveys the trends driving that shift, the challenges they are creating, and the opportunities available to leaders who act now.
The Trends: Where Health Tech AI Stands Today
1. The FDA-cleared AI device market has hit critical mass
As of mid-2025, the FDA lists over 1,250 AI-enabled medical devices authorized for marketing in the United States. In 2025 alone, a record 295 new AI/ML devices received clearance. Roughly 75 to 80 percent are in radiology and medical imaging, with cardiology accounting for about 10 percent and the remainder spanning neurology, pathology, and other specialties.
$29 billion invested in healthcare AI in 2024.
These are not pilot programs. These are tools running in production, at scale, making or informing clinical decisions every day—helping radiologists catch findings at 2 AM, triaging stroke cases so patients reach the right care faster, and identifying arrhythmias in seconds.
2. Ambient AI is the fastest technology adoption in healthcare history
Ambient AI scribes, tools that listen to patient-clinician conversations and auto-generate clinical documentation, generated $600 million in revenue in 2025, growing 2.4 times year over year. A study published in early 2026 found that nearly two-thirds of U.S. hospitals on Epic systems (62.6 percent) have adopted ambient AI tools, representing roughly 1,744 hospitals.
Kaiser Permanente rolled out Abridge’s ambient solution across 40 hospitals and 600 medical offices—the largest generative AI rollout in healthcare and their fastest technology implementation in over 20 years. A JAMA Network Open study found that clinician burnout decreased from 52 percent to 39 percent after just 30 days of ambient scribe use. The clinical and operational benefits are measurable and real.
3. AI is expanding well beyond FDA-cleared devices
Generative AI is now used for patient messaging, prior authorizations, referral letters, and in-basket management. Agentic AI—systems that execute multi-step tasks with limited human supervision—is entering medication management, care coordination, and operational workflows. Even the FDA itself deployed agentic AI capabilities for its own staff in December 2025.
Under the 21st Century Cures Act, many of these tools are explicitly excluded from FDA device oversight because they are classified as clinical decision support that a clinician can independently review. The result is an enormous and fast-growing universe of clinical AI that has never been through any form of regulatory review.
4. Regulation is arriving—fast and from multiple directions
The FDA issued comprehensive draft guidance in January 2025 applying a Total Product Lifecycle approach to AI-enabled device software. The Colorado AI Act—one of the first comprehensive state AI laws—explicitly holds deployers, not just vendors, liable for the AI systems they operate. Section 1557 of the ACA now obligates health systems that use clinical algorithms to assess them for discriminatory bias. The Joint Commission has signaled it will incorporate AI governance standards into accreditation reviews. And the EU AI Act is already in effect.
This is no longer a “somebody should do something” conversation. This is a “you are already liable” conversation.
5. The conversation has shifted from adoption to governance
Advocate Health evaluated over 225 AI solutions and went live with 40 use cases. That level of deployment is increasingly common. The question on every executive’s mind is no longer whether to adopt—it is how to govern what is already live. Operationalizing governance has become the central strategic challenge in health tech AI.
The Challenges: What Those Trends Are Creating
Each of the trends above is producing real, specific challenges that most organizations have not yet solved.
AI models degrade—and almost nobody is watching
Research published in Nature’s Scientific Reports has shown that 91 percent of machine learning models experience temporal quality degradation after deployment. Supporting research from JAMA Network Open, studying over 143,000 patients across seven hospitals, demonstrated that data shifts from changing demographics, equipment, and clinical practices substantially degraded AI performance—particularly during the COVID-19 pandemic.
84% of healthcare CIOs report significant blind spots in monitoring their clinical AI.
Most organizations have no independent mechanism to detect when a product starts underperforming. Model drift is not a theoretical risk—it is a near certainty. The question is whether you will know when it happens.
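Detecting this kind of drift does not require exotic tooling. A common starting point is the population stability index (PSI), which compares the distribution of a model input or output score between a baseline window (such as the data used at validation) and a recent production window. The sketch below is a generic illustration of the technique, not any specific vendor's method; the 0.2 alert threshold is a widely used rule of thumb, not a regulatory standard.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare the distribution of a model input or score between a
    baseline sample (`expected`) and a recent production sample (`actual`).
    PSI above ~0.2 is a common rule of thumb for meaningful drift."""
    # Bin edges are derived from the baseline distribution
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # capture out-of-range values
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # Convert to proportions, clipping zeros to keep the log finite
    e_pct = np.clip(e_counts / len(expected), 1e-6, None)
    a_pct = np.clip(a_counts / len(actual), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))
```

Run weekly against each monitored feature or model score, a check like this surfaces the gradual distribution shifts—new scanner, new patient mix, new documentation habits—that the research above identifies as the dominant failure mode.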
FDA clearance creates a false sense of security
Over 97 percent of AI medical devices reach market through the 510(k) pathway, which requires demonstrating substantial equivalence to a predicate device—not independent clinical validation. A 2025 JAMA study found that nearly half of FDA decision summaries for AI devices failed to report study designs, and fewer than 2 percent cited randomized clinical trials.
When an organization deploys an FDA-cleared AI product, it is relying on the manufacturer’s performance claims tested in the manufacturer’s environment. How the product performs in a different clinical setting, with a different patient population and different equipment, is an open question. And critically, FDA clearance does not satisfy the legal standards being set by state laws like Colorado’s AI Act. These are complementary frameworks, not substitutes.
Vendor monitoring has an inherent conflict of interest
The company selling an AI product is the same company reporting on how well it performs. That is not how healthcare handles any other regulated product. Medical devices have independent testing; pharmaceuticals have independent trials. General enterprise MLOps tools are not a substitute either—they were not designed for HIPAA data controls, DICOM imaging data, HL7/FHIR clinical workflows, or Joint Commission and CMS requirements.
Agentic AI multiplies risk
Because agentic AI systems operate autonomously across multiple steps, a single error can trigger a chain of incorrect actions—and accountability becomes unclear: does it rest with the clinician, the facility, or the developer? The industry does not yet have governance frameworks purpose-built for autonomous, multi-step clinical AI.
The equity gap is widening
An AI model that performs well at a large academic medical center may perform very differently at a 50-bed rural hospital. Different patient demographics, different equipment, different imaging protocols, different disease prevalence. Natural anatomical variations across demographic groups can be misinterpreted as pathological by AI systems trained on non-representative datasets.
The organizations most at risk are the ones that serve our most vulnerable populations—the rural hospitals, the community health centers, the federally qualified health centers. They are deploying the same products but do not have the infrastructure to monitor them. That is a health equity problem.
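One way to make this gap visible operationally is to stratify a routine performance metric by site or demographic group rather than reporting a single aggregate, so that a subgroup decline cannot hide inside a healthy average. A minimal illustrative sketch (the function and the sensitivity metric are chosen for demonstration; real programs track several metrics per subgroup):

```python
from collections import defaultdict

def subgroup_sensitivity(labels, preds, groups):
    """Compute sensitivity (true-positive rate) separately for each
    group label, so performance gaps surface instead of averaging away."""
    tp, fn = defaultdict(int), defaultdict(int)
    for y, p, g in zip(labels, preds, groups):
        if y == 1:  # only positive cases contribute to sensitivity
            if p == 1:
                tp[g] += 1
            else:
                fn[g] += 1
    return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}
```

The same stratification works across facilities: a critical-access hospital's numbers get their own row instead of being drowned out by a flagship site's volume.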
Most organizations have governance on paper but not in practice
Real governance operates on four levels: a governance framework (authority and accountability), policies (organizational commitments), processes (repeatable workflows), and standard operating procedures (step-by-step instructions). Most organizations have the top two layers. Very few have operationalized the bottom two—and that is where governance actually lives or dies.
The Opportunities: Why This Moment Matters for Leaders
The cybersecurity playbook exists—and it works
In the early 2010s, hospitals rushed to digitize without building the security infrastructure to protect their networks. Then WannaCry hit in 2017, affecting approximately 80 NHS trusts and cancelling thousands of appointments and surgeries. Healthcare cybersecurity is now a $20 billion market—built because the industry learned that generic IT security was not sufficient for healthcare.
The same pattern is playing out with AI governance. The organizations that recognized the cybersecurity gap early and invested built competitive advantages that lasted a decade. The same will be true for AI governance.
First movers on governance will have a regulatory advantage
The regulatory landscape is complex but convergent. The FDA’s Total Product Lifecycle approach, state deployer-liability laws, ACA Section 1557, and Joint Commission signaling are all pointing toward a common set of expectations: independent monitoring, bias assessment, risk management documentation, and operational accountability. Organizations that build this infrastructure before it is mandated will be well positioned. Organizations that wait will be scrambling—and that is always more expensive and more disruptive.
Independent, local AI monitoring is now technically achievable
One reason governance has historically been difficult is that monitoring clinical AI required building custom data science infrastructure. That is no longer the case. Purpose-built, healthcare-specific platforms now make it possible to do continuous performance monitoring, bias detection, drift analysis, and regulatory documentation without requiring an in-house data science team.
Research demonstrates that complex AI systems seldom fail suddenly. Most failures emerge over time, and their emergence is detectable before full manifestation—if the right monitoring is in place. The science and the tools exist. The question is adoption.
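As a sketch of what "detectable before full manifestation" can look like in practice, consider a rolling-window check that compares a routinely collected metric (for example, agreement between AI flags and clinician reads) against the baseline established at go-live. The class name, window size, and margin below are illustrative assumptions, not any particular platform's design.

```python
from collections import deque

class RollingMetricMonitor:
    """Track a model quality metric over a sliding window and flag when
    its rolling average falls a set margin below the go-live baseline."""
    def __init__(self, baseline, window=8, margin=0.05):
        self.baseline = baseline
        self.margin = margin
        self.values = deque(maxlen=window)  # oldest values age out

    def observe(self, value):
        """Record one period's metric; return True if it is time to investigate."""
        self.values.append(value)
        avg = sum(self.values) / len(self.values)
        return avg < self.baseline - self.margin
```

A gradual decline trips the alert well before the metric collapses, which is exactly the early-warning behavior the research on slow-emerging failures describes.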
Governance is a competitive differentiator
Governance done well is not overhead—it is a signal of organizational maturity. Health systems that can demonstrate independent AI monitoring, documented bias assessment, and operational governance will have an advantage in payer negotiations, accreditation reviews, community trust, and recruiting. Clinicians want to work in environments where the tools they rely on are validated and monitored.
Democratizing governance solves the equity problem
If a chest X-ray AI drifts in performance at a major teaching hospital, someone will probably catch it. If the same product drifts at a critical-access hospital in rural Pennsylvania—where one radiologist may be covering three sites—who catches it?
Proper governance and independent monitoring should not be a luxury available only to well-resourced systems. The same way cybersecurity tools became accessible to organizations of all sizes, AI governance needs to be democratized. This is both a moral imperative and a market opportunity.
The Window Is Now
We have a window right now—before the first major AI-related patient safety incident, before the first state attorney general enforcement action, before the first accreditation finding—to build this infrastructure the right way.
The trends are clear: clinical AI is here at scale, it is expanding beyond regulatory oversight, and the governance conversation has moved from theoretical to operational. The challenges are real: model drift, regulatory complexity, equity gaps, and the chasm between governance on paper and governance in practice. But the opportunities are enormous for leaders who act now.
Three questions every health tech executive should be asking today:
Who is independently monitoring our AI systems?
How will we know when performance degrades?
And are we ready for the regulatory requirements that are already in effect?
About the Author
Charlotte Kalafut is CEO and Co-Founder of Asher Informatics PBC, a Pittsburgh-based Public Benefit Corporation building independent governance and monitoring solutions for clinical AI. Asher Informatics serves healthcare organizations of all sizes—from large health systems to rural hospitals and community health centers—with purpose-built tools for AI lifecycle management. Learn more at asherinformatics.com.



