The 120-Day Clock Is Ticking: What California's AI Executive Order Means for Health System Leaders
California just changed the rules for healthcare AI procurement. Here's what CIOs, CMIOs, and governance leaders need to do before the certification framework lands.
John F. Kalafut, PhD, Co-Founder & CTO

A White Paper from Asher Informatics Leadership - April 2026
Executive Summary
On March 30, 2026, Governor Newsom signed Executive Order N-5-26, which may prove to be one of the most significant AI governance actions by any U.S. state to date. While headlines focused on the political contrast with federal AI deregulation, health system leaders should be paying attention to something more concrete: a 120-day window in which state agencies must deliver recommendations for new AI-related certification requirements for vendors that may be used in state contracting. That window closes in late July 2026.
If your organization is deploying clinical AI, whether medical imaging algorithms, clinical decision support, ambient documentation, or workflow automation, the downstream effects may reshape how AI vendors demonstrate safety, privacy, and governance to the state, and by extension, to every health system those vendors serve.
What the Executive Order Actually Requires
EO N-5-26 directs the California Department of General Services (DGS) and Department of Technology (CDT) to develop new certification requirements for AI vendors seeking state contracts. Companies may be required to attest to, and explain, policies and safeguards across three pillars:
Bias governance. Vendors would need to demonstrate governance mechanisms to mitigate harmful bias in their AI models. The expectation is documented processes, testing frameworks, and mitigation strategies, moving well beyond broad commitments to fairness principles.
Civil rights protection. Attestations would cover safeguards against unlawful discrimination, unauthorized surveillance, and violations of civil liberties including free speech and voting rights. For healthcare vendors and health systems, this suggests a growing expectation of demonstrable equity across patient populations.
Content safety. Measures to prevent distribution of illegal content, including CSAM and non-consensual intimate imagery. While less directly applicable to clinical AI, this pillar signals the breadth of governance California expects.
The order also directs state agencies to facilitate employee access to vetted GenAI tools, publish a data minimization toolkit, and issue watermarking guidance for AI-generated content. Separately, the Government Operations Agency must recommend reforms to contractor responsibility provisions, including suspension authority for vendors that undermine privacy or civil liberties.
Why This Matters Beyond State Agencies
Here's the strategic insight that many analyses miss: EO N-5-26 is, on its face, a procurement order, but its real impact is as a market-shaping force.
California is home to 33 of the world's top 50 privately held AI companies, according to the Governor's office. When the state sets certification requirements for AI vendors, those vendors will not build separate governance programs for their California contracts and their health system contracts. They will build one program. The California standard becomes the de facto floor.
If this sounds familiar, it should: California has played this role before. The state's vehicle emissions standards, first adopted under the Clean Air Act waiver in the 1960s, were initially a California-only requirement. Over time, more than a dozen states adopted California's standards, and automakers began designing to those standards nationwide. The same pattern played out with window tinting regulations, fuel efficiency benchmarks, and zero-emission vehicle mandates. California regulates for California, and the market follows. The diagnostic imaging industry saw a similar pattern with ionizing radiation dose management and reporting statutes. There is every reason to expect the same dynamic to play out with AI governance in the broader U.S. market.
For health systems, including the University of California health systems (UCSF, UCI Health, UC Davis Health, UCLA Health, UCSD Health), the implications are layered. It is important to note that UC health systems, as part of a constitutionally autonomous entity under Article IX of the California Constitution, may not automatically be bound by gubernatorial executive orders in the same way as executive branch agencies. The Regents of the University of California operate with broad self-governance authority, and the legal distinction between UC and state executive agencies is a meaningful one.
That said, the practical reality narrows the gap. UC health systems participate in state-funded programs, use state procurement vehicles, and contract with the same AI vendors who will now need California certification. As these governance standards become embedded in vendor practices and potentially codified in legislation, the question for UC health leadership is timing: will you be ahead of the curve, or catching up?
California's Expanding Healthcare AI Regulatory Landscape
The order lands alongside existing California healthcare AI laws, notably AB 3030 and SB 1120, in an accelerating regulatory landscape that is already reshaping how healthcare providers deploy AI in California:
AB 3030 (effective January 2025) requires covered healthcare entities using generative AI for written or verbal patient communications about patient clinical information to include disclaimers and instructions for contacting a human healthcare provider when those communications haven't been reviewed by a licensed or certified human clinician. This demands systems that track which communications are AI-generated and which have received human review.
SB 1120 regulates the use of AI in utilization review and utilization management, requiring that algorithms base determinations on individualized clinical information, that they remain subject to human clinical oversight, and that medical necessity determinations under health plans be made by licensed physicians or qualified health professionals. For utilization review, the human-in-the-loop is now law.
AB 2575 (introduced in the 2025-2026 session and amended April 9, 2026) would, if enacted, require health facilities deploying clinical decision support tools to provide written notice about covered AI tools used in patient care including key information about the tool and notice that direct care workers may override AI recommendations when appropriate. As with any pending legislation, its final form and timeline remain subject to change.
Together with EO N-5-26, these laws are creating a governance landscape in which health systems increasingly need documented, auditable processes for bias testing, human oversight, transparency, and vendor accountability. The era of voluntary best practices is giving way to operational and legal necessity.
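As a concrete illustration, the AB 3030 tracking obligation described above can be modeled with a minimal record and a disclaimer gate. This is a sketch only; the field names and disclaimer wording are illustrative, not statutory language or any vendor's actual schema:

```python
from dataclasses import dataclass

# Illustrative placeholder text, NOT the statutory disclaimer wording.
AB3030_DISCLAIMER = (
    "This message was generated using artificial intelligence. "
    "Contact your clinic to speak with a human healthcare provider."
)

@dataclass
class PatientCommunication:
    """Minimal record for tracking AI-generated patient messages."""
    text: str
    ai_generated: bool      # produced by generative AI?
    human_reviewed: bool    # reviewed by a licensed/certified clinician?

def requires_disclaimer(msg: PatientCommunication) -> bool:
    """AB 3030 pattern: a disclaimer is needed when GenAI content about
    clinical information goes out without licensed human review."""
    return msg.ai_generated and not msg.human_reviewed

def prepare_for_send(msg: PatientCommunication) -> str:
    """Prepend the disclaimer when the gate requires it."""
    if requires_disclaimer(msg):
        return f"{AB3030_DISCLAIMER}\n\n{msg.text}"
    return msg.text
```

The point is the audit property: every outgoing message carries machine-readable flags for AI generation and human review, so compliance can be demonstrated after the fact rather than asserted.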
What "Governance-Ready" Actually Looks Like
Many health systems have AI governance in name only: a committee that meets quarterly, or policy documents that reference fairness principles in broad strokes. California's evolving regulatory landscape demands something more operational: a governance management system that produces auditable artifacts, maps to specific controls, and integrates across procurement, clinical operations, risk management, and compliance.
At Asher Informatics, we have built our Clinical AI Governance Framework, a Process Reference Model structured around 11 process domains, 39 base practices, and detailed standard operating procedures that produce traceable work products. Here is how that maps to what California is now demanding:
For bias governance (EO Pillar 2): Our Ethics and Fairness domain includes four dedicated practices covering algorithmic bias testing, demographic and protected class impact analysis, individual patient-level fairness evaluation, and bias mitigation and remediation. These are SOPs with defined roles, quality gates, and output artifacts including bias assessment reports and demographic impact analyses. Our Risk Management domain reinforces this with an integrated safety, efficacy, and bias assessment practice that produces documented evidence.
For civil rights and equity (EO Pillar 3): The framework addresses protected class analysis with explicit disparate impact calculation, data minimization protocols and consent management systems (in our IT Systems domain), and human oversight mechanisms with defined escalation pathways and override capabilities, directly supporting the SB 1120 and AB 2575 requirements. Our Oversight domain provides the governance committee structure and accountability reporting that ties it all together.
For transparency (supporting AB 3030 and AB 2575): Our Transparency domain covers explainability protocol development, decision audit trails, and stakeholder communication frameworks. A unified audit trail system, shared across monitoring, transparency, and usage governance domains, provides the infrastructure to track what was AI-generated, what was human-reviewed, and what decisions were made at each step.
For procurement and vendor governance: The framework's ISO 42001:2023 control mappings provide standards alignment that AI vendors will increasingly need to demonstrate. When your governance program speaks the same language as the certification framework your vendors are attesting to, due diligence becomes systematic rather than ad hoc.
You Need Tools, Not Just Policies
Knowing what governance looks like is one thing. Building and operationalizing it across a complex health system is another challenge entirely. Policies on paper need to become auditable evidence trails. Quarterly committee meetings need to evolve into continuous monitoring programs. Health systems need tooling that bridges the gap between governance intent and governance execution.
This is why we built the Asher Informatics AI Governance Studio, the solution in our AshMatics Suite that brings the Process Reference Model to life. The Studio is designed around two operational modes:
Mode 1, Building and Configuring, guides your organization through a structured, agent-assisted authoring process. It begins with Blueprint Creation, a multi-step wizard that captures your organizational context, such as regulatory targets, accreditation goals, maturity scoring, and governance references. From there, our studio's agentic wizards translate your blueprint into implementation workplans across five capability areas: Strategy and Leadership; Risk, Ethics, and Compliance; Data, Model, and System Lifecycle; Operations and Workflow; and Vendor and Ecosystem. The output is a complete, configured governance program with policy documents, process workflows, and dashboards, ready to operationalize.
Mode 2, Operationalizing, is where governance becomes a living system. Documentation activities generate domain-specific artifacts aligned to existing institutional management policies. Workflow execution activates process graphs (DAGs) that combine human tasks with automated actions, collecting evidence at each governance touchpoint. Ongoing governance provides program-level compliance dashboards, agent-assisted alert triage, report generation for leadership and regulators, and iterative refinement cycles.
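The process-graph idea behind Mode 2 can be illustrated with a toy sketch: steps run in dependency order, and every completed step appends an evidence record. The class and step names here are illustrative assumptions, not the Studio's actual API:

```python
from datetime import datetime, timezone

class GovernanceWorkflow:
    """Toy governance DAG: steps execute once their dependencies are
    done, and each completion is logged as an evidence record."""

    def __init__(self):
        self.steps = {}      # name -> (depends_on, action callable)
        self.evidence = []   # audit trail collected at each touchpoint

    def add_step(self, name, action, depends_on=()):
        self.steps[name] = (tuple(depends_on), action)

    def run(self):
        done = set()
        while len(done) < len(self.steps):
            progressed = False
            for name, (deps, action) in self.steps.items():
                if name in done or not all(d in done for d in deps):
                    continue
                result = action()  # human task or automated action
                self.evidence.append({
                    "step": name,
                    "result": result,
                    "at": datetime.now(timezone.utc).isoformat(),
                })
                done.add(name)
                progressed = True
            if not progressed:
                raise ValueError("cycle or unmet dependency in workflow")
        return self.evidence

# Hypothetical two-step workflow: bias test, then clinician sign-off.
wf = GovernanceWorkflow()
wf.add_step("bias_test", lambda: "bias report v1")
wf.add_step("clinician_signoff", lambda: "approved", depends_on=["bias_test"])
trail = wf.run()
```

The design choice worth noting is that evidence collection is a side effect of execution, not a separate documentation chore: running the workflow is what produces the audit trail.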
The framework scales with your organization's maturity and regulatory ambition. A Tier 1 Essential implementation can be stood up in days with 4 policies and 3 SOPs. A Tier 4 Regulatory implementation, appropriate for organizations navigating multiple state AI requirements, delivers 10 or more policies and 18 or more SOPs over a structured engagement.
Five Actions for Health System Leaders, Now
The certification framework recommendations will not be finalized until late July 2026. Waiting for the final language, however, is a strategic mistake. Here is what governance-forward health systems should be doing in the next 90 days:
Inventory your clinical AI deployments. Map every AI tool in clinical use, from medical imaging algorithms to ambient documentation to prior authorization tools. Document the responsible vendor, the clinical workflow it touches, and the patient populations it affects. You cannot govern what you have not cataloged.
Assess your vendor governance posture. For each AI vendor, determine whether they can currently demonstrate bias testing methodologies, civil rights safeguards, and content safety policies. Start asking vendors for their governance documentation now, before the certification framework gives them a template to fill in.
Establish or operationalize your AI governance committee. If your committee meets quarterly to review policies, that is not sufficient. Operational governance means regular review cycles tied to deployment decisions, performance monitoring, and incident response. Define roles, decision authority, and escalation protocols.
Audit your AB 3030 and SB 1120 compliance. These are already law. Ensure you can demonstrate that AI-generated patient communications carry appropriate disclaimers and that medical necessity determinations involve qualified human professionals.
Build your governance evidence trail. When the CA EO's certification framework lands, you will want documented evidence of bias assessments, equity analyses, human oversight protocols, and vendor due diligence. Start generating work products now. The time for planning is over.
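The inventory and vendor-posture steps above can start as something as simple as a structured record with an export for your governance committee. All field and vendor names below are illustrative assumptions, not a prescribed schema:

```python
import csv
import io
from dataclasses import dataclass, asdict

@dataclass
class AIDeployment:
    """One row in a clinical AI inventory (fields are illustrative)."""
    tool: str
    vendor: str
    clinical_workflow: str
    patient_populations: str
    has_bias_testing_docs: bool  # vendor governance posture, step 2

def missing_vendor_docs(deployments):
    """Flag tools whose vendors cannot yet show bias-testing docs."""
    return [d.tool for d in deployments if not d.has_bias_testing_docs]

def to_csv(deployments):
    """Export the inventory for committee review or audit."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(asdict(deployments[0])))
    writer.writeheader()
    for d in deployments:
        writer.writerow(asdict(d))
    return buf.getvalue()
```

Even a spreadsheet-grade catalog like this satisfies the core principle of the first action item: you cannot govern what you have not cataloged.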
The Window Is Open
California has a long history of setting standards that the rest of the country eventually adopts. For health systems that invest in operational AI governance now, this moment represents a genuine competitive advantage. Governance-ready organizations will move faster through AI procurement cycles, face less regulatory friction, attract AI vendors willing to partner on long-term clinical validation, and build institutional trust with patients and clinicians that makes AI adoption sustainable.
The organizations that act now will be the ones defining what responsible healthcare AI looks like for the next decade.
John F. Kalafut, PhD, is Co-Founder and Chief Technology and AI Officer of Asher Informatics, a healthcare AI governance company that helps health systems build Health AI Enterprise Management Systems to enable responsible AI adoption. The AI Governance Studio, part of the AshMatics Suite, operationalizes the Asher Informatics Process Reference Model, delivering structured processes, standard operating procedures, and auditable work products aligned with ISO 42001:2023 and emerging U.S. state regulatory requirements. Learn more at asherinformatics.com.