When a clinical AI tool gives a wrong recommendation, who is accountable? A framework for closing the governance gap before it costs a patient.
By Dr. Sarah Matt, MD, MBA | March 10, 2026
In the spring of 2023, a large academic medical center deployed a sepsis prediction algorithm widely considered best-in-class. It had been validated. It was FDA cleared. Its sensitivity was 80 percent, which for sepsis prediction is genuinely impressive.
Within six months, nursing staff were ignoring 40 percent of its alerts.
Not because the AI was wrong. Because no one had told them what to do when the AI was right.
There was no protocol defining what a positive alert required. There was no clarity on who had authority to escalate, and under what timeline. There was no feedback mechanism so the team could report what happened when they did act. The AI generated a signal. The governance structure generated silence. And in clinical environments, silence kills momentum faster than bad data.
This is the accountability gap. And it is the most underdiscussed problem in healthcare AI right now.
When health systems build AI governance policies, they tend to solve for two things: regulatory compliance and institutional liability. The question driving most governance documents is: "If something goes wrong, can we demonstrate we followed a defensible process?"
That is not an irrelevant question. But it is the wrong question to lead with.
The question that matters clinically is different: When this tool gives a wrong recommendation, who knows it, who is accountable for the outcome, and does the clinician using it know that accountability structure before they make a decision?
Most AI governance frameworks cannot answer that question in plain language. I have reviewed governance charters from health systems across four states in the past 18 months. The language is thorough on vendor selection criteria, data use agreements, and pilot approval processes. It is thin on accountability at the point of care.
The gap is structural, not malicious. Governance is typically designed by committees: IT leadership, legal, compliance, sometimes a Chief Medical Officer who reviews documents but is not seeing patients. What is missing is the practicing clinician who is actually using the tool at 2 AM, under cognitive load, on the eighth patient of a shift, and needs to know in five seconds what happens if she overrides the recommendation.
In my consulting work with health systems evaluating or scaling AI deployments, three patterns repeat.
Failure 1: Accountability assigned to the institution, not to a person.
Governance documents routinely say things like "the health system assumes clinical responsibility for AI-assisted decisions." This is legally defensible and operationally meaningless. An institution cannot be accountable at the bedside. A physician, nurse, or clinical pharmacist can be. Every AI tool deployed in a clinical workflow needs a named individual who owns clinical accountability for its use in that specific context. Not a title. Not a committee. A person.
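What does that look like in practice? Below is a minimal sketch of an accountability registry: every deployed tool, in every clinical context, resolves to one named person who has confirmed they own it. Every tool name, person, field, and value here is illustrative, not a standard schema or a real deployment.

```python
# Minimal sketch: one named accountable person per tool, per clinical context.
# All names and fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class ClinicalAccountability:
    tool_name: str           # e.g., "sepsis-predictor-v2"
    care_context: str        # e.g., "adult med-surg inpatient"
    accountable_person: str  # a named individual, not a title or a committee
    role: str
    acknowledged_on: str     # date the person confirmed they know they own this

REGISTRY = [
    ClinicalAccountability(
        tool_name="sepsis-predictor-v2",
        care_context="adult med-surg inpatient",
        accountable_person="Dr. Jane Rivera",      # hypothetical
        role="Medical Director, Hospital Medicine",
        acknowledged_on="2026-01-15",
    ),
]

def accountable_for(tool: str, context: str) -> ClinicalAccountability:
    """Resolve accountability in one lookup; fail loudly if no one owns the tool."""
    for entry in REGISTRY:
        if entry.tool_name == tool and entry.care_context == context:
            return entry
    raise LookupError(f"No named accountable person for {tool} in {context}")
```

The design point is the `acknowledged_on` field: accountability that the person has not confirmed is not accountability, it is a row in a spreadsheet.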
Failure 2: Override authority exists but is not exercised.
Most AI tools have override functions. Most governance frameworks acknowledge that clinicians can and should override when clinical judgment conflicts with an AI recommendation. What few frameworks address is what happens after an override. Is the override logged? Does it trigger a review? Does the data go back to anyone who can use it to improve the model? In most health systems I have worked with, overrides disappear into the EHR with no feedback loop. The clinician who overrode correctly has no mechanism to improve the system. The AI keeps making the same error. This is not an AI problem. It is a governance infrastructure problem.
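Closing that loop is not exotic engineering. Here is a minimal sketch of an override record that gets routed somewhere instead of vanishing; the field names and the notification hook are assumptions for illustration, not any EHR vendor's API.

```python
# Minimal sketch: an override that is logged AND routed, not just stored.
# Field names and the notify step are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OverrideEvent:
    tool_name: str
    patient_encounter_id: str
    ai_recommendation: str
    clinician_action: str
    clinician_id: str
    rationale: str                        # free-text reason, captured at the bedside
    logged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    reviewed_by_governance: bool = False  # flipped only when a human reviews it

OVERRIDE_LOG: list[OverrideEvent] = []

def record_override(event: OverrideEvent) -> None:
    """Log the override and route it to whoever owns model governance."""
    OVERRIDE_LOG.append(event)
    notify_model_owner(event)

def notify_model_owner(event: OverrideEvent) -> None:
    # Placeholder for the routing step most systems are missing entirely:
    # an email, a queue, a dashboard tile in front of the model owner.
    print(f"[governance] override on {event.tool_name}: {event.rationale}")
```

The log entry alone is where most systems stop. The `notify_model_owner` step is the governance infrastructure.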
Failure 3: Governance is built for launch, not for the lifecycle.
A clinical AI tool deployed in 2023 is not the same tool in 2025. Models get updated. Patient populations shift. The EHR integration changes. Clinical workflows evolve. Governance frameworks that were approved at launch and never revisited are governing a system that no longer exists. I have seen organizations running clinical AI for 18 months whose governance committee has not met since the initial go-live. The tool was updated twice during that period. No clinician reviewed either update for clinical appropriateness. This is governance theater. It signals compliance and delivers none.
Before a health system deploys any clinical AI tool, or renews a contract for one already in use, I recommend working through four questions with its clinical leadership.
Question 1: Can you name, by role and by person, who is accountable when this tool is wrong?
Not "the clinical team." Not "the ordering physician." The specific individual whose accountability is triggered when this specific tool generates a recommendation that harms a patient. If that person does not know they are accountable, the governance structure is incomplete.
Question 2: What is the escalation pathway when the AI and the clinician disagree?
This should be documented in two sentences or fewer. If it takes a paragraph to explain the override process, it will not be used under pressure. Clinicians making decisions under time and cognitive constraints will default to the path of least resistance. Governance must make the right path the easy path.
Question 3: Does your feedback loop close?
When a clinician overrides an AI recommendation and the patient outcome supports the override, does that information reach the people responsible for model governance? Does it go anywhere at all? A feedback loop that does not close does not exist.
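If you want to test whether yours closes, the check is one function. Continuing the override-log sketch above (same illustrative structures): a closed loop is one you can audit.

```python
# Continuing the override-log sketch above: flag every override that never
# reached a human reviewer. The 30-day window is an illustrative assumption.
from datetime import datetime, timedelta, timezone

def open_loops(log: list[OverrideEvent], max_age_days: int = 30) -> list[OverrideEvent]:
    """Return overrides older than max_age_days that no one has reviewed."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    return [e for e in log if not e.reviewed_by_governance and e.logged_at < cutoff]

stale = open_loops(OVERRIDE_LOG)
if stale:
    print(f"{len(stale)} overrides never reached model governance")
```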
Question 4: Who reviewed the last model update?
If the answer is "the vendor certified it," that is a starting point, not an endpoint. Vendor validation confirms performance on vendor data. It does not confirm performance in your clinical context. Every material update to a deployed clinical AI tool should trigger a review by a practicing clinician with authority to halt deployment if clinical appropriateness is in question.
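The gate can be as simple as this sketch: an update does not reach production until a named clinician with halt authority has reviewed it in your context. The versioning scheme and field names are assumptions for illustration.

```python
# Minimal sketch of an update gate: vendor validation is necessary, local
# clinical review is the gate. Fields and versioning are illustrative.
from dataclasses import dataclass

@dataclass
class ModelUpdate:
    tool_name: str
    version: str
    vendor_validated: bool          # necessary, but not sufficient
    clinician_reviewer: str | None  # named person with authority to halt
    approved_for_deployment: bool = False

def gate_update(update: ModelUpdate) -> ModelUpdate:
    """Block deployment until both vendor validation and local review exist."""
    if not update.vendor_validated:
        raise ValueError(f"{update.tool_name} {update.version}: no vendor validation")
    if update.clinician_reviewer is None:
        raise ValueError(
            f"{update.tool_name} {update.version}: blocked until a practicing "
            "clinician reviews it in this system's clinical context"
        )
    update.approved_for_deployment = True
    return update
```

Run it against every vendor release note. The raised error is the point: deployment halts by default, and a person has to say yes.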
The regulatory landscape is accelerating. California's SB 351 established that private equity-backed management organizations cannot interfere with physician clinical judgment. Fourteen other states are watching. The AMA's creation of 26 new CPT codes for AI-assisted services signals that payers are beginning to formalize how AI participation in clinical decisions gets documented and reimbursed. The ONC's HTI-1 rule is expanding AI transparency requirements across EHR vendors.
Governance is no longer optional or aspirational. It is becoming the legal baseline.
Health systems that have built governance for optics will spend the next 18 months rebuilding it for operations. Health systems that built it right the first time will spend that same period scaling tools that their competitors are still debating.
The accountability gap is closeable. It is not primarily a technology problem or a budget problem. It is a structural design problem, and it is fixable with the right questions, the right authority assignments, and the discipline to close feedback loops that most organizations leave open.
When a health system client asks me where to start on AI governance, I give them one instruction before we discuss frameworks, vendors, or policy language: find the person who is accountable when the AI is wrong, and ask them if they know that.
In about two-thirds of cases, they do not.
That is your starting point. Not the policy document. Not the committee charter. The conversation with the clinician who is using the tool right now, who has been told the AI is validated and FDA cleared, and who has never been told that clinical accountability for its recommendations sits with them.
Fix that conversation first. The governance framework will be more useful once the people it governs know it exists.
Dr. Sarah Matt is the author of The Borderless Healthcare Revolution (Wiley, 2025) and founder of Vital Werks, a healthcare strategy and technology consulting firm. She advises health systems, digital health companies, and PE-backed platforms on clinical AI strategy, governance design, and care delivery transformation.
Subscribe to The Sarah Matt Briefing for exclusive frameworks and consulting insights I do not publish anywhere else: drsarahmatt.com/newsletter-signup