Most AI vendor contracts put all liability on physicians. Learn five critical clauses to renegotiate before signing, from a clinical leader who has negotiated them.
By Dr. Sarah Matt, MD, MBA | April 7, 2026
You have been a physician for years. You have signed contracts with employers, insurance panels, hospital credentialing boards, and equipment vendors. None of them looked like this.
When you walk into an AI vendor negotiation, the first thing you see is a 40-page services agreement full of language that has nothing to do with clinical safety and everything to do with liability assignment.
Most physicians do not realize they are negotiating two things at once: the clinical capabilities of the tool and the legal framework that determines who pays if something goes wrong. The vendor is crystal clear about the second one. Physicians usually miss it entirely.
Here is the dangerous part: by the time you realize what happened, the tool is already in your workflow and the contract is signed.
Clinical AI adoption is accelerating. Most health systems are running pilots or building deployment timelines. Most of the physicians involved in those efforts have never negotiated an AI vendor contract before.
They know how to review a pilot's clinical data. They do not know how to read the liability language that says the health system is responsible if the tool makes a bad recommendation.
This is not dishonesty on the vendor's part. It is standard tech contracting. But tech contracting was not built for environments where a software error can harm a patient.
The five clauses in this guide will not make you a lawyer. They will make you dangerous to a vendor's standard contract. Dangerous in the way that matters: you will know what to ask for, and you will know how to negotiate for terms that protect the health system instead of just the vendor.
Clause 1: Indemnification
What it says:
The indemnification clause tells you who is liable if the AI tool causes harm. Most vendor contracts say the vendor is not liable for the clinical decisions you make based on the tool's output. Specifically, they exclude liability for how you implemented the tool, how your clinicians used it, and whether your team validated the output before acting on it. That is all on you.
Why it matters:
In clinical practice, the physician and the health system are the fiduciaries responsible for patient safety. But the vendor's contract disclaims responsibility for the most critical question: did the tool perform as expected in your clinical workflow? A good indemnification clause protects you against vendor errors (the tool was buggy, the vendor did not disclose a known safety issue). A bad one shifts all responsibility to you for how you used the tool.
What to ask for:
Request that the vendor accept liability for: bugs in the tool that cause incorrect clinical output, failures to disclose known safety issues, and performance degradation the vendor did not warn you about. Make sure the contract says the vendor is responsible for the accuracy of their training data. Ask for an escrow arrangement: if the vendor goes out of business, you get your money back and access to the tool's source code.
Clause 2: Warranty
What it says:
The warranty clause is where the vendor promises the tool works as described. Most AI vendor contracts have extremely weak warranties. They often say something like: "The tool is provided as-is. We do not warrant that it will perform perfectly. We do not warrant that it will work for your specific use case." That is not a warranty; it is a disclaimer, a "non-warranty."
Why it matters:
You would never accept "as-is" from a cardiac monitor manufacturer. You would never say "we will integrate this into our OR workflow but the vendor is not promising it will work." But AI vendor contracts often contain exactly that language. The reason is that most vendors believe AI is unpredictable and they should not be held to performance standards. That is a vendor problem, not your problem. If the vendor cannot stand behind their tool's performance in a clinical setting, you should be very cautious about integrating it.
What to ask for:
Request specific performance metrics in the warranty. Ask for minimum accuracy guarantees. Request that the vendor warrants the tool will perform in your specific workflow before it goes live. Ask for a trial period where you can verify performance before the tool goes into production. Make the warranty say: "The tool will perform within these exact metrics in your specific clinical context," not "the tool probably works."
Clause 3: Limitation of Liability
What it says:
This clause limits how much the vendor will pay if something goes wrong. Most vendor contracts say something like: "The vendor's total liability will not exceed the amount paid in the first 12 months of the contract." If you paid $50,000, the vendor is liable for at most $50,000, even if the tool causes $10 million in harm.
Why it matters:
If an AI tool gives bad advice and a patient is harmed, the damages are medical malpractice damages. Those are not $50,000. They are often in the millions. A vendor contract that caps liability at $50,000 is asking you to cover the rest.
What to ask for:
Request that the liability cap be much higher, or that it not apply to patient harm and malpractice claims. At minimum, ask for the cap to increase as the contract value increases. Ask whether the vendor carries professional liability insurance and what the policy limits are. Make sure the contract states that the limitation of liability does not apply to indemnification for vendor errors or to patient safety breaches.
Clause 4: Termination and Data Rights
What it says:
The termination clause tells you what happens if you want to stop using the tool. Can you leave if you are unhappy? What happens to your patient data? Most vendor contracts have extremely long termination windows (90+ days' notice, plus 30 additional days of transition). And most say the vendor owns your data or has a perpetual license to use it.
Why it matters:
Clinical workflows are built on continuity. If you integrate an AI tool into your workflow and the vendor then disappears or gets acquired, or the tool fails, you need to be able to exit quickly. A 90-day termination window means 90 days of operational disruption. And if the vendor owns your data, they own insights about your patient population, your clinical decisions, and your workflow patterns. That is not just a business problem. It is a clinical privacy problem.
What to ask for:
Request 30 days or less for termination. Ask for the right to take your data with you in a standard format (CSV, HL7, or a direct export to your EHR). Request that the vendor delete your data after termination rather than store it indefinitely. Ask for confirmation that the vendor will not use your clinical data to train other models without explicit consent. Request a transition period during which you and the vendor work together to exit the tool safely.
Clause 5: Updates and Discontinuation
What it says:
This clause allows the vendor to update the tool, change its functionality, or discontinue it. Most vendor contracts say the vendor can make updates without notifying you or getting your approval. Some say they can discontinue the tool with 90 days' notice.
Why it matters:
If you build a clinical workflow around an AI tool and the vendor pushes an update that changes how it works, your workflow breaks. If the vendor discontinues the tool, you have to find an alternative and re-integrate it into your EHR and your clinical processes. That is not a minor inconvenience. That is a clinical disruption.
What to ask for:
Request that major updates (changes to algorithms, output format, or clinical performance) require your approval, or at least 30 days' advance notice so you can validate the changes clinically. Request that the vendor commit not to discontinue the tool for at least two years (or longer, depending on how critical it is to your workflow). Ask for the right to keep using an older version of the tool if a new version does not meet your clinical requirements. Request a clear service level agreement: if the tool is down for more than 4 hours, you do not pay.
These five clauses are not random. They flow from one principle: the vendor should be accountable for the clinical performance of their tool.
That is not how tech contracts are usually written. Tech contracts usually assume the vendor builds the product and the customer figures out how to use it safely. But in clinical practice, the vendor and the health system share responsibility for patient safety. The contract should reflect that.
The vendors who move fastest with health systems are the ones who get this. They are willing to negotiate these clauses because they are confident in their tool's performance. They are willing to stand behind it.
The vendors who refuse to negotiate, or who will not budge on liability caps and warranty language, are the ones who are not confident. That is a signal.
You do not need a lawyer to walk through these five clauses. You need someone on your team (a physician, a clinical leader, or your procurement team) who understands the clinical context and can ask the right questions.
Here is a simple process: walk through these five clauses with the vendor one at a time, and ask for the terms described in each "What to ask for" section above.
The negotiation usually takes two to three rounds. Most vendors are willing to move on the language once they understand you are serious and informed. The ones who are not: that tells you something.
The real question you are asking before you sign an AI vendor contract is not: "Will this tool help my workflow?"
The real question is: "If this tool fails, who is responsible?"
If the answer is "you are," and the vendor will not negotiate that, then you do not yet have a contract safe enough to sign. The five clauses in this guide give you the language and the authority to ask for better terms. Use them.
Download the free guide: 5 Contract Clauses Every Physician Should Demand Before Signing a Clinical AI Agreement
Ready to dig deeper? Schedule a 20-minute advisory call on clinical AI evaluation and contracting
Dr. Sarah Matt is a physician-leader and advisor on clinical AI implementation. She works with health systems on AI evaluation, vendor selection, and governance frameworks.