AI and Patient Handoffs: The Documentation Gap

Mar 15, 2026
By Chester Shermer


The patient handoff, or signout, is the most vulnerable moment in emergency medicine continuity of care. Every transition—shift change, ED-to-inpatient admission, ED-to-transfer—requires the receiving physician to build a clinical mental model from the sending physician's documentation and verbal summary. That process has always been imperfect. Now AI is generating and summarizing the documentation that handoffs depend on, and the failure modes are different from anything emergency medicine has previously managed.

This is not some abstract concern about technology. It is a patient safety analysis of what happens when the clinical document that the next physician reads was not written by the physician who saw the patient—and the receiving physician does not know that.

The Handoff Document as a Clinical Bridge

In emergency medicine, the handoff document serves a specific clinical function that differs from the encounter note. The encounter note is a medicolegal record of what happened. The handoff document is an operational communication tool—it tells the receiving physician what matters right now, what is pending, what to watch for, and what the clinical trajectory looks like. Those are different objectives, and they require different information architecture.

When a physician gives a verbal signout, or writes a handoff, they apply clinical judgment to determine what the receiving physician needs to know. They prioritize active problems over resolved ones. They highlight pending results that will change management. They flag the clinical concerns that are not yet in the chart—the subtle finding that does not have a lab value attached, the family dynamic that affects disposition, the clinical intuition that says something is not right despite a reassuring workup. That clinical filtering is not documentation. It is communication. And it is the part of the handoff that AI cannot replicate.

AI-generated handoff summaries pull from the structured data in the encounter note—vitals, lab results, imaging reports, medication administration records. They produce technically complete summaries that are often clinically incomplete. The pending troponin is listed, but the clinical context for why it matters—the atypical presentation, the equivocal ECG, the patient's expressed preference to leave—is absent or buried. The AI summary tells the receiving physician what happened. It does not tell them what to worry about. That distinction is where handoff failures originate.

AI Summarization and the Loss of Clinical Signal

AI summarization tools are designed to compress information. In most contexts, compression is valuable—it reduces cognitive load and accelerates decision-making. In the clinical handoff context, compression can eliminate exactly the information that matters most.

The clinical signal in a handoff is often in the uncertainty—the things the sending physician is not sure about, the differential diagnoses that have not been excluded, the clinical trajectory that could go in either direction. AI summarization tools are not designed to preserve uncertainty. They are designed to produce clear, organized, confident output. When the clinical picture is genuinely ambiguous, the AI summary will resolve that ambiguity in favor of the most probable interpretation—which may not be the interpretation the sending physician intended to communicate.

Consider a patient being handed off with an undifferentiated abdominal pain presentation. The sending physician suspects early appendicitis but the CT is read as normal. The physician's clinical concern persists because the patient's exam is evolving and the CT was obtained early. An AI-generated summary will document the normal CT and the current vital signs. It is unlikely to capture the sending physician's persistent clinical suspicion—the pattern recognition that says this patient needs serial exams and is not ready for discharge despite a reassuring workup. That clinical signal, lost in the AI summary, is the signal that prevents the missed appendicitis.

The receiving physician who reads the AI summary without that context will manage the patient differently than the sending physician intended. Not because the AI was wrong. Because the AI was incomplete in a way that changed the clinical decision-making downstream.

The Accountability Gap at Shift Change

The medicolegal framework for handoff failures in emergency medicine has historically centered on whether the sending physician communicated adequately and whether the receiving physician acted appropriately on the information provided. When AI generates the handoff document, a third party enters that accountability chain—and neither existing malpractice frameworks nor hospital policies have caught up.

If an AI-generated handoff summary omits a critical clinical concern that the sending physician verbalized but the AI did not capture, and the receiving physician misses the diagnosis because the written summary did not contain the information, the liability analysis becomes genuinely complex. Did the sending physician meet the standard of care by communicating verbally but relying on AI for the written summary? Did the receiving physician meet the standard of care by relying on a written summary they reasonably believed was authored by the sending physician? These questions do not have settled answers—and they will be answered in courtrooms before they are answered in policy documents.

The fundamental problem is that most receiving physicians currently have no reliable way to know whether the handoff document they are reading was written by a physician, generated by AI, or some combination of the two. That ambiguity creates a trust problem. Clinical handoffs depend on the receiving physician trusting that the document reflects the sending physician's clinical judgment. When AI mediates that communication without transparency, the trust is misplaced—not because the AI is unreliable, but because the receiving physician is reading the document as if it contains clinical judgment when it contains data summarization.

What You Should Be Doing Now

Know whether AI summarization tools are active in your department's handoff workflow. If your EHR generates automated patient summaries, shift reports, or handoff documents, understand what data sources those summaries draw from and what they exclude. The gap between what the AI includes and what you would include in a handoff is your patient safety exposure.

Establish a personal practice of supplementing AI-generated handoff documents with your own clinical commentary. If the AI summary captures the data but misses your clinical concern, annotate it. A single sentence—“Concerned for early appy despite negative CT, needs serial exams”—preserves the clinical signal that the AI summary eliminates. That sentence may be the most important documentation you produce during the encounter.

Advocate for transparency labeling on AI-generated handoff documents in your department. The receiving physician should know whether the document they are reading was physician-authored, AI-generated, or a hybrid. That labeling changes how the document is read—and it should.

Integrate handoff quality into your department's AI governance process. If your department is deploying AI documentation tools, the handoff workflow should be explicitly addressed in the governance framework. Handoff failures are a leading cause of adverse events in emergency medicine. Adding AI to that workflow without governance is adding risk to the highest-risk transition in the department.

Dr. Chet's Take:

I hand off patients every shift. When I leave in the morning after a night shift, the physician taking over my department is building their clinical picture from what I documented and what I said. That transition is where patients fall through cracks—and it has been that way for my entire career. AI documentation tools are now sitting in the middle of that transition, summarizing encounters they did not witness and compressing clinical nuance they cannot evaluate. In my programs—Telehealth, HEMS, Critical Care Transport—the handoff is not just a shift change. It is a transfer of command authority over a patient's care. When I hand off a critically ill patient to a transport team, I need to know that the document they receive contains my clinical assessment, not an algorithm's data summary. I treat AI-generated handoff documents the way I treat an intelligence briefing in the military: useful as a starting point, dangerous as a final product. The clinical judgment has to come from the physician. If the AI summary does not contain what you would tell the next doctor face to face, it is not a handoff—it is a data dump. And data dumps get patients hurt.

Dr. Chester “Chet” Shermer, MD, FACEP is a Professor of Emergency Medicine, Medical Director for Air Medical and Critical Care Transport programs, and a military medical commander with the Army National Guard. He is the founder of Global MedOps Command and the creator of AI in Emergency Medicine: Becoming AI Bulletproof.

AI Won't Wait. Neither Should You.

The handoff failures described in this post are happening now, in departments where AI documentation tools are active in the workflow without explicit governance for clinical transitions. Emergency physicians who understand how AI mediates their handoff communication will protect their patients—and themselves—more effectively than those who assume the AI summary is sufficient. Consider enrolling in my course: AI in Emergency Medicine: Becoming AI Bulletproof—a physician-built course covering AI documentation risk, clinical decision support accountability, and the command frameworks you need to practice confidently in an AI-integrated environment.

Learn more: AI in Emergency Medicine: Becoming AI Bulletproof