AI in Emergency Medicine: Your Triage Is Already Obsolete
It is 0300 on a Tuesday. Your department is holding 14 admitted patients. Twenty-two names are on the waiting room board. Your triage nurse flags a 58-year-old male with chest pain as ESI-3. Forty minutes later, the troponin comes back critically elevated.
Nobody made a mistake. The nurse did her job. The ESI system did what it was designed to do. The problem is that the constellation of risk sitting in that patient's EHR — the prior visits, the medication list, the subtle vital sign trend — never surfaced at the triage desk when it could have changed the decision. The system failed to synthesize what it already knew.
That scenario plays out in emergency departments across this country every single shift. And it is exactly the problem AI-augmented triage is built to solve.
The ESI Is Hitting Its Ceiling
The Emergency Severity Index was designed for a different era of emergency medicine. It works reasonably well when a trained nurse has time, bandwidth, and situational awareness. We do not practice in those conditions. We practice in departments running at 120% capacity, with nurses covering ratios they should never be asked to manage, and a waiting room that does not pause while they think.
The ESI is a snapshot. It captures what the patient tells you and what you can see in two minutes. It does not know that this patient's last three visits were chest pain complaints. It does not know his home medications include metformin and a beta blocker he probably stopped taking. It does not know his troponin trajectory from six months ago.
AI-augmented triage tools do know those things — because they pull them in real time.
A 2024 study in Annals of Emergency Medicine demonstrated that machine learning clinical decision support models outperformed ESI in predicting 24-hour admission, ICU transfer, and critical intervention needs, with an AUC of 0.87 versus 0.74 for ESI alone. That gap is not marginal. That is the difference between flagging the septic patient in your waiting room and missing him until he deteriorates.
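The AUC gap is easy to build intuition for: AUC is the probability that a randomly chosen patient who deteriorates gets a higher risk score than a randomly chosen patient who does not. A minimal sketch, using invented toy scores rather than the study's data:

```python
def auc(outcomes, scores):
    """AUC = probability a random positive case outranks a random negative one.
    outcomes: 1 if the patient needed admission/ICU/critical intervention."""
    pos = [s for o, s in zip(outcomes, scores) if o == 1]
    neg = [s for o, s in zip(outcomes, scores) if o == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

outcomes  = [1, 1, 1, 0, 0, 0]               # toy 24-hour outcomes
ml_scores = [0.9, 0.8, 0.6, 0.7, 0.3, 0.2]   # continuous model output (invented)
esi_like  = [3, 3, 2, 3, 2, 1]               # coarse 5-level score, inverted so higher = sicker

print(f"ML model AUC:  {auc(outcomes, ml_scores):.2f}")
print(f"ESI-style AUC: {auc(outcomes, esi_like):.2f}")
```

The point of the sketch is the tie handling: a five-level scale produces many tied scores, which caps its discrimination no matter how well it is applied at the desk.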
These tools are not experimental. They are live, integrated systems deployed in health systems from HCA to Providence, pulling EHR data in real time and generating dynamic acuity scores that update as new information flows in. If you are still treating this as a future-state conversation, you are behind.
What AI Triage Actually Does at the Bedside
The framing that irritates me most is the one that positions AI as a replacement for clinical judgment. It is not. A well-designed AI triage tool is a co-pilot for your triage nurse — one that has already read the chart before the patient sits down.
In my own department, we piloted an AI-augmented triage overlay eighteen months ago. The system pulls chief complaint, vitals, medication history, prior visit patterns, and lab trends to generate a composite acuity score. It does not make the disposition decision. It makes sure the relevant data is visible before that decision is made.
In the first six months, the average door-to-provider time for high-acuity patients dropped by eleven minutes. In STEMI and stroke care, eleven minutes changes outcomes. That is not a vanity metric.
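The composite-score concept is simple to sketch. Nothing below reflects the actual pilot system; every feature, weight, and threshold is invented to show how chart history, not just bedside vitals, can move the number:

```python
# A deliberately simplified composite acuity score. All features, weights,
# and thresholds are hypothetical illustrations, not the pilot system's logic.
from dataclasses import dataclass

@dataclass
class TriageSnapshot:
    heart_rate: int
    systolic_bp: int
    repeat_visit_same_complaint: bool  # e.g., three prior chest-pain visits
    high_risk_medication: bool         # e.g., beta blocker blunting tachycardia
    abnormal_lab_trend: bool           # e.g., prior troponin trajectory

def composite_acuity(s: TriageSnapshot) -> int:
    """Return 0-100; higher means sicker. Weights are invented."""
    points = 0
    if s.heart_rate > 110 or s.heart_rate < 50:
        points += 25
    if s.systolic_bp < 100:
        points += 25
    if s.repeat_visit_same_complaint:
        points += 20
    if s.high_risk_medication:
        points += 10
    if s.abnormal_lab_trend:
        points += 20
    return min(points, 100)

# Normal vitals at the desk, but the chart history alone drives the score up.
patient = TriageSnapshot(88, 132, True, True, True)
print(composite_acuity(patient))  # → 50
```

This is the whole argument in eight lines: the 58-year-old from the opening vignette scores zero on vitals and fifty on history the nurse never had time to pull.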
That said, the tool is only as good as the data it was trained on and the workflow it is integrated into. Which brings me to how you evaluate these tools before they touch your department.
How to Evaluate AI Tools Without Getting Burned
Every vendor in healthcare IT is labeling their product "AI-powered" right now. Some of it is genuinely useful. A lot of it is marketing. Your job as a medical director is to tell the difference before you sign a contract.
Start with the training data. Ask the vendor directly: what patient population did this model learn from? What was the acuity distribution? What EHR system? A model trained exclusively on a suburban community ED will underperform in a Level I trauma center, and vice versa. If the vendor cannot answer those questions specifically, that is your answer.
Demand transparency on the output. Binary recommendations — "admit" or "discharge" — are red flags. A well-built clinical AI gives you a risk score, a confidence interval, and the key features driving the prediction. If you cannot see why the model is flagging a patient, you cannot meaningfully evaluate or override it. You become a rubber stamp, and that is a liability.
Ask about failure modes. Every AI system will be wrong some percentage of the time. The critical question is whether it fails safe. A triage AI that occasionally over-triages is manageable. One that systematically under-triages a specific demographic — by age, race, sex, or payer status — is a patient safety catastrophe and a lawsuit. Ask for disaggregated performance data. If they do not have it, they have not done the work.
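The disaggregated check you should demand is not complicated. A sketch with invented data and subgroup labels, computing the under-triage rate (truly high-acuity patients the model failed to flag) per subgroup:

```python
# Sketch of the disaggregated analysis to demand from vendors: under-triage
# rate broken out by subgroup. Cases and group labels are invented.
from collections import defaultdict

# (subgroup, ai_flagged_high_acuity, truly_high_acuity)
cases = [
    ("age<65",  True,  True), ("age<65",  True,  True), ("age<65",  False, False),
    ("age>=65", False, True), ("age>=65", False, True), ("age>=65", True,  True),
]

missed = defaultdict(int)
total = defaultdict(int)
for group, flagged, truly_sick in cases:
    if truly_sick:
        total[group] += 1
        if not flagged:
            missed[group] += 1  # under-triage: sick but not flagged

rates = {g: missed[g] / total[g] for g in total}
for group, rate in sorted(rates.items()):
    print(f"{group}: under-triage rate {rate:.0%}")
```

An aggregate sensitivity number can look excellent while one subgroup carries nearly all the misses. That breakdown is exactly what the vendor should be able to hand you.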
Finally, insist on a parallel-run period before any autonomous decision-making goes live. Ninety days, minimum. Run the AI alongside your current process. Compare outputs. Identify every case where the algorithm and your clinicians disagree, and adjudicate those cases manually. That is where the real learning happens — for you and for the system.
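The parallel-run comparison itself is a short script. A sketch, with invented field names and an assumed one-level divergence threshold, that pulls every case where the AI's acuity level and the clinician's ESI diverge enough to warrant manual adjudication:

```python
# Sketch of the parallel-run comparison: flag every case where the AI's
# acuity level and the clinician's ESI diverge, for manual adjudication.
# Field names and the divergence threshold are assumptions.

def disagreements(cases, threshold=1):
    """Return cases where AI and clinician acuity differ by more than
    `threshold` levels (1 = sickest, 5 = least sick, matching ESI)."""
    return [c for c in cases
            if abs(c["ai_level"] - c["clinician_esi"]) > threshold]

parallel_run = [
    {"mrn": "A001", "ai_level": 1, "clinician_esi": 3},  # AI reads sicker
    {"mrn": "A002", "ai_level": 3, "clinician_esi": 3},  # agreement
    {"mrn": "A003", "ai_level": 4, "clinician_esi": 2},  # AI reads less sick
]

for case in disagreements(parallel_run):
    print(case["mrn"], "-> adjudicate manually")
```

Note that both directions of disagreement get reviewed. Cases where the AI reads less sick than your clinicians are where the under-triage risk hides.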
The Risk Nobody Wants to Talk About
Here is the clinical reality that does not get enough attention. The most significant risk of AI in emergency medicine is not that the technology fails. It is that it works well enough to erode your own clinical skills through disuse.
The pattern recognition I have built over 25 years — the ability to look at a patient from the doorway and know something is wrong before a single vital sign prints — that skill was forged by repetition, by error, and by thousands of hours at the bedside. AI cannot give you that. And if AI starts doing the cognitive heavy lifting early enough in a physician's training, the next generation may never develop it.
We have a direct analogy in aviation. When autopilot systems became sophisticated enough to handle most flight conditions, pilot hand-flying skills measurably declined. The FAA responded with formal guidance urging operators to have pilots hand-fly more often. Emergency medicine needs to have this conversation now, before it becomes a crisis.
My recommendation is what I would call the AI audit habit. Once per shift, identify a patient where the AI has generated a recommendation. Before you look at the output, write down your own assessment — chief complaint, differential, risk stratification, predicted disposition. Then compare. Where do you agree? Where do you diverge? When you diverge, who was right?
This takes three minutes. Over the course of a year, it keeps your clinical reasoning sharp while building your operational understanding of where AI adds value and where it falls short. It is the single most important practice habit I can offer any physician working in an AI-augmented environment.
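If it helps to make the habit concrete, the audit record needs only a handful of fields. A minimal sketch, with every field name invented: your read goes in first, then the algorithm's, then the comparison:

```python
# Minimal AI audit log entry: your assessment is recorded before you view
# the algorithm's output, then compared. All field names are illustrative.
import json
from datetime import datetime, timezone

entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(timespec="minutes"),
    "my_read": {"risk": "high", "disposition": "admit"},
    "ai_read": {"risk": "moderate", "disposition": "observe"},
}
entry["divergent"] = entry["my_read"] != entry["ai_read"]

print(json.dumps(entry, indent=2))
```

A month of these entries, filtered to the divergent ones, is a personal map of where you and the algorithm see patients differently.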
Three Actions for This Week
1. Audit your current triage process. Identify the data points that are being lost between the front door and the provider assessment. Those gaps are exactly where AI will integrate — you should know them before a vendor does.
2. Start the AI audit habit now. One patient per shift, your assessment before the algorithm's, documented and compared.
3. Demand accountability from any vendor pitching AI tools to your department. Training data source, failure mode analysis, and disaggregated performance metrics are non-negotiable. If they cannot produce them, keep shopping.
The physicians and administrators who understand this technology — how it works, where it fails, and how to integrate it without becoming dependent on it — will be the ones leading departments in 2030.
Dr. Chet's Take:
I want to be direct about something the literature does not fully capture. When I look at a 58-year-old male at 0300 with chest pain, I am not running a conscious algorithm. I am drawing on pattern recognition that was built case by case, shift by shift, over two and a half decades. The look of someone who is sick. The subtle diaphoresis. The way a patient describes pressure differently than pleuritic pain. AI can synthesize the EHR. It cannot yet replicate the gestalt of a clinician who has seen ten thousand undifferentiated chest pain presentations.
That being said — and I mean this — I have seen what happens when a good AI overlay surfaces information that was sitting in the chart and nobody had time to pull. The 11-minute improvement in door-to-provider time I cited above is real, and it came from a relatively straightforward system augmenting a competent triage nurse, not replacing her. The tool does not make her smarter. It makes sure she is not flying blind. In a department holding 14 admitted patients at 0300, that matters more than I would have predicted before we ran the pilot.
For administrators reading this: the question is not whether to adopt AI-augmented triage tools. The question is whether your medical director is at the table when those decisions get made, or whether this is being driven by IT and a vendor relationship. The clinical oversight piece is not optional. These tools interact directly with patient safety, and the physicians managing your department need to own the evaluation process. If that governance structure is not in place, build it before you sign anything.
Dr. Chester "Chet" Shermer, MD, is an emergency physician, HEMS medical director, and founder of Global MedOps Command. His course, AI in Emergency Medicine: Becoming AI Bulletproof, is available at globalmedopscommand.com.
Get his free AI in EM Survival Guide today:

Medical Disclaimer
This content is intended for licensed medical professionals, EMS personnel, and trained emergency responders. It does not constitute personalized medical advice. Clinical protocols and AI evaluation frameworks referenced are for educational purposes and should be adapted to your jurisdiction's scope of practice and applicable medical direction. For patient care, always follow your agency's protocols and consult medical direction as required.
Continue Your Training
Structured Courses:
Everything covered in this guide is built into Dr. Shermer's clinical training programs — scenario-based, protocol-driven, and designed for emergency physicians who need to work confidently in AI-integrated environments.
→ Browse All Courses at Global MedOps Command
Clinical Reference eBooks:
How to Avoid Becoming an AI Casualty
Navigate AI tools in clinical and operational settings without compromising judgment or patient outcomes. Written for emergency physicians, by an emergency physician.
The Emergency Medicine Observation Unit
Evidence-based framework for observation unit operations, patient flow optimization, and clinical decision protocols.
Emergency Department Efficiency Playbook
Practical systems for throughput, triage optimization, and operational efficiency from 25+ years in high-volume emergency departments.
Connect with Dr. Shermer on LinkedIn:
Chester "Chet" Shermer, MD, FACEP