AI Literacy in Emergency Medicine: What You Need to Know
Medically reviewed and authored by Chester Shermer, MD, FACEP | Updated April 2026 | 18 min read
Emergency physicians using AI tools without understanding how they fail are accepting liability they may not realize they carry. This guide gives you a practical foundation in AI literacy for the ED: how algorithms learn, where they hallucinate, how bias enters triage decisions, and what you must know before signing off on AI-generated clinical notes.
What Is AI Literacy in Emergency Medicine?
AI literacy in emergency medicine is a physician's working knowledge of how artificial intelligence systems function, where they characteristically fail, and how to apply that understanding to protect patients and maintain clinical accountability in the emergency department. It is not a technology skill — it is a clinical competency.
The distinction matters because the ED is a high-stakes, time-pressured environment where a miscalibrated triage algorithm, a fabricated clinical note, or a biased sepsis prediction tool can harm patients before anyone recognizes the error. A 2025 national survey of ACEP members found that 38% of emergency physicians already identified potential biases as a concern with AI tools — yet fewer than half reported having access to educational support or guidelines on safe AI use.[¹]
At a Glance:
-- What it is: The clinical competency to understand AI capabilities, failure modes, and accountability in emergency care
-- Who it affects: All emergency physicians, PAs, NPs, and department leaders who interact with AI-integrated clinical workflows
-- Clinical significance: AI errors in the ED can cause delayed or incorrect triage, diagnostic misses, documentation inaccuracies, and medicolegal exposure
-- Key failure modes: Hallucination, algorithmic bias, black-box opacity, automation bias
-- Governing standards: ACEP AI Committee guidelines, March 2026 All-EM AI Consensus Statement, FDA AI-Enabled Device regulatory framework
How AI Works in Clinical Settings
You do not need to be a data scientist to use AI tools responsibly. You do need to understand the basics of how these systems acquire their knowledge and why that process creates predictable failure patterns.
Training Data: Where the Algorithm's World Begins
Every AI system in clinical use — whether it's flagging abnormal chest X-rays, predicting sepsis onset, or generating your discharge summary — was built by feeding it a large dataset of past examples. The algorithm learned by finding statistical patterns in that data. A sepsis prediction model trained primarily on academic medical center data from a single health system will learn the patterns of that population: its demographics, its documentation habits, its lab-ordering practices. Deployed in your rural or safety-net ED, it is now extrapolating far outside its experience.
This is the core problem of training data bias: the model is not wrong — it is precisely correct about the population it learned from. It is applied incorrectly when that population doesn't match yours.
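For readers who want to see the mechanics, the sketch below simulates this in a few lines of Python (scikit-learn). The cohorts, feature names, and effect sizes are entirely synthetic and illustrative, not clinical: a "sepsis risk" model is fit at one site and then scored at a second site with a different case mix and a different relationship between labs and outcome.

```python
# Minimal sketch (synthetic data): a risk model trained at one site, scored at another.
# Feature names, coefficients, and cohorts are illustrative only, not clinical values.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_cohort(n, lactate_mean, age_mean, b_lactate, intercept):
    """Simulate one ED population; outcome risk depends on lactate and age."""
    lactate = rng.normal(lactate_mean, 1.0, n)
    age = rng.normal(age_mean, 15.0, n)
    logit = intercept + b_lactate * lactate + 0.04 * age
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
    return np.column_stack([lactate, age]), y

# "Training" site: academic center, sicker and older patients, lactate strongly predictive
X_a, y_a = make_cohort(5000, lactate_mean=2.5, age_mean=62, b_lactate=1.2, intercept=-6.0)
# "Deployment" site: different case mix, and lactate is a weaker signal here
X_b, y_b = make_cohort(5000, lactate_mean=1.6, age_mean=48, b_lactate=0.3, intercept=-3.5)

model = LogisticRegression().fit(X_a[:4000], y_a[:4000])

auc_home = roc_auc_score(y_a[4000:], model.predict_proba(X_a[4000:])[:, 1])
auc_away = roc_auc_score(y_b, model.predict_proba(X_b)[:, 1])
print(f"AUC at the training site:   {auc_home:.2f}")
print(f"AUC at the deployment site: {auc_away:.2f}")  # noticeably lower
```

The model has not changed; the population has, and both its discrimination and its calibration quietly degrade.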
Supervised vs. Unsupervised Learning: What the Algorithm Is Actually Optimizing For
Most clinical AI tools use supervised learning: human experts labeled thousands of examples ("this chest X-ray shows pneumonia," "this patient developed sepsis within 6 hours"), and the algorithm learned to predict those labels. The critical point is that the algorithm is optimizing to predict the label — not the underlying clinical reality. If historical labels encoded human bias (e.g., undertriage of Black patients with chest pain), the algorithm will reproduce that bias at scale.
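A minimal synthetic sketch (Python, scikit-learn; all numbers hypothetical) of that last point: if the historical triage labels themselves encoded under-triage of one group, a model fit to those labels assigns that group lower acuity even at identical documented vitals.

```python
# Minimal sketch (synthetic data): a model trained on biased historical triage labels
# reproduces the bias for identical documented vitals. Numbers are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

severity = rng.normal(0, 1, n)          # true clinical severity (never directly observed)
group = rng.binomial(1, 0.3, n)         # 1 = group historically under-triaged
# Historical label: high-acuity assignment depended on severity AND, wrongly, on group
label = rng.binomial(1, 1.0 / (1.0 + np.exp(-(severity - 0.8 * group))))

vitals = severity + rng.normal(0, 0.5, n)   # what actually gets documented
model = LogisticRegression().fit(np.column_stack([vitals, group]), label)

# Identical documented vitals, different group membership
probe = np.array([[1.0, 0.0], [1.0, 1.0]])
p = model.predict_proba(probe)[:, 1]
print(f"Predicted high-acuity probability, group 0: {p[0]:.2f}")
print(f"Predicted high-acuity probability, group 1: {p[1]:.2f}")  # systematically lower
```

Dropping the group variable from the inputs does not fix this on its own; proxy variables such as zip code or insurance status can carry the same signal, the mechanism discussed later in this guide.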
Unsupervised learning works without those expert labels. Large language models (LLMs), the systems that generate clinical notes and summaries, are trained in a related, self-supervised way: they learn statistical patterns in vast amounts of text rather than predicting clinician-assigned outcomes. This is why LLMs can produce grammatically perfect, clinically confident output that is factually wrong, a phenomenon known as hallucination.
What "FDA-Cleared" Actually Means
As of 2024, the FDA had reviewed 882 AI-enabled medical products since 1995, with 154 identified as applicable to emergency medicine practice — primarily imaging-based tools in radiology, cardiovascular, and neurology panels.[²] "FDA-cleared" through the 510(k) pathway means the device was found substantially equivalent to a predicate device. It does not mean the tool was proven superior to existing clinical judgment, prospectively validated in your patient population, or shown to improve outcomes in diverse EDs. Roughly 97% of AI medical devices cleared by the FDA used the 510(k) pathway, which is a lower evidentiary bar than de novo or PMA approval.[³]
Where AI Fails: Hallucination, Bias, Black Boxes, and Automation Bias
Emergency physicians encounter four primary failure modes in clinical AI. Understanding each one is foundational to safe use.
1. Hallucination: Confident Fabrication
AI hallucination occurs when a language model generates output that is plausible in form but factually incorrect or entirely fabricated. In clinical documentation tools, hallucinations are not random errors — they tend to cluster in the plan section of clinical notes, where the stakes are highest. Research using the CREOLA framework found that while hallucinations occurred at a 1.47% overall rate in LLM-generated clinical documentation, 44% of those hallucinations were classified as "major" errors — including negation hallucinations that contradict what was said during the clinical encounter.[⁴]
In practical terms: an AI scribe might document that you told the patient not to take a medication you actually prescribed, or fabricate a symptom the patient never reported. If you sign that note, it becomes your documentation.
Studies estimate hallucination rates in AI clinical decision support systems range from 8% to 20% depending on model complexity and training data.[⁵] A 2025 multinational study of foundation models found that even top-performing AI systems retained non-trivial hallucination rates after mitigation strategies — and that medical-specialized models performed worse than general-purpose ones, despite domain-specific training.[⁵]
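To put those percentages in operational terms, here is a back-of-the-envelope calculation. The annual note volume is a hypothetical department, not a benchmark, and the rates come from the single CREOLA study cited above, treated as a per-note rate purely for illustration.

```python
# Back-of-the-envelope: major hallucinations per year at the rates cited above.
# The annual note volume is a hypothetical department, and the 1.47% figure is
# treated as a per-note rate purely for illustration.
notes_per_year = 60_000
overall_hallucination_rate = 0.0147   # CREOLA: 1.47% overall
major_share = 0.44                    # CREOLA: 44% of hallucinations classified as major

major_per_year = notes_per_year * overall_hallucination_rate * major_share
print(round(major_per_year))          # roughly 390 major errors per year
```

Even a rate that sounds small produces hundreds of major documentation errors a year at ED scale.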
2. Algorithmic Bias: Systematic Inequity at Scale
Algorithmic bias in the ED is not theoretical. A retrospective analysis of 297,355 adult ED visits found that Black patients had an adjusted odds ratio of 0.76 for high-acuity triage compared to White patients, and Hispanic patients had an adjusted odds ratio of 0.87 — disparities most pronounced for subjective chief complaints including chest pain, dyspnea, and pain.[⁶] If an AI triage system trained on this historical data is deployed without local validation, it will automate and scale that disparity.
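For readers less accustomed to odds ratios, the percentage framing used later in this guide follows directly from the ratio itself (on the odds scale, not the risk scale):

```latex
\mathrm{OR} = 0.76 \;\Rightarrow\; (0.76 - 1)\times 100\% = -24\%\ \text{(adjusted odds of high-acuity triage)}\\
\mathrm{OR} = 0.87 \;\Rightarrow\; (0.87 - 1)\times 100\% = -13\%
```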
The ACEP AI Task Force's 2025 JACEP Open report on bias identified three mechanisms through which bias enters AI systems:
1. Training data bias — historical clinical data reflects existing human disparities in care
2. Proxy variable bias — algorithms using race as a variable (e.g., kidney function estimators) encode race-based assumptions
3. Interpretation bias — clinicians may interpret AI outputs differently for different patient populations, compounding the error
A 2025 JAMA Network Open prognostic study found that intersectional debiasing approaches reduced subgroup calibration errors by an additional 5.7–11.1% and false-negative rates by 4.5% compared with standard fairness approaches — evidence that debiasing requires deliberate design, not passive intent.[⁷]
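What a "subgroup calibration error" audit looks like in practice can be shown in a short sketch. The code below is illustrative Python on synthetic data, not the method from the cited study: it computes a simple expected calibration error separately for two groups, one of which a hypothetical model systematically under-predicts.

```python
# Minimal sketch (synthetic data): auditing calibration separately for each subgroup.
# Illustrative Python only; not the method used in the cited study.
import numpy as np

def expected_calibration_error(y_true, y_prob, n_bins=10):
    """Bin predictions and average |observed rate - predicted rate|, weighted by bin size."""
    bins = np.minimum((y_prob * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            ece += mask.mean() * abs(y_true[mask].mean() - y_prob[mask].mean())
    return ece

rng = np.random.default_rng(2)
n = 10_000
group = rng.binomial(1, 0.3, n)
y_prob = rng.uniform(0, 1, n)                     # the model's risk predictions
true_risk = np.clip(y_prob + 0.10 * group, 0, 1)  # model under-predicts group 1 by ~10 points
y_true = rng.binomial(1, true_risk)

for g in (0, 1):
    mask = group == g
    ece = expected_calibration_error(y_true[mask], y_prob[mask])
    print(f"Subgroup {g}: calibration error = {ece:.3f}")
```

An aggregate calibration curve would average the two groups together and hide the gap, which is why subgroup reporting belongs on any evaluation checklist.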
3. The Black Box Problem: When You Cannot See the Reasoning
Many high-performing AI models — particularly deep learning networks used in imaging interpretation — operate as "black boxes." The algorithm produces a recommendation or a flag, but its reasoning is opaque. You cannot see which data points drove the conclusion. This matters in the ED because you cannot clinically validate a recommendation you cannot interrogate.
Explainable AI (XAI) techniques such as Shapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) can surface which variables most influenced a model's output — but only if the vendor builds that transparency into the product.[⁸] When evaluating any AI tool, "Can you show me why the model flagged this patient?" should be a required question, not an afterthought.
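As a concrete illustration of feature attribution, the sketch below trains a toy classifier on synthetic data and asks SHAP to explain a single prediction. It assumes the open-source shap package is installed; the feature names are placeholders, and a vendor would need to surface equivalent output at the point of care.

```python
# Minimal sketch (synthetic data): SHAP feature attribution for a single prediction.
# Assumes the open-source `shap` package is installed; feature names are placeholders.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
n = 2_000
features = ["lactate", "heart_rate", "age"]
X = np.column_stack([
    rng.normal(2.0, 1.0, n),    # "lactate"
    rng.normal(95, 15, n),      # "heart rate"
    rng.normal(60, 18, n),      # "age"
])
logit = -8 + 1.0 * X[:, 0] + 0.03 * X[:, 1] + 0.03 * X[:, 2]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

model = GradientBoostingClassifier().fit(X, y)

# Per-feature contributions to the model's output for one "patient"
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])[0]

for name, value in zip(features, contributions):
    print(f"{name:>10}: {value:+.3f}")   # positive values pushed toward the flag
```

Each number is that feature's push toward or away from the flag for this one patient, which is exactly the kind of answer "Can you show me why the model flagged this patient?" is asking for.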
4. Automation Bias: The Invisible Override
Automation bias is the tendency to defer to algorithmic output — even when your clinical judgment was correct and the AI was wrong. Research demonstrates that 67% of physicians who initially recommended against treatment changed their decision after viewing an AI recommendation in favor of treatment.[⁹] In a separate pathology study, automation bias caused initially correct evaluations to be overturned by incorrect AI guidance in approximately 7% of cases, and that rate worsened under time pressure.[¹⁰]
The emergency department is the highest time-pressure clinical environment in medicine. Automation bias is not a character flaw — it is a predictable cognitive response to working with authoritative-sounding systems under cognitive load. AI literacy means knowing you are vulnerable to it.
The YMYL Stakes: Why AI Errors in the ED Are Different
Google classifies medical content as "Your Money or Your Life" (YMYL) because the consequences of error are irreversible. The same logic applies to AI errors in the emergency department, but with compounding factors that do not exist in other settings.
Time compression. The average emergency physician has minutes — not hours — to review AI-generated documentation, verify AI triage decisions, and override AI recommendations. There is no time for manual validation of every output.
Incomplete information. ED patients arrive without complete medical histories. AI systems optimized on complete, well-documented inpatient data are frequently applied to the partial-data environment of emergency care, where a model's stated confidence often fails to drop even as the available clinical information thins out.
Documentation liability. When you sign an AI-generated clinical note, that note becomes your legal documentation. AI hallucinations in signed notes have direct medicolegal consequences. The March 2026 All Emergency Medicine AI Consensus Statement — issued by ACEP, SAEM, CORD, ABEM, AAEM, EMRA, AACEM, and AOBEM — explicitly asserts that emergency physicians retain full authority for patient care decisions and that AI must enhance, not replace, clinical judgment.[¹¹]
High-acuity, high-volume throughput. An AI triage algorithm that is 99% accurate in a department seeing 200 patients per day still produces about two wrong classifications every day. In a 50,000-visit-per-year emergency department, that is roughly 500 triage errors annually from a "highly accurate" system.
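The arithmetic is simple, but worth making explicit because error volume scales with throughput, not with the accuracy figure on the brochure. The numbers below are the illustrative ones used in this section.

```python
# The throughput arithmetic from the paragraph above, made explicit.
# Accuracy and visit counts are the illustrative figures used in the text.
accuracy = 0.99
visits_per_day = 200
visits_per_year = 50_000

print("Wrong classifications per day: ", round(visits_per_day * (1 - accuracy), 1))   # ~2
print("Wrong classifications per year:", round(visits_per_year * (1 - accuracy)))     # ~500
```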
A Practical Framework for Evaluating AI Tools
Before your department adopts any AI clinical tool, you need answers to these questions. Not from the vendor's marketing materials — from the peer-reviewed validation literature.
The Five-Domain Evaluation Framework
| Domain | Key Questions | Red Flags |
| --- | --- | --- |
| Training Data | What population trained this model? What were the demographics, geographic setting, and data completeness? | No published training data description; single-site, non-diverse cohort |
| Validation | Was the tool prospectively validated outside the training population? In an ED setting comparable to yours? | Retrospective only; validated only at academic centers; no subgroup analysis by race/ethnicity |
| Explainability | Can the system show you which inputs drove each output? Is XAI built in? | "Trust the model" as the only answer; no feature attribution available |
| Failure Mode Documentation | What types of errors does the tool make? At what rate? Under what conditions does performance degrade? | Sensitivity/specificity only; no error taxonomy; no performance data under data sparsity |
| Post-Deployment Monitoring | Does the vendor provide ongoing performance monitoring? Is there a mechanism to flag errors? | No monitoring plan; no feedback loop; performance data only from pre-deployment testing |
AI Tool Evaluation Checklist for Emergency Physicians
Before signing any AI implementation agreement, verify the following:
Clinical Validation
- Prospective validation study published in peer-reviewed journal
- Validation cohort demographically comparable to your patient population
- Subgroup performance data by race, ethnicity, age, and sex available
- ED-specific validation (not inpatient or outpatient only)
- Local validation plan included in implementation agreement
Transparency and Explainability
- Model outputs include confidence intervals or uncertainty estimates
- XAI feature attribution available at point of care
- Override mechanism documented and tested
- Staff training on when and how to override
Risk and Liability
- Contract specifies liability allocation for AI errors
- HIPAA compliance verified for any data transmitted to external AI platform
- PHI data use agreement reviewed by legal counsel
- Documentation standards for AI-assisted notes defined
Operational
- Post-deployment performance monitoring included in contract
- Error reporting mechanism for clinical staff
- Defined process for tool suspension if performance degrades
→ Deep Dive: When to Override the AI: A Framework for Emergency Physicians provides the clinical decision framework for individual cases.
What the Evidence Says
The research on AI in emergency medicine is moving rapidly. Here is where the peer-reviewed evidence stands as of early 2026.
ACEP AI Task Force: National Survey of Emergency Physicians (JACEP Open, December 2025)
The most current large-scale data on AI adoption in EM comes from the ACEP AI Task Force national survey of 658 emergency physicians.[¹] Key findings:
- 61% reported using at least some AI tools in clinical practice
- 63% of those using non-institutional AI were using ChatGPT directly in clinical contexts — without institutional vetting
- 75% believed AI improves clinical efficiency; 57% believed it enhanced care quality
- 38% expressed concern about potential biases
- Only about half desired educational support or guidelines — suggesting a significant portion of physicians using AI tools may not recognize what they do not know
The same task force published two additional JACEP Open reports: one on legal and ethical risks of generative AI platforms (including PHI data exposure), and one providing a framework for identifying and addressing bias in ED AI tools.[¹] Together, these reports represent the first professional-association-level analysis of AI use in emergency medicine.
JMIR Medical Informatics: AI and Emergency Medicine — Balancing Promise and Challenges (2025)
This peer-reviewed viewpoint paper provides a structured overview of AI applications in the ED alongside a rigorous analysis of failure modes.[⁸] Critically, the paper identifies explainable AI as a prerequisite for clinical trust — not an optional feature — and calls for structured override processes in any AI-enabled triage system. The authors note that "robust prospective trials remain limited" despite extensive early-use publications, and that large-scale multisite validation studies are needed before AI-driven solutions can be considered standard of care.
FDA-Reviewed AI Products Applicable to Emergency Medicine (American Journal of Emergency Medicine, 2025)
A systematic analysis of 882 FDA-reviewed AI products found 154 applicable to EM practice, primarily imaging tools in radiology (121/154), cardiovascular (24/154), and neurology (5/154).[²] Only 30 products achieved a "comparable or incremental net health benefit with moderate certainty" rating under the ICER Evidence Rating Matrix. The study identifies significant opportunities for EM physicians to participate in product review and for more meaningful clinical translation before widespread adoption.
Medical Hallucinations in Foundation Models (arXiv, 2025)
A multinational study benchmarking hallucination rates in foundation AI models found that medical-specialized models produced hallucination-free responses only 28.6% to 61.9% of the time, performing significantly worse than general-purpose models despite explicit medical training.[⁵] The study identifies hallucination as a reasoning-driven failure mode rather than a knowledge deficit, which has direct implications for how ED physicians should interact with and verify AI outputs.
Racial Disparities in ED Triage (Western Journal of Emergency Medicine, 2023)
Analysis of 297,355 adult ED visits found Black patients had a 24% lower adjusted odds of high-acuity triage and Hispanic patients had a 13% lower adjusted odds compared with White patients — disparities concentrated in subjective chief complaints.[⁶] This human-generated baseline bias is the substrate into which AI triage tools are being deployed. If bias existed in the training data, the AI will amplify rather than correct it.
Dr. Chet's Take
I have practiced emergency medicine for more than 25 years across high-volume urban trauma centers and community EDs. When AI tools started arriving in my department — first ambient scribes, then imaging flags, then triage decision support — my initial reaction was clinical curiosity. My second reaction, after about six months of watching how my colleagues actually interacted with these tools, was concern.
The pattern I see repeatedly is this: a busy physician gets an AI-generated clinical note that is 95% accurate. Over time, they stop reading it as carefully as they should. The automation bias literature tells us exactly why — it is a normal cognitive response to a system that is usually right. But "usually right" is not a documentation standard. And in emergency medicine, the 5% where it is wrong is almost never random. It tends to cluster in the highest-complexity patients: the atypical presentations, the patients with incomplete histories, the patients whose physiology doesn't match the training distribution. Those are the patients who can least afford a wrong note.
The ACEP AI Task Force data showing 63% of AI-using physicians are using ChatGPT directly — outside institutional vetting — is the finding that should concern every department medical director. There is no HIPAA agreement in place. There is no validation study. There is no error monitoring. There is a confident-sounding system, a time-pressured physician, and a patient who has no idea their clinical information was processed by an external AI platform.
My recommendation: before any AI tool touches a clinical workflow in your department, demand to see the validation literature. Ask what population trained the model. Ask about subgroup performance. Ask what happens when the model is wrong and who carries the liability. Those are not technology questions — they are clinical questions. They are the same questions we ask about any other clinical intervention.
The providers who will thrive through this transition are not the ones who adopt AI fastest or who resist it most. They are the ones who understand it clearly enough to deploy it safely, override it confidently, and explain their decisions to a peer review committee or a jury. That clarity starts with AI literacy.
— Chester Shermer, MD, FACEP | Emergency Medicine, 25+ Years Clinical Experience
Full bio and credentials → | LinkedIn
Frequently Asked Questions
Q: What is AI literacy in emergency medicine, and why does it matter?
A: AI literacy in emergency medicine is a physician's working knowledge of how AI systems function, where they characteristically fail, and how to maintain clinical accountability when AI tools are involved in patient care decisions. It matters because emergency physicians who sign off on AI-generated clinical notes, AI-assisted triage decisions, or AI-flagged diagnoses carry legal and ethical responsibility for those outputs — regardless of whether the AI was wrong.
Q: What are the most dangerous AI failure modes for emergency physicians to understand?
A: The four primary failure modes in clinical AI are: (1) hallucination — AI-generated output that is factually wrong or fabricated, most commonly in documentation systems; (2) algorithmic bias — systematic errors that disproportionately affect minority patient populations due to biased training data; (3) black-box opacity — AI recommendations with no explainable reasoning that clinicians cannot interrogate or validate; and (4) automation bias — the tendency for clinicians to defer to AI output even when their own clinical judgment was correct. All four appear in emergency medicine AI tools currently in clinical use.
Q: How do I know if an AI tool has been properly validated for emergency medicine use?
A: Look for peer-reviewed prospective validation studies that tested the tool in an ED setting, on a patient population demographically comparable to yours. Ask specifically for subgroup performance data by race, ethnicity, age, and sex. An AI tool validated only in academic medical centers or on retrospective data from a single health system has not been properly validated for general EM deployment. See the AI Tool Evaluation Framework for a full clinical evaluation checklist.
Q: Am I legally liable if an AI tool generates a wrong note that I sign?
A: Yes. Under current legal frameworks, the physician who signs a clinical note bears responsibility for its content, regardless of whether AI assisted in generating it. The March 2026 ACEP All-EM AI Consensus Statement affirms that emergency physicians retain full authority for patient care decisions. AI errors documented in signed notes can constitute the basis for malpractice claims. See AI Malpractice Liability for Emergency Physicians for a full medicolegal analysis.
Q: What is automation bias, and how does it affect emergency physicians specifically?
A: Automation bias is the tendency to follow AI recommendations even when doing so overrides a previously correct clinical judgment. Research shows that 67% of physicians who initially recommended against treatment changed their recommendation after seeing an AI output recommending treatment. In emergency medicine — the highest-volume, highest-time-pressure specialty — automation bias is a predictable cognitive response, not a character flaw. AI literacy training specifically addresses how to maintain independent clinical judgment while working with AI tools.
Q: Does FDA clearance mean an AI tool is safe and effective for my emergency department?
A: FDA clearance means the device met regulatory requirements for marketing, most commonly through the 510(k) pathway — a substantial equivalence standard, not a clinical superiority standard. Of 882 AI-enabled products FDA-reviewed through 2024, only 154 were identified as applicable to emergency medicine, and only 30 achieved a rating of comparable or incremental net health benefit with moderate certainty. FDA clearance is necessary but not sufficient evidence of clinical readiness for your specific patient population.
Q: How does AI bias enter ED triage systems?
A: AI triage bias typically enters through three mechanisms: (1) training data encoded with historical human disparities in triage accuracy; (2) proxy variables that correlate with race without explicit racial labeling; and (3) interpretation bias, where clinicians may respond differently to the same AI output for different patient groups. A 2023 study found Black patients had a 24% lower adjusted odds of high-acuity triage and Hispanic patients had a 13% lower adjusted odds compared with White patients — the human-generated bias that AI triage tools can automate and amplify at scale. See Algorithmic Bias in ED Triage for the full analysis.
Q: What training does ACEP recommend for AI literacy in emergency medicine?
A: The ACEP AI Task Force's 2025 national survey found that about half of emergency physicians desire educational support and guidelines on AI use — and that fewer than half currently have access to them. ACEP's permanent AI Committee now focuses on developing educational resources and advocacy for AI equity. At the institutional level, the ACEP/SAEM letter to the NIH calls for formalized AI training, certification, and licensure as part of workforce development. For physician-level foundational training, Harvard Medical School launched its AI in Emergency Medicine CME course (December 2026, 11.75 AMA PRA Category 1 Credits) targeting frontline clinicians and department leaders.
Q: What should I do before my department adopts a new AI clinical tool?
A: Before any AI tool is deployed in your department's clinical workflow: demand peer-reviewed validation literature; verify the training population matches your patient demographics; require subgroup performance data by race, ethnicity, age, and sex; have legal counsel review data-sharing agreements for HIPAA compliance; define documentation standards for AI-assisted notes; establish an override protocol; and build post-deployment performance monitoring into the contract. Browse Dr. Shermer's AI courses at Global MedOps Command →
Q: How is AI literacy in emergency medicine different from general digital health literacy?
A: General digital health literacy covers using technology tools safely — EHRs, telehealth platforms, digital communication. AI literacy goes deeper: it requires understanding how a system learned what it knows, why that learning process produces systematic errors, and how to maintain clinical accountability when an algorithm participates in care decisions. In emergency medicine, this translates to practical skills: reading an AI validation study critically, identifying when a clinical note contains a hallucination, recognizing when an AI triage recommendation reflects training data bias, and knowing when to override a recommendation and how to document that decision.
Medical Disclaimer
This content is intended for licensed medical professionals, EMS personnel, and trained emergency responders. It does not constitute personalized medical advice. Clinical protocols and AI evaluation frameworks referenced are for educational purposes and should be adapted to your jurisdiction's scope of practice and applicable medical direction. For patient care, always follow your agency's protocols and consult medical direction as required.
Continue Your Training
Structured Courses:
Everything covered in this guide is built into Dr. Shermer's clinical training programs — scenario-based, protocol-driven, and designed for emergency physicians who need to work confidently in AI-integrated environments.
→ Browse All Courses at Global MedOps Command
Clinical Reference eBooks:
How to Avoid Becoming an AI Casualty
Navigate AI tools in clinical and operational settings without compromising judgment or patient outcomes. Written for emergency physicians, by an emergency physician.
The Emergency Medicine Observation Unit
Evidence-based framework for observation unit operations, patient flow optimization, and clinical decision protocols.
Emergency Department Efficiency Playbook
Practical systems for throughput, triage optimization, and operational efficiency from 25+ years in high-volume emergency departments.
Connect with Dr. Shermer on LinkedIn:
Chester "Chet" Shermer, MD, FACEP
References
1. Shy BD, Baloescu C, Faustino IV, et al. Early Insights Among Emergency Medicine Physicians on Artificial Intelligence: A National, Convenience-sample Survey of the American College of Emergency Physicians. J Am Coll Emerg Physicians Open. 2025;7(1):100308. Published 2025 Dec 26. doi:10.1016/j.acepjo.2025.100308. https://pubmed.ncbi.nlm.nih.gov/41536575/
2. Friedman AB, Garg R, Lin Z, et al. FDA-reviewed artificial intelligence-enabled products applicable to emergency medicine. Am J Emerg Med. 2025;89:241-246. doi:10.1016/j.ajem.2024.12.062. Epub 2024 Dec 25. https://pubmed.ncbi.nlm.nih.gov/39755027/
3. Joshi I, Morley J. Artificial Intelligence: How to Get It Right. An analysis of FDA-approved AI/ML-enabled medical devices. NHSX Policy Lab. Referenced in: IntuitionLabs AI. AI Medical Devices: 2025 Status, Regulation & Challenges. https://intuitionlabs.ai/articles/ai-medical-devices-regulation-2025
4. Richter V, Marshall R, et al. A framework to assess clinical safety and hallucination rates of LLMs for clinical documentation. npj Digital Medicine. 2025. doi:10.1038/s41746-025-01670-7. https://www.nature.com/articles/s41746-025-01670-7
5. Mullangi S, et al. Medical Hallucination in Foundation Models and Their Impact on Healthcare. arXiv. 2025. https://arxiv.org/html/2503.05777v2
6. Takayama A, Gao C, et al. Racial Differences in Triage for Emergency Department Patients with Subjective Chief Complaints. West J Emerg Med. 2023;24(5):894-902. https://pmc.ncbi.nlm.nih.gov/articles/PMC10527826/
7. Hong C, et al. Intersectional and Marginal Debiasing in Prediction Models for Emergency Admission. JAMA Network Open. 2025;8(5). doi:10.1001/jamanetworkopen.2025.XXXXX. https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2834551
8. Rao A, Kiani A, et al. Artificial Intelligence (AI) and Emergency Medicine: Balancing Promise and Challenges. JMIR Med Inform. 2025;13:e70903. doi:10.2196/70903. https://medinform.jmir.org/2025/1/e70903
9. Ryan P, et al. The Importance of Understanding AI's Impact on Physician Behavior. npj Mental Health Research. 2025. https://pmc.ncbi.nlm.nih.gov/articles/PMC12718312/
10. Hekler A, et al. Automation Bias in AI-Assisted Medical Decision-Making under Time Pressure. arXiv. 2024. https://arxiv.org/html/2411.00998v1
11. American College of Emergency Physicians. Leading EM Organizations Issue Consensus Statement on Artificial Intelligence in EM. ACEP Newsroom. March 18, 2026. https://www.acep.org/news/acep-newsroom-articles/3-18-26-leading-em-organizations-issue-consensus-statement-on-artificial-intelligence-in-em
12. Harvard Medical School Professional, Corporate, and Continuing Education. AI in Emergency Medicine. Two-day CME course, December 3–4, 2026. https://learn.hms.harvard.edu/programs/ai-emergency-medicine
Published by Global MedOps Command | globalmedopscommand.com
"Prepared for Every Emergency. Educated for Every Challenge."