Prompt Engineering for EM Docs
AI tools only work as well as the instructions you give them.
It is 2:47 a.m. and Bay 6 is a mess. A 58-year-old male presents with chest pain, diaphoresis, and a troponin that is borderline. His EKG shows subtle ST changes in the inferior leads, but nothing screaming. You pull up an AI assistant on the workstation — something you have been using more and more lately — and you type: "chest pain workup."
The AI gives you a wall of generic text about the differential diagnosis of chest pain. You already know that. What you needed was something like: "For a 58-year-old male with borderline troponin, subtle inferior ST changes, and ongoing chest pain, walk me through the risk stratification decision points between ACS and a low-probability PE, including the HEART score and Wells criteria, and flag any red flag features I should not miss."
That second prompt? That is prompt engineering. And for emergency physicians, it is rapidly becoming one of the most high-yield clinical skills you are not being formally taught.
What Is Prompt Engineering — and Why Should EM Care?
Prompt engineering is the practice of crafting inputs to a large language model (LLM) so that the output is clinically useful, contextually accurate, and actionable. It is not a technical skill reserved for software developers. It is a communication skill — and emergency physicians are already expert communicators under pressure.
Here is the practical reality: LLMs are trained on massive amounts of text, but they respond to context. The more clinical context you provide, the more clinically specific the answer. A vague prompt gets you a vague answer. A precisely framed prompt — one that establishes the patient scenario, the decision you are facing, the constraints you are working under, and the format you need — gets you something you can actually use.
Emergency medicine is uniquely positioned to benefit from this. You work fast, in high-stakes environments, with incomplete information. AI tools, used correctly, can function as a second attending in your pocket — capable of synthesizing evidence, flagging decision branch points, and even structuring your documentation. But only if you know how to ask.
The Four Pillars of a Strong Clinical Prompt
Every strong clinical prompt in emergency medicine rests on the same four components. Think of it as a SOAP note for your AI query.
1. Role Assignment
Tell the AI who it is. "Act as an emergency medicine attending with expertise in toxicology." This activates the model's relevant knowledge domain. Without this framing, the AI will respond as a generalist, giving you the kind of answer you might find in a first-year medical school textbook.
2. Clinical Context
Give it the patient scenario. Age, sex, vital signs, chief complaint, relevant history, key lab values, and pertinent negatives. The richer the clinical context, the more targeted the output. Think of this as the HPI. Garbage in, garbage out — but specificity in, specificity out.
3. Explicit Task
State exactly what you want. Do you want a differential? A management plan? A structured discharge summary? A list of drug interactions? If you do not specify the task, the AI will guess — and its guess will often be too broad.
For example: "Generate a structured differential diagnosis prioritized by likelihood and risk, not alphabetically. Limit to the top five diagnoses for this presentation."
4. Output Format
Tell the AI how to deliver the answer. Bullet points? A table? A narrative paragraph? A two-sentence summary? Format directives dramatically improve usability, especially when you are reading a response at 3 a.m. between patients. You might add: "Respond in bullet points. Be concise. Assume I have residency-level EM training."
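To make the framework concrete, here is a minimal sketch (in Python, with all function and variable names hypothetical, not any specific AI tool's API) of how the four pillars can be assembled into a single prompt string before it is sent to an LLM:

```python
# Minimal sketch: assembling the four pillars (Role, Context, Task, Format)
# into one explicit prompt string. Names are illustrative only.

def build_clinical_prompt(role: str, context: str, task: str, fmt: str) -> str:
    """Combine the four pillars into a single, explicit prompt."""
    return (
        f"{role}\n\n"
        f"Clinical context: {context}\n\n"
        f"Task: {task}\n\n"
        f"Output format: {fmt}"
    )

prompt = build_clinical_prompt(
    role="Act as an emergency medicine attending with expertise in cardiology.",
    context=("58-year-old male, ongoing chest pain, diaphoresis, "
             "borderline troponin, subtle inferior ST changes."),
    task=("Walk through risk stratification between ACS and low-probability PE, "
          "including the HEART score and Wells criteria; flag red flag features."),
    fmt="Bullet points. Be concise. Assume residency-level EM training.",
)
print(prompt)
```

The point of the sketch is discipline, not automation: writing the four parts separately forces you to notice when one is missing before you hit enter.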
Prompt Patterns That Actually Work in the ED
Here are four prompt patterns with real-world ED applications. These are not hypothetical — these are approaches I have used and refined in clinical practice.
The Differential Generator
PROMPT EXAMPLE:
"You are an experienced emergency medicine attending. A 32-year-old woman presents with acute onset severe headache, 10/10, described as the worst headache of her life. BP 142/88. No fever, no neck stiffness noted on initial exam. No prior similar episodes. Generate a prioritized differential diagnosis. Include the must-not-miss diagnoses first. For each, list one key discriminating feature I should look for. Format as a numbered list."
This prompt forces the AI to prioritize the can't-miss diagnoses (subarachnoid hemorrhage, cerebral venous thrombosis) over the statistical majority (tension-type headache, migraine) — which is exactly how emergency physicians think.
The Documentation Drafter
PROMPT EXAMPLE:
"Using the following clinical information, draft an ED physician medical decision-making note in standard format. Include: clinical impression, decision points addressed, plan with rationale, and return precautions. Write at a level appropriate for the medical record. [Paste de-identified clinical data.]"
The AI does not replace your documentation — it creates a draft you review and sign. Critically, you must de-identify all patient data before entering it into any external AI tool. Understand your institution's AI governance policy before using any cloud-based LLM for documentation tasks.
The Patient Communication Translator
PROMPT EXAMPLE:
"Rewrite the following discharge instructions for a patient with a 6th grade reading level. Remove medical jargon. Use short sentences. Emphasize the three most important return precautions. [Paste your standard discharge instructions.]"
Research published in Cureus (2025) found that ChatGPT significantly outperformed physicians on the interpretability of return precautions, suggesting that AI can simplify complex information without losing critical detail. This is an immediately deployable use case with minimal risk and real patient benefit.
The Chain-of-Thought Reasoner
This is the most powerful and underused technique in clinical prompt engineering. Chain-of-thought prompting asks the AI to reason step by step before giving its answer — an approach that can substantially reduce hallucinations and improve diagnostic accuracy.
PROMPT EXAMPLE:
"Think through this case step by step before giving your final answer. First, identify the most critical historical features. Second, note the vital sign abnormalities and their significance. Third, consider the most dangerous diagnoses this presentation could represent. Fourth, give your recommended immediate workup. Patient: [clinical scenario]"
By forcing the model to show its reasoning, you also get to audit that reasoning — and catch it when it misses something important. This mirrors attending-level supervision of a clinical presentation.
What AI Cannot Do — and Where EM Judgment Is Irreplaceable
Prompt engineering makes AI more useful. It does not make AI infallible.
A March 2026 study published in Nature Medicine found that ChatGPT Health under-triaged 51% of medical emergency scenarios, advising patients to consult a doctor within 24 to 48 hours rather than directing them to the ED immediately. That number should give every emergency physician pause.
AI tools perform best when they are given rich context and when the physician validates the output. They struggle with:
• Cases that require integrating the patient's physical appearance and non-verbal cues
• Rapidly evolving clinical situations where status changes by the minute
• Rare diagnoses underrepresented in training data
• Legal and ethical decision-making that requires institutional knowledge
The physician-AI interface should always be bidirectional: you provide context, the AI provides synthesis, and you apply clinical judgment. The moment the AI becomes the decision-maker — rather than the decision-support tool — you have crossed into unsafe territory.
What You Should Be Doing Now
If you are an emergency physician who has been using AI tools casually or not at all, here is your immediate action list:
• Pick one AI tool you have access to — ChatGPT, Claude, Copilot — and commit to using it intentionally for five consecutive shifts.
• Practice the four-pillar framework: Role, Context, Task, Format. Write your prompts before you type them.
• Start with low-risk, high-volume tasks: discharge instruction translation, documentation drafting, literature lookups.
• Build a personal prompt library. Save the prompts that get you good results. Iterate. Refine.
• Never enter identifiable patient information into a non-enterprise AI tool. Always check your institutional data governance policy.
• Teach your residents and APPs the basics of prompt engineering. If you are the attending, you are setting the standard for how AI gets used in your department.
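A personal prompt library does not need special software. As a minimal sketch (in Python, with all entries and names hypothetical), it can be nothing more than saved templates with placeholders you fill in per case:

```python
# Minimal sketch of a personal prompt library: reusable templates with
# {placeholders} filled in per patient. Entries are illustrative only.

PROMPT_LIBRARY = {
    "discharge_translation": (
        "Rewrite the following discharge instructions for a patient with a "
        "6th grade reading level. Remove medical jargon. Use short sentences. "
        "Emphasize the three most important return precautions.\n\n{instructions}"
    ),
    "differential": (
        "You are an experienced emergency medicine attending. {scenario} "
        "Generate a prioritized differential diagnosis, must-not-miss first, "
        "with one key discriminating feature for each. "
        "Format as a numbered list."
    ),
}

def fill_prompt(name: str, **fields: str) -> str:
    """Look up a saved template and substitute the case-specific details."""
    return PROMPT_LIBRARY[name].format(**fields)

print(fill_prompt(
    "differential",
    scenario="A 32-year-old woman with sudden onset worst-ever headache.",
))
```

Whatever form the library takes — a notes app, a text file, a script like this — the habit is the same: save what works, reuse it, and refine it shift over shift.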
DR. CHET'S TAKE
I spent years watching emergency physicians complain that AI tools give useless answers. Then I watched those same physicians type three-word prompts and wonder why they got three-word answers.
The AI has not changed. The prompt has.
Emergency medicine already produces some of the best clinical communicators in medicine. We are trained to gather precise information under pressure, synthesize it quickly, and act. Prompt engineering is just applying that same discipline to a new kind of interface.
I am not asking you to become a data scientist. I am asking you to treat your AI prompts the same way you treat your clinical questions: with specificity, with purpose, and with accountability for the output.
Because right now, the physicians who are learning to communicate effectively with AI are getting measurably better results from the same tools that everyone else says don't work.
That gap is only going to widen.
— Dr. Shermer
About the Author
Chester "Chet" Shermer, MD is an Emergency Medicine Physician, Medical Director, and AI in Medicine Educator. He is the founder of Global MedOps Command (GMOC), a platform dedicated to equipping clinicians with the operational and technological tools they need to lead in modern medicine.
Dr. Shermer is the author of multiple clinical reference books for emergency providers:
• AI Casualty — Understanding and preparing for AI-driven disruption in clinical medicine
• The Observation Unit Playbook — Practical guidance for managing observation status patients in the ED
• ED Efficiency — Systems-level approaches to throughput, documentation, and department performance
Explore Dr. Shermer's full course catalog and clinical resources at the GMOC Kajabi Store.
Connect on LinkedIn for clinical AI commentary, leadership insights, and updates from the front lines of emergency medicine.
Sources
A Practical Guide to the Utilization of ChatGPT in the Emergency Department (Cureus, 2025)
ChatGPT With GPT-4 Outperforms Emergency Department Resident Physicians — JMIR (2024)
Prompt Engineering in Healthcare: Best Practices, Strategies & Trends (HealthTech Magazine, 2025)
ChatGPT Health under-triaged half of medical emergencies (NBC News / Nature Medicine, 2026)
Best Practices: Prompt Engineering for Medical Companies (T3 Consultants, 2026)