AI Change Management in the Emergency Department: A Leader's Practical Guide
Medically Reviewed By Chester "Chet" Shermer, MD, FACEP | Published May 4, 2026 | 9 min read
AI change management in the emergency department is the structured leadership work of moving a department from skepticism, fragmented adoption, or shadow AI use to coordinated, governed, and clinically effective integration. The single biggest predictor of successful ED AI rollout is not the tool itself. Roughly 80 percent of healthcare AI initiatives fail at the execution layer — adoption, workflow, and governance — not the technology. This guide gives ED leaders a practical, evidence-based playbook.
Why Change Management Is the Bottleneck for ED AI
Physicians are already adopting AI — often without their organizations. Recent national survey data show that 81 percent of physicians now use AI in practice, more than double the rate from three years earlier, with 92 percent reporting they want more formal training, particularly when AI tools are embedded in the EHR (The Fox Group, 2026). At the same time, only 16 percent of healthcare organizations have system-wide AI governance frameworks in place (Strativera, 2025).
That gap is the problem. Physicians are building personal AI tool stacks ahead of organizational policy, hospitals are buying enterprise AI without coordinated rollout, and the ED — the most operationally complex environment in the building — is often last to be addressed. The result is the "adoption paradox": widespread, fragmented use without governance, training, or measurable outcomes.
A 2024 systematic review in JMIR Human Factors identified the persistent barriers to clinical AI adoption: lack of trust in outputs, fear of professional identity erosion, insufficient AI literacy, inadequate workflow integration, and concerns about liability (JMIR Human Factors, 2024). All five are leadership problems, not technology problems.
- Cultural resistance is normal: Medicine has historically been slow to adopt new tools because patient safety concerns are legitimate. Treat resistance as a signal, not as obstruction.
- Burnout amplifies resistance: Asking exhausted clinicians to learn a new tool without first showing them the burden it removes is a guaranteed failure mode.
- The ED is uniquely high-stakes: Multi-patient context, interruption-driven workflow, and the medico-legal weight of every documented decision raise the bar for change leadership.
What ED Leaders Actually Need to Manage
ED AI change management is not a single project. It is the combined leadership of three workstreams happening at once:
- Technical change: EHR integration, vendor onboarding, system configuration, and downtime procedures.
- Workflow change: New documentation steps, new alerts, new handoffs, new consent scripts.
- Behavioral change: Trust in the tool, willingness to use it, calibration of when to override, and reinforcement of new habits.
Most rollouts fail because leadership treats only the first workstream as its job and assumes the other two will sort themselves out.
Two Frameworks Every ED Leader Should Know
The two most widely used change management frameworks in healthcare AI deployments are Kotter's 8-Step Process and the Prosci ADKAR model (Interreg Baltic, 2025). They complement each other.
Kotter's 8 Steps — the organizational lens
Kotter is top-down. It describes how leaders move a whole organization through change.
- Create urgency: Anchor the rollout to a real ED pain point — documentation burden, sepsis miss rate, throughput, burnout.
- Build a guiding coalition: A small, high-credibility group of ED physicians, nurses, informatics, IT-security, and a CMIO sponsor.
- Form a strategic vision: A written, two-paragraph statement of what success looks like in 12 months. Specific. Measurable.
- Enlist a volunteer army: Move beyond the coalition to a critical mass of clinicians who will use, troubleshoot, and advocate.
- Remove barriers: Fix the EHR integration, the consent workflow, the after-hours support, the downtime plan. Do not ask physicians to absorb friction.
- Generate short-term wins: Publish 30-day pilot results. A 28 percent reduction in documentation time is a story that recruits the next ten adopters.
- Sustain acceleration: Do not declare victory after the pilot. Roll the same playbook into the next tool.
- Institute change: Bake the new practice into onboarding, training, performance review, and policy.
ADKAR — the individual lens
ADKAR is bottom-up. It describes what every individual clinician needs in order to adopt the change (And Change, 2025).
- Awareness of why the change is happening — in plain ED-relevant language.
- Desire to participate, driven by clear personal benefit (less time on the keyboard, less after-hours charting).
- Knowledge of how to use the tool — delivered as ED-specific training, not generic outpatient demos.
- Ability to use it competently in real shifts, with mentorship and at-the-elbow support during go-live.
- Reinforcement — recognition, ongoing feedback loops, and integration into normal departmental workflow so the change does not regress.
The two frameworks together cover both sides: Kotter tells you how to move the department, ADKAR tells you how to move each clinician.
A Step-by-Step ED AI Change Management Playbook
Step 1 — Pick a problem worth solving
Do not start with "we need to deploy AI." Start with "documentation is consuming 50 percent of every shift" or "we are missing high-risk sepsis cases at triage." Tie the rollout to a measurable pain point, with a baseline number, before you select a vendor.
Step 2 — Build a small guiding coalition
Five to eight people: an ED medical director, an ED nurse leader, a physician champion who actually carries shifts, an informatics lead, an IT-security representative, a compliance officer, and ideally a resident or APP. No more. Bigger committees move slower and own less.
Step 3 — Define success in writing, with numbers
Pick three metrics: one operational (documentation time, time-to-disposition, throughput), one clinical (note quality, miss rate, alert fatigue), and one human (burnout score, satisfaction, retention). Capture baseline. You cannot manage what you have not measured.
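The three-metric scaffold in Step 3 can be sketched as a small data structure. This is a minimal illustration only, with hypothetical metric names and baseline values, not a validated instrument:

```python
from dataclasses import dataclass

@dataclass
class RolloutMetric:
    """One tracked metric: name, category, units, and pre-pilot baseline."""
    name: str
    category: str          # "operational", "clinical", or "human"
    unit: str
    baseline: float

    def percent_change(self, current: float) -> float:
        """Signed percent change from baseline (negative = decrease)."""
        return (current - self.baseline) / self.baseline * 100

# Hypothetical baselines, captured before vendor selection
metrics = [
    RolloutMetric("documentation_minutes_per_shift", "operational", "min", 240),
    RolloutMetric("note_quality_score", "clinical", "0-100", 78),
    RolloutMetric("burnout_index", "human", "0-10", 6.8),
]

# Example: documentation time measured 30 days into the pilot
print(round(metrics[0].percent_change(172), 1))  # -28.3
```

The point of the structure is the discipline it forces: every metric must have a named unit and a captured baseline before the pilot starts, so the 30-day comparison is arithmetic rather than argument.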
Step 4 — Engage physician champions early
Real-world deployments that succeed all share this pattern. Henry Ford Health rolled out RapidAI for stroke by first enlisting champions among its own stroke neurologists — they tested the tool, troubleshot, advocated internally, and brought the rest of the department along, ultimately reducing median door-to-puncture time by about 40 percent (American Medical Association, 2025). For the underlying selection methodology, see How to Evaluate an AI Tool Before Using It in Your Emergency Department.
Step 5 — Communicate transparently and repeatedly
Most physicians do not resist change. They resist surprise. Publish the rollout plan, the pilot timeline, the data being collected, and the decision criteria for go-live. Repeat it at every section meeting until people are bored of hearing it. That is when the message has actually landed.
Step 6 — Train for the actual ED environment
Generic vendor training videos will not cut it. Build a 30-minute orientation that uses real ED scenarios — a trauma activation interrupting a stroke workup, a multi-patient hallway, a downtime drill. Include the consent script. Include what to do when the tool fails.
Step 7 — Run a structured pilot, then decide
Five to ten physicians for 30 to 60 days. Pre/post measurement on the same instruments. The decision rule must be set in advance: which thresholds trigger broader rollout, which trigger redesign, which trigger termination. Without a pre-set decision rule, pilots become permanent.
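Setting the decision rule in advance means it can literally be written down before go-live. A minimal sketch of the scale/redesign/terminate logic, with hypothetical threshold values that are examples rather than recommendations:

```python
# Pre-set pilot decision rule, agreed on before go-live.
# All threshold values below are hypothetical examples.

SCALE_THRESHOLDS = {
    "doc_time_reduction_pct": 20,   # operational: must drop at least 20%
    "note_quality_min": 75,         # clinical: must stay at or above floor
    "burnout_improved": True,       # human: must not worsen
}

def pilot_decision(doc_time_reduction_pct: float,
                   note_quality: float,
                   burnout_improved: bool,
                   safety_event: bool) -> str:
    """Return 'scale', 'redesign', or 'terminate' per pre-set thresholds."""
    if safety_event or note_quality < 60:
        return "terminate"           # hard stop: safety or quality floor breached
    if (doc_time_reduction_pct >= SCALE_THRESHOLDS["doc_time_reduction_pct"]
            and note_quality >= SCALE_THRESHOLDS["note_quality_min"]
            and burnout_improved == SCALE_THRESHOLDS["burnout_improved"]):
        return "scale"               # every metric met its pre-set bar
    return "redesign"                # mixed results: fix the failure first

print(pilot_decision(28, 81, True, False))   # scale
print(pilot_decision(12, 81, True, False))   # redesign
```

The specific numbers matter less than the fact that they are committed to in writing before the first pilot shift, which is what keeps a pilot from quietly becoming permanent.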
Step 8 — Scale only after the pilot meets pre-defined thresholds
If documentation time dropped, burnout indicators improved, and note quality met liability and billing standards, scale. If any metric failed, fix the failure first. Do not scale a broken pilot.
Step 9 — Reinforce, measure, and iterate
The rollout is not over at go-live. Set a 90-day post-launch review and a 12-month review. Track whether usage is sustained, whether note quality holds, and whether the original pain point actually moved. If usage drops off, that is the data telling you the workflow needs another iteration.
Common Mistakes ED Leaders Make — and How to Avoid Them
- Mistake 1: Treating AI rollout as an IT project. Correct approach: It is a clinical leadership project with IT support. The medical director owns it.
- Mistake 2: Mandating use from day one. Correct approach: Real-world data show physicians appropriately self-select AI tools for the right encounters. Mandates degrade trust without improving outcomes.
- Mistake 3: Ignoring shadow AI. Correct approach: Physicians are already using personal AI tools. Pretending otherwise is a governance failure. Acknowledge it, set policy, and provide a sanctioned alternative.
- Mistake 4: Skipping the burnout conversation. Correct approach: Lead with what the tool removes from the clinician's plate, not with what it adds. Reducing administrative burden through automation is the single biggest opportunity physicians themselves identify (ICTworks, 2025).
- Mistake 5: Declaring victory after the pilot. Correct approach: Sustained adoption requires reinforcement. Build it into onboarding, performance review, and ongoing measurement.
Dr. Chet's Take
The biggest leadership mistake I have seen with AI rollouts is leaders falling in love with the tool before they fall in love with the problem. If you cannot describe, in one sentence, what gets better for the patient or the physician on shift, you are not ready to roll anything out. The technology is the easy part now. Vendors will sell you a polished product. The hard part is the human change work — explaining, training, listening, removing friction, and reinforcing the right behavior week after week.
What I tell other medical directors is this: AI change management is just clinical leadership applied to a new tool. The same patience, the same coalition-building, the same insistence on measurement, the same willingness to own a failed pilot — that is what gets a department from "we tried it once" to "we cannot imagine practicing without it." There are no shortcuts. The departments that try to skip the people work end up with expensive software no one uses.
— Chester Shermer, MD, FACEP | Emergency Medicine, 25+ Years Clinical Experience
Key Takeaways
- Roughly 80 percent of healthcare AI initiatives fail at the execution layer — adoption, workflow, and governance — not the technology.
- Physicians are adopting AI faster than organizations are governing it: 81 percent of physicians now use AI in practice; only 16 percent of healthcare organizations have system-wide AI governance.
- Kotter's 8 Steps gives leaders the organizational playbook; ADKAR gives the individual-level playbook. Use both.
- Real-world success patterns are consistent: physician champions, transparent communication, ED-specific training, structured pilots, pre-defined decision rules, and post-launch reinforcement.
- The medical director owns the change, not IT. Treat AI rollout as clinical leadership applied to a new tool.
Frequently Asked Questions
Q: What is AI change management in the emergency department?
A: It is the structured leadership work of moving an ED through the technical, workflow, and behavioral changes required to adopt and sustain a new AI tool. It includes governance, physician engagement, training, pilot design, measurement, and post-launch reinforcement — not just installation.
Q: Which change management framework works best for healthcare AI?
A: There is no single best framework. The two most widely used in healthcare are Kotter's 8 Steps (organizational) and the Prosci ADKAR model (individual). Most successful AI rollouts use both. Survey data show that more than half of European hospitals do not use any formal change management framework — which correlates strongly with implementation failure.
Q: How long does an ED AI rollout typically take?
A: AMA data suggest the average hospital takes about 23 months to go from identifying a digital health need to scaling a solution. A focused, well-led ED rollout can run a 30-to-60-day pilot followed by a 60-to-90-day department scale-up, but this requires baseline measurement, pre-set decision rules, and dedicated physician champion time.
Q: Why do most healthcare AI rollouts fail?
A: Most fail at the execution layer rather than the technology layer. Common patterns include skipping baseline measurement, treating rollout as an IT project, ignoring shadow AI use, mandating universal adoption from day one, and not reinforcing the change after go-live. These are leadership failures, not vendor failures.
Q: How do I get skeptical ED physicians to adopt a new AI tool?
A: Lead with what the tool removes, not what it adds. Recruit two or three credible physician champions early. Run a small pilot with transparent baseline metrics. Publish the results. Allow physicians to self-select use cases, especially in the first 60 days. Invest in ED-specific training rather than generic vendor demos. Reinforce wins publicly and address concerns directly.
Q: What metrics should I track during an ED AI rollout?
A: Track three categories. Operational: documentation time, time-to-disposition, throughput, alert volume. Clinical: note quality, accuracy of AI output, override rate, miss rate. Human: burnout, satisfaction, retention, training completion. Establish baseline before pilot, then measure at 30, 60, and 90 days post-launch.
Medical Disclaimer
This content is intended for licensed medical professionals, EMS personnel, and trained emergency responders. It does not constitute personalized medical advice. Clinical and operational protocols referenced are for educational purposes and should be adapted to your jurisdiction's scope of practice, institutional policy, and applicable medical direction.
Continue Your Training
Ready to put this into practice?
If you're an emergency physician trying to understand how AI will actually impact your clinical practice — not just the hype — I put together a short, practical guide. You can download the AI in EM Survival Guide here.
Dr. Shermer's structured training programs are built on the same operational and leadership principles covered in this article — designed for emergency physicians and ED leaders who need to make AI deployments work in the real world.
Browse All Courses at Global MedOps Command
Relevant Reading for This Topic:
- How to Avoid Becoming an AI Casualty — Dr. Shermer's guide to navigating AI tools in clinical and operational settings without compromising judgment or patient outcomes.
- Emergency Department Efficiency Playbook — Practical systems for throughput, triage optimization, and operational efficiency drawn from 25+ years in high-volume emergency departments.
Related Reading on Global MedOps Command:
- Physician Leadership Through the AI Transition: The Emergency Medicine Leader's Guide
- AI Workflow Integration in the Emergency Department: The Operational Playbook
- How to Evaluate an AI Tool Before Using It in Your Emergency Department
- How to Implement an AI Ambient Scribe in Your Emergency Department
- AI Literacy in Emergency Medicine: What Every EM Physician Needs to Know
Connect with Dr. Shermer: LinkedIn — Chester "Chet" Shermer, MD, FACEP
References
- The Fox Group. Physician AI Adoption Is Rising: What to Do Next. April 2026. foxgrp.com
- Strativera. Strategy and Implementation Frameworks for Healthcare AI Transformation. October 2025. strativera.com
- JMIR Human Factors. Barriers to and Facilitators of Artificial Intelligence Adoption in Healthcare: Scoping Review. 2024. humanfactors.jmir.org
- Interreg Baltic Sea Region. The Importance of Change Management in Introducing AI-Based Diagnostics: CAIDX Implementation Guide. February 2025. interreg-baltic.eu
- And Change. Change Readiness in Healthcare's Digital Transformation. September 2025. andchange.com
- American Medical Association. How AI is Helping Improve Stroke Outcomes at Henry Ford Health. August 2025. ama-assn.org
- American Medical Association. AMA Digital Health Implementation Playbook Series. 2025. ama-assn.org
- ICTworks. Overcoming AI Resistance in Digital Health. June 2025. ictworks.org
Published by Global MedOps Command | globalmedopscommand.com "Prepared for Every Emergency. Educated for Every Challenge."
Author and expertise
Chester "Chet" Shermer, MD, FACEP
Founder, Global MedOps Command
Dr. Chet Shermer leads Global MedOps Command to help emergency physicians, EMS teams, and operational medical leaders strengthen clinical judgment, adopt AI responsibly, and train for high-stakes decisions.
Through courses, simulation platforms, books, and practical resources, he translates frontline emergency medicine, transport, and military leadership experience into tools clinicians can use immediately.
This article is published through Global MedOps Command to help emergency clinicians evaluate AI, workflow, and operational decisions with a physician-led perspective.
Clinical application depth
Evidence-aware AI adoption still depends on clinician judgment, local validation, and operational context.
Even when a topic looks persuasive on first read, the practical work begins when physicians translate it into local policy, escalation thresholds, training expectations, and failure-mode review. That is where credibility is gained or lost.