Cybersecurity Is the AI Risk Nobody in Your ED Owns


Mar 23, 2026
By Chester Shermer

In May 2025, the Interlock ransomware group hit Kettering Health—a 14-hospital system in Ohio—and shut down its Epic EHR for two weeks. Emergency departments stayed open, but staff reverted to pen and paper. Scheduled procedures were canceled. Patient data recorded manually during the outage had to be re-entered once systems came back online. The ransomware group later published stolen patient records on the dark web. In March 2026, Stryker—one of the largest medical device manufacturers in the world—disclosed a cybersecurity attack that caused a global disruption across its operations.

These are not outlier events. In 2024, 92% of healthcare organizations reported experiencing at least one cyberattack. The average cost of a healthcare data breach reached $10.9 million. Over 276 million patient records were exposed—double the previous year. And the emergency department, with its always-on connectivity, time-critical decision-making, and growing dependence on AI-driven tools, sits squarely at the center of this threat landscape.

The Attack Surface You Didn't Budget For

Every AI tool integrated into your ED—sepsis prediction models, triage algorithms, clinical decision support systems, AI-assisted documentation—represents an additional attack surface. These systems depend on data pipelines that pull from electronic health records, laboratory feeds, vital sign monitors, and imaging archives. Each connection point is a potential entry vector for an adversary.

The threat is not hypothetical. Data poisoning attacks—where an adversary injects malicious data into a model's training pipeline—can corrupt clinical decision support without leaving obvious fingerprints. Researchers have demonstrated that attackers with access to as few as 100 to 500 samples can compromise healthcare AI systems with a success rate above 60%. Adversarial manipulation of medical images, lab values, or patient data can cause AI systems to produce inaccurate conclusions while appearing to function normally. The most dangerous part: these compromises often go undetected for six to twelve months or longer.

Most emergency physicians I work with have never considered this scenario. They think about ransomware as something that happens to the hospital's billing system or IT infrastructure—not to the clinical tools they use at the bedside. That assumption is outdated. The American Hospital Association's national cybersecurity advisor, John Riggi, has stated plainly: "The bad guys, once they're in the network, may deploy ransomware, which encrypts the pathways to medical devices—potentially the medical devices themselves—denying the availability of the device for clinicians and patients. That's where the real potential risk and harm is."

When the System Goes Dark, the ED Pays First

A ransomware attack does not discriminate by department, but its impact is felt most acutely in emergency medicine. When EHR systems go offline, the ED cannot wait for a scheduled downtime window. Patients continue to arrive. Ambulances continue to roll. The clinical decisions that require access to prior imaging, medication lists, allergy histories, and trending vital signs do not pause because a server is encrypted.

Seventy percent of healthcare organizations that experienced a cyberattack in 2024 reported direct interruptions in patient care delivery. That number should concern every emergency physician and medical director in the country. When a sepsis prediction algorithm goes offline during a night shift in a community ED that has come to rely on it, the cognitive load shifts entirely back to the physician—without warning, without a transition plan, and often without the redundant systems that should have been in place from the start.

I direct a telehealth program: a telemedicine network connecting nurse practitioners at remote and rural sites to emergency physician oversight. These sites depend on continuous connectivity. If ransomware takes down the network, those NPs lose access to real-time physician consultation at exactly the moment they need it most. The patients in those rural EDs—already underserved, already farther from definitive care—bear the consequence of a cybersecurity failure that originated in a server room hundreds of miles away.

AI-Controlled Medical Devices Are the Next Frontier

The FDA now requires all new medical device submissions to include evidence of cybersecurity protections—a rule that took effect in March 2023. But the vast majority of devices currently deployed in emergency departments were approved before that requirement existed. Infusion pumps, cardiac monitors, ventilators, and imaging equipment running on outdated operating systems remain connected to hospital networks with minimal security hardening.

AI is accelerating this problem. As more devices incorporate machine learning for dose calculation, alarm management, and diagnostic interpretation, the consequences of a compromised device escalate from data loss to direct patient harm. An attacker who can alter insulin pump dosing parameters, modify pacemaker signals, or tamper with AI-interpreted imaging results can cause injury that is difficult to distinguish from device malfunction or clinical error. ECRI flagged AI as the top health technology hazard for 2025—and cybersecurity vulnerabilities were a central reason.

The military has a term for this: a single point of failure in a mission-critical system. In combat medicine, we plan for degraded operations. We train for scenarios where communications go down, where supply chains are interrupted, where the system we planned around is no longer available. Emergency medicine needs to adopt the same mindset for AI-dependent clinical workflows.

What You Should Be Doing Now

First, know what AI tools are running in your ED and who owns them. Most departments cannot produce a complete inventory of the algorithms, clinical decision support systems, and AI-enabled devices that influence patient care. If you do not know what is running, you cannot plan for what happens when it stops running. This is not an IT question—it is a clinical governance question, and the medical director should be leading it.
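Even a simple structured register beats no inventory at all. Here is a minimal sketch in Python of what one record in such a register might look like; the field names and the example tool are purely illustrative, not a standard, and any real inventory would live in your governance documentation rather than in code:

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One entry in a departmental AI-tool inventory (illustrative fields only)."""
    name: str                 # what the tool is
    clinical_owner: str       # who answers for it clinically, not just in IT
    vendor: str               # whose cybersecurity posture you inherit
    data_sources: list = field(default_factory=list)  # pipelines = attack surface
    downtime_plan: bool = False  # is a rehearsed fallback documented?

# Hypothetical example entries
inventory = [
    AIToolRecord(
        name="Sepsis prediction model",
        clinical_owner="ED medical director",
        vendor="ExampleVendor",
        data_sources=["EHR vitals feed", "lab results interface"],
        downtime_plan=False,
    ),
]

# The payoff: the register immediately surfaces governance gaps.
gaps = [t.name for t in inventory if not t.downtime_plan]
print(gaps)
```

A spreadsheet with the same columns serves the same purpose; the point is that each tool has a named clinical owner, an enumerated set of data connections, and an explicit yes/no on whether a downtime plan exists.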

Second, build and rehearse downtime protocols specifically for AI system failures. Your department almost certainly has downtime procedures for EHR outages. But do you have a specific plan for when the sepsis prediction model goes offline? When the AI triage tool stops returning scores? When the clinical documentation assistant is unavailable during a high-volume shift? These are not the same as a general EHR downtime, and treating them as such will leave gaps.

Third, demand transparency from your AI vendors about their cybersecurity posture. Ask about data encryption in transit and at rest, penetration testing frequency, incident response timelines, and what happens to the model's outputs if the data pipeline is compromised. If a vendor cannot answer these questions clearly, that should factor into your procurement decision. The Change Healthcare breach in February 2024 disrupted operations at 94% of hospitals that relied on its services. Vendor risk is your risk.

Fourth, push for cybersecurity to be treated as a patient safety issue at the institutional level—not an IT budget line item. The American Hospital Association has been driving this message for years, and it has never been more relevant. Every minute of ransomware downtime costs an estimated $9,000. But the real cost is measured in diverted ambulances, delayed treatments, and medication errors that multiply when clinicians lose access to the systems they rely on.

Dr. Chet's Take

I have spent my career in environments where system failures are not theoretical—they are operational realities you plan for. In military medicine, you assume your communications will be degraded. In air medical operations, you assume weather will ground the helicopter. In telehealth, you assume the network connection to a rural site will drop at the worst possible moment. You build contingencies. You train to them. You do not wait for the failure to discover that nobody owns the response plan.

Emergency medicine is adopting AI tools at a pace that outstrips our institutional readiness to protect them—and to function without them. That is not an argument against AI adoption. It is an argument for disciplined implementation with cybersecurity woven into the governance framework from the start, not bolted on after a breach forces the conversation. As a State Surgeon for the Army National Guard and medical director for programs that depend on continuous connectivity, I can tell you: the departments that will weather this storm are the ones treating cybersecurity as a clinical readiness issue, not a technology problem someone else owns.

— Dr. Chester "Chet" Shermer, MD, FACEP, is a Professor of Emergency Medicine, Medical Director for Air Medical and Critical Care Transport programs, and a military medical commander with the Army National Guard. He is the founder of Global MedOps Command and the creator of AI in Emergency Medicine: Becoming AI Bulletproof.

AI Won't Wait. Neither Should You.

Cybersecurity is one of more than a dozen AI risk domains that emergency physicians need to understand—not at the IT level, but at the clinical governance level where decisions about patient care are made. If you are responsible for AI oversight in your department, or if you simply want to stop being blindsided by risks you did not know existed, consider enrolling in my course: AI in Emergency Medicine: Becoming AI Bulletproof.

Learn more at Global MedOps Command. My books on emergency department operations and AI preparedness are available at Gumroad and Kajabi.