Building AI Fluency as a Professional Survival Skill

Mar 12, 2026 · By Chester Shermer


There is a version of the future where AI tools are simply embedded in clinical practice and physicians use them the way they currently use ultrasound — competently, routinely, without deep engagement with the underlying technology. That future is coming, and it's probably 5 to 10 years out.

Between now and then, there is a window where the physicians who invest in genuine AI fluency — not just passive familiarity, but functional understanding and critical engagement — will establish professional advantages that compound over the next decade. This article is about how to use that window.

WHAT AI FLUENCY ACTUALLY MEANS FOR A CLINICIAN

AI fluency for a physician is not the ability to code machine learning models or explain transformer architecture. It is the ability to critically evaluate AI tools in clinical context — to understand what kind of algorithm is being used, what it was trained on, what its validated performance characteristics are, and where it is likely to fail in your patient population.

It means being able to ask the right questions of vendors, informatics teams, and system administrators: What was the training dataset? What is the sensitivity and specificity in a population like mine? What is the false negative rate for the diagnoses I most need to catch? What happens to the algorithm's performance when my patient falls outside its training distribution?
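The metrics in those questions all fall out of the same 2x2 confusion matrix. As a minimal sketch, with entirely hypothetical counts for illustration:

```python
def diagnostic_metrics(tp: int, fn: int, fp: int, tn: int) -> dict:
    """Compute basic test-performance metrics from confusion-matrix counts.

    tp: true positives, fn: false negatives,
    fp: false positives, tn: true negatives.
    """
    sensitivity = tp / (tp + fn)          # fraction of true cases detected
    specificity = tn / (tn + fp)          # fraction of non-cases correctly cleared
    false_negative_rate = fn / (tp + fn)  # 1 - sensitivity: the misses you most fear
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "false_negative_rate": false_negative_rate,
    }

# Hypothetical validation numbers: 100 true cases, 900 non-cases.
m = diagnostic_metrics(tp=80, fn=20, fp=30, tn=870)
# sensitivity = 0.80, specificity ~ 0.967, false negative rate = 0.20
```

A vendor quoting "97% accuracy" from numbers like these is describing a tool that still misses one in five true cases; asking for the matrix itself, not the summary, is the point.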

It means having a mental model of AI failure modes — not the exotic computer science failures, but the clinical ones. Anchoring to algorithm output. Over-reliance on automated documentation. Failure to recognize when a patient's presentation is genuinely outside the algorithm's training experience. These are the errors that will generate adverse outcomes and litigation in the AI era.

THE LEARNING CURVE IS FRONT-LOADED — INVEST NOW

The physicians who will be most capable in the AI-augmented clinical environment of 2030 are the ones building foundational knowledge now, before the tools are ubiquitous and the learning happens reactively. The learning curve for AI fluency is front-loaded: the foundational concepts that give you critical purchase on these tools take time to develop, but once internalized, they apply across the rapidly evolving technology landscape.

This means engaging with AI medical literature deliberately. Not just reading abstracts of AI studies, but understanding their methodology well enough to assess their validity. The AUC of an algorithm tells you almost nothing useful without understanding the cohort it was validated on and the prevalence of the outcome in that cohort. Physicians who can read an AI clinical validation study with the same critical facility they bring to an RCT are positioned to evaluate these tools appropriately when they arrive in their department.
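The prevalence point can be made concrete with Bayes' rule. Holding sensitivity and specificity fixed at 90% each (hypothetical values), the positive predictive value collapses as the outcome becomes rare:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' rule:
    P(disease | positive test) given test characteristics and prevalence."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Same algorithm, same sensitivity/specificity, different populations:
print(ppv(0.90, 0.90, 0.50))  # validation cohort enriched to 50% prevalence: PPV = 0.90
print(ppv(0.90, 0.90, 0.01))  # real-world 1% prevalence: PPV ~ 0.083
```

The algorithm did not change between the two lines; the cohort did. An impressive AUC from an enriched validation cohort can translate to a tool where more than nine in ten alerts are false positives in your department.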

The physician who waits to engage with AI until it lands on their computer screen will always be behind the technology. Build the conceptual foundation now, and the tools become easier to evaluate as they arrive.

TEACHING INSTITUTIONS AND THE AI GENERATION GAP

If you work in an academic environment, the AI competency gap between faculty and residents is already emerging. Medical students and residents training now have grown up with AI as a background feature of daily life. Many of them have intuitive facility with AI tools that exceeds their attendings'. The attendings who take this seriously will engage with that dynamic constructively — learning from trainees where appropriate, while providing the clinical judgment and professional responsibility framework that trainees need.

The attendings who don't engage will find themselves teaching AI-fluent residents with an authority gap in an increasingly AI-centric clinical environment. That is a professional development risk worth taking seriously.

LEADERSHIP POSITIONING IN THE AI TRANSITION

Every health system, EMS agency, and military medical unit is currently navigating AI adoption with varying degrees of intentionality. The physicians and medical directors who develop AI expertise are being pulled into governance, policy, and implementation leadership roles. This is where the professional leverage is highest — the ability to shape how AI tools are selected, validated, deployed, and monitored, rather than simply using what administrators and IT departments have decided to purchase.

Emergency medicine, EMS, and military medicine are all practice environments where the operational stakes of AI deployment are highest and the clinical judgment requirements are most demanding. The practitioners in these fields who engage seriously with AI now are positioned to be the voices that matter most in how it gets implemented.

The alternative is to be a passive recipient of technology decisions made by people without your clinical context. In a field where those decisions directly affect your patients and your professional liability, that is not an acceptable posture.

A PRACTICAL STARTING FRAMEWORK

Start with the AI tools currently deployed in your practice environment. Learn them at a level of depth that most of your colleagues haven't reached — not just how to use them, but how they work and where they've been validated. That alone will distinguish you.

Engage with the literature. The Journal of the American College of Emergency Physicians, Annals of Emergency Medicine, and Prehospital Emergency Care all carry AI-related content. The Lancet Digital Health and npj Digital Medicine are worth following for broader AI-in-medicine coverage. Read critically, not credulously.

Build relationships with your informatics team and your institution's clinical AI governance structure. If those structures don't exist, you're positioned to initiate that conversation.

The physician who leads their department's AI competency development, who sits on the AI governance committee, who trains residents in appropriate AI use — that physician is not being displaced by artificial intelligence. That physician is shaping what artificial intelligence means for their specialty.

That is the professional position worth building toward.

Dr. Chet's Take:

I've spent 25 years in emergency medicine watching technology transform practice—ultrasound, cardiac monitoring, telemedicine—and the pattern is always the same: the physicians who master the tool own the clinical space. AI is no different, except the window to establish that mastery is genuinely finite. I'm investing in AI fluency now not because I'm worried about being displaced, but because I'm running telehealth across rural Mississippi and directing HEMS operations where the decisions are consequential and fast. The attendings who understand what these tools can and cannot do will write the protocols; the ones who don't will follow them. That's the real professional leverage, and it's not being handed to you—you have to build it.