The Great Deskilling Dilemma
The rapid adoption of AI tools is changing both medical practice and education. Dr Amiirah Nuckchady calls for a pedagogical revolution in response.

If someone had told me when I first qualified in 2005 that AI would change the face of clinical practice as we know it, I would have laughed and said, “Don’t worry about the robots, they’ll never replace doctors!”
Whilst that hasn’t quite happened yet, no one can deny that clinical practice is changing rapidly and that AI tools, including generative AI, are having a profound impact on both medical practice and education.
The explosive growth in the use of generative AI (e.g. chat models such as Google Gemini 2.5, o3 or Claude 4, which are distinct from traditional hard-coded systems like NHS Decision Support Tools) has left the medical profession facing a troubling paradox. The medical regulator, the General Medical Council (GMC), states that doctors must exercise professional judgment and work within their competence: a fundamental principle of medical practice. Yet this judgment isn't innate; it develops through countless patient encounters, with expertise honed through pattern recognition.
However, if AI increasingly handles initial clinical interpretations, we risk creating a generation of practitioners who lack the very experience needed to supervise these tools effectively.
This is what I’m calling “the great deskilling dilemma”.
This phenomenon is not unique to medicine; it extends throughout higher education. How can professionals adequately oversee AI systems if overreliance on those very tools has prevented them from developing the expertise to recognise when something's wrong?
The deskilling dilemma represents a vicious cycle. Students naturally gravitate towards efficiency; why struggle through complex problem-solving when AI can provide instant answers? But each bypassed struggle is a missed opportunity for cognitive development. The very critical analysis that AI forestalls is what builds expertise.
And it is that same hard-won expertise that underpins the professional judgment needed to review AI output safely.
In medical education, if AI pre-diagnoses patient cases, students miss the critical thinking process of clinical reasoning. They never experience the uncertainty, the methodical elimination of possibilities, or the "aha" moments when subtle patterns suddenly make sense. Similarly, in humanities programmes, students using AI to analyse texts (let alone write essays!) forfeit the deep reading skills that come from grappling with ambiguity and constructing meaning independently.
So, what, if anything, can we do about this? For me, the solution lies not in avoiding AI but in fundamentally restructuring how we integrate it into higher education. The goal must be developing students who can critically appraise AI and reflect on outputs rather than passively accept them.
The deskilling dilemma demands a pedagogical revolution. I think that's no exaggeration. We must teach students to view AI as a sophisticated tool requiring expert supervision, not an oracle delivering absolute truth. This means preserving the struggles that build expertise, while thoughtfully integrating AI to enhance, not replace, human judgment.
The stakes are high. Professional judgment, whether medical, educational, or otherwise, requires more than accessing correct answers. It demands the deep pattern recognition, intuitive grasp, and critical thinking that emerge only through repeated, effortful engagement with complexity. As AI capabilities expand, our commitment to developing human capacities must intensify proportionally. Only then can the next generation of professionals have a chance of possessing the expertise necessary to supervise the tools designed to assist them.
Dr Amiirah Nuckchady
Senior Lecturer in Clinical and Professional Skills and Queen Mary Academy Fellow
https://www.qmul.ac.uk/ihse/staff/institute-of-health-sciences-education/amirahnuckchady.html