The GenAI and Expertise Paradox: Why It Makes Expert Work More Important But Harder

Thursday, February 13, 2025

I’ve had many conversations recently with colleagues about what happens when we integrate GenAI into our daily work. What effects does it have on our cognition? What do we gain and what do we lose in this process? Does using Claude or ChatGPT to help with writing atrophy certain parts of our brain – the parts having to do with crafting an argument, finding the right structure and words to convey your point, and perhaps most importantly, thinking deeply about what we want to say?

This is not a new debate. Socrates worried that writing would weaken memory and independent thought – and there’s some truth there. Similarly, numerous traditional skills and crafts have diminished or disappeared with the advent of digital technologies.

The standard counter-argument is that technology frees us from routine tasks to focus on higher-order thinking. Why memorize phone numbers when they’re always at hand? Having a calculator doesn’t mean we don’t need to understand mathematical concepts – it just means we can focus on understanding rather than computation.

Similar arguments are now being made about GenAI: that it will automate routine tasks, letting us focus on higher-level thinking. But this raises deeper questions. Given these technologies’ vast knowledge base and increasing ability to “reason,” is there still value in human expertise? Will experts need to expend less cognitive effort as AI takes over the “drudge work”?

As we’ll see, a new Microsoft Research paper (The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers) reveals something quite different and counterintuitive.

The researchers reached this conclusion through a study of 319 knowledge workers – ranging from computer programmers to business analysts, artists to educators – who use AI tools at least weekly in their jobs. This diverse sample of professions gives us a broad view of how AI is affecting different types of professional work.

The study also revealed significant shifts in how work gets done with AI: workers transition from gathering information to verifying it, from solving problems directly to integrating AI responses, and from executing tasks to stewarding them. Interestingly, people skip critical thinking for tasks deemed unimportant. The result is a complex dynamic: high task confidence enables effective AI delegation and stewardship, while lower self-confidence risks over-reliance on AI and reduced critical engagement.

Let’s focus on two key findings: experts putting in more cognitive effort, not less, when using GenAI, and the inverse relationship between confidence in AI and critical thinking.

For those familiar with expertise research, these findings both confirm and challenge what we know. For instance, my colleagues at the Learning Engineering Institute recently published an article (Imundo et al., 2024: Expert thinking with generative chatbots) exploring some of the same themes. In the table below, I summarize key ideas we have learned from cognitive research on expertise (Column 1), what those ideas suggest should happen when experts work with AI (Column 2), and what the Microsoft researchers actually found (Column 3). The patterns are revealing – in some ways confirming decades of expertise research, while in others showing us something new about how expertise adapts to AI tools.


| | What Expertise Research Tells Us | Predicted Impact When Working with AI | Microsoft Study Findings |
|---|---|---|---|
| **Multiple Theoretical Models** | Experts maintain and switch between competing explanatory frameworks | Should help experts detect when AI blends incompatible frameworks or makes unjustified theoretical leaps | Strongly supported – those with higher domain expertise were better at detecting framework inconsistencies in AI outputs |
| **Schema Organization** | Experts have deep knowledge organized in structured patterns | Should enable quick detection of violations in AI outputs, even when subtly flawed | Supported – but the increased cognitive effort reported suggests this isn’t as automatic as with traditional expertise tasks |
| **Forward Reasoning** | Experts work from principles to solutions rather than backward from goals | Should lead to better prompt construction and output evaluation based on principles | Partially supported – experts did engage more critically but found it surprisingly effortful |
| **Pattern Recognition** | Experts see meaningful domain-specific chunks | Should help efficiently separate AI-suitable tasks from those needing expert judgment | Mixed evidence – even experts struggled with consistent task-allocation decisions |
| **Domain Boundaries** | Experts understand the limits of their knowledge | Should help recognize when AI crosses into questionable territory | Strongly supported – higher domain expertise correlated with better detection of AI overreach |
| **Automatic Processing** | Experts perform basic operations without conscious thought | Should free up cognitive resources for strategic evaluation | Challenge to traditional theory – AI verification seems to interfere with automatic processing, requiring more conscious effort |

These findings reveal both expected and surprising patterns in how expertise shapes AI use. In some respects, decades of expertise research predicted exactly what we’re seeing: experts’ ability to maintain multiple theoretical frameworks helps them spot when AI makes invalid conceptual leaps, just as it helps them evaluate any other analytical tool. Their organized knowledge structures still serve them well in recognizing when AI outputs violate domain principles, even when these violations are subtly embedded in plausible-sounding text.

The most significant revelation, however, is that while experts’ pattern recognition and domain-boundary awareness help them navigate AI use, this navigation is surprisingly effortful. Traditional expertise research suggests that as skills become automatic, cognitive load decreases. Yet with AI, even experts report increased cognitive effort. This points to something new: the need to maintain dual awareness – of both domain principles and AI capabilities.

This leads us to a crucial insight: The real challenge isn’t just about expertise – it’s about the interaction between two different types of knowledge. Someone might be an expert in their field but not understand AI’s tendencies and limitations, or they might understand AI but lack the domain knowledge to properly evaluate its outputs. In today’s world, these can no longer be separate concerns. Just as expertise has always included understanding one’s epistemic tools – whether they’re statistical methods, measurement instruments, or research techniques – working with AI requires integrating it into our fundamental understanding of how knowledge work gets done.

Understanding AI Use Through Expertise: Four Scenarios

This interaction between domain expertise and AI knowledge creates four distinct scenarios that help us understand the challenges different groups face when working with AI tools:

The Novice’s Dilemma: Users with neither domain expertise nor AI knowledge are in the most vulnerable position—unable to evaluate AI’s outputs for accuracy and unaware of when and how AI might lead them astray. It’s like having a very convincing friend who’s often wrong – without the knowledge to spot their mistakes. The Microsoft study suggests this group is particularly prone to overconfidence in AI, precisely because they lack the frameworks to spot its limitations.

The Expert’s Advantage: A domain expert who hasn’t yet developed deep understanding of AI tools starts from a workable position. Their domain knowledge lets them evaluate AI outputs for accuracy and theoretical soundness. They need to learn AI as a new tool, but this isn’t fundamentally different from how experts have always had to learn new research methods or analytical tools. Their expertise provides a foundation for this learning. The study confirms this – these experts engage more critically with AI outputs, even though they find it effortful.

The False Confidence Trap: Having AI knowledge without domain expertise creates a dangerous situation. Such users might know enough to worry about AI’s outputs but lack the knowledge to effectively evaluate or correct them. It’s like having a generic BS detector but no way to separate truth from fiction. This matches the study’s findings about the limitations of AI knowledge alone in ensuring quality outputs.

The Dual Expertise Challenge: The ideal combination – domain expertise plus AI knowledge – enables the most reliable outcomes but demands the most cognitive effort. These experts can evaluate both content and process, maintaining awareness of both domain principles and AI’s potential failure modes. The Microsoft study’s finding about increased cognitive effort makes perfect sense in this light: these experts are doing sophisticated dual-track evaluation that less qualified users might skip entirely.


This analysis raises particularly thorny issues about AI use in education. If expertise is a key prerequisite for effective AI use, then learners – who by definition are not experts – are in a particularly vulnerable position. They typically fall into our most challenging quadrant: novices in both the domain and in understanding AI’s capabilities and limitations. The Microsoft researchers’ suggested solutions—mainly exhorting users to “think more critically” or providing instructions about careful verification—seem inadequate. This is mainly because those who most need to hear it are least likely to read or follow it, while those who do read it may already be more sophisticated users.

The Microsoft study captures a crucial moment in how expertise adapts to new tools. Just as experts once had to learn to work with statistical methods or computational models, they now face the challenge of incorporating AI into their epistemic toolkit. But AI differs fundamentally from previous tools – its ability to mimic expertise makes verification both more crucial and more cognitively demanding than ever before.

In an educational context, the increased cognitive load of verifying AI outputs, even for expert teachers already pressed for time, may undermine the efficiency argument for using these tools in education. This added responsibility, on top of an already demanding workload, challenges the notion that AI will inherently make teaching more efficient.

The paradox is clear: rather than reducing the need for expertise, AI makes experts more necessary than ever – and demands more of them, by making their work more cognitively challenging.

This finding should give us pause about the current push, particularly from industry voices, to rapidly integrate AI tools into education – where learners, lacking both domain and AI expertise, are doubly vulnerable.

