I’ve had many conversations recently with colleagues about what happens when we integrate GenAI into our daily work. What effects does it have on our cognition? What do we gain and what do we lose in this process? Does using Claude or ChatGPT to help with writing atrophy certain parts of our brain – the parts having to do with crafting an argument, finding the right structure and words to convey a point, and, perhaps most importantly, thinking deeply about what we want to say?
This is not a new debate. Socrates worried that writing would weaken memory and independent thought – and there’s some truth there. Similarly, numerous traditional skills and crafts have diminished or disappeared with the advent of digital technologies.
The standard counter-argument is that technology frees us from routine tasks to focus on higher-order thinking. Why memorize phone numbers when they’re always at hand? Having a calculator doesn’t mean we don’t need to understand mathematical concepts – it just means we can focus on understanding rather than computation.
Similar arguments are now being made about GenAI: that it will automate routine tasks, letting us focus on higher-level thinking. But this raises deeper questions. Given these technologies’ vast knowledge base and increasing ability to “reason,” is there still value in human expertise? Will experts need to expend less cognitive effort as AI takes over the “drudge work”?
As we’ll see, a new Microsoft Research paper (The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers) reveals something quite different and counterintuitive.
The more expertise you have, the more cognitive effort you’ll spend when working with AI. Even more intriguingly, this extra effort seems to be exactly what makes expertise more crucial, not less, in the age of AI.
The researchers reached this conclusion through a study of 319 knowledge workers – ranging from computer programmers to business analysts, artists to educators – who use AI tools at least weekly in their jobs. This diverse sample of professions gives us a broad view of how AI is affecting different types of professional work.
Another striking finding?
The more confident people are in AI’s abilities, the less critically they think about its outputs. Yet those who are confident in their own expertise think more critically – even though they find it more effortful.
The study revealed other significant shifts in how work gets done with AI: workers transition from gathering information to verifying it, from solving problems directly to integrating AI responses, and from executing tasks to stewarding them. Interestingly, people skip critical thinking for tasks deemed unimportant. The result is a complex dynamic: high task confidence enables effective AI delegation and stewardship, while lower self-confidence risks over-reliance on AI and reduced critical engagement.
Let’s focus on two key findings: experts putting in more cognitive effort, not less, when using GenAI, and the inverse relationship between AI confidence and critical thinking.
For those familiar with expertise research, these findings both confirm and challenge what we know. For instance, my colleagues at the Learning Engineering Institute recently published an article (Imundo et al., 2024: Expert thinking with generative chatbots) that explores some of the same themes. In the table below I have summarized some of the key ideas we have learned from cognitive research on expertise (column 1), what those ideas suggest should happen when experts work with AI (column 2), and what the Microsoft researchers actually found (column 3). The patterns are revealing – in some ways confirming decades of expertise research, while in others showing us something new about how expertise adapts to AI tools.
Expertise Research and Working with GenAI: Predictions vs Findings
| What Expertise Research Tells Us | Predicted Impact on Working with AI | Microsoft Study Findings |
| --- | --- | --- |
| Multiple Theoretical Models: Experts maintain and switch between competing explanatory frameworks | Should help experts detect when AI blends incompatible frameworks or makes unjustified theoretical leaps | Strongly supported – those with higher domain expertise were better at detecting framework inconsistencies in AI outputs |
| Schema Organization: Experts have deep knowledge organized in structured patterns | Should enable quick detection of violations in AI outputs, even when subtly flawed | Supported – but the increased cognitive effort reported suggests this isn’t as automatic as with traditional expertise tasks |
| Forward Reasoning: Experts work from principles to solutions rather than backward from goals | Should lead to better prompt construction and output evaluation based on principles | Partially supported – experts did engage more critically but found it surprisingly effortful |
| Pattern Recognition: Experts see meaningful domain-specific chunks | Should help efficiently separate AI-suitable tasks from those needing expert judgment | Mixed evidence – even experts struggled with consistent task-allocation decisions |
| Domain Boundaries: Experts understand the limits of their knowledge | Should help recognize when AI crosses into questionable territory | Strongly supported – higher domain expertise correlated with better detection of AI overreach |
| Automatic Processing: Experts perform basic operations without conscious thought | Should free up cognitive resources for strategic evaluation | Challenge to traditional theory – AI verification seems to interfere with automatic processing, requiring more conscious effort |
These findings reveal both expected and surprising patterns in how expertise shapes AI use. In some respects, the expertise research of the past decades predicted exactly what we’re seeing: experts’ ability to maintain multiple theoretical frameworks helps them spot when AI makes invalid conceptual leaps, just as it helps them evaluate any other analytical tool. Their organized knowledge structures still serve them well in recognizing when AI outputs violate domain principles, even when these violations are subtly embedded in plausible-sounding text.
The most significant revelation, however, is that while experts’ pattern recognition and domain boundary awareness help them navigate AI use, this navigation is surprisingly effortful. Traditional expertise research suggests that as skills become automatic, cognitive load decreases. Yet with AI, even experts report increased cognitive effort. This points to something new: the need to maintain dual awareness – of both domain principles and AI capabilities.
This leads us to a crucial insight: The real challenge isn’t just about expertise – it’s about the interaction between two different types of knowledge. Someone might be an expert in their field but not understand AI’s tendencies and limitations, or they might understand AI but lack the domain knowledge to properly evaluate its outputs. In today’s world, these can no longer be separate concerns. Just as expertise has always included understanding one’s epistemic tools – whether they’re statistical methods, measurement instruments, or research techniques – working with AI requires integrating it into our fundamental understanding of how knowledge work gets done.
Understanding AI Use Through Expertise: Four Scenarios
This interaction between domain expertise and AI knowledge creates four distinct scenarios that help us understand the challenges different groups face when working with AI tools:

The Novice’s Dilemma: Users with neither domain expertise nor AI knowledge are in the most vulnerable position—unable to evaluate AI’s outputs for accuracy and unaware of when and how AI might lead them astray. It is like having a very convincing friend who is often wrong, without the knowledge to spot their mistakes. The Microsoft study suggests this group is particularly prone to overconfidence in AI, precisely because they lack the frameworks to spot its limitations.
The Expert’s Advantage: A domain expert who hasn’t yet developed deep understanding of AI tools starts from a workable position. Their domain knowledge lets them evaluate AI outputs for accuracy and theoretical soundness. They need to learn AI as a new tool, but this isn’t fundamentally different from how experts have always had to learn new research methods or analytical tools. Their expertise provides a foundation for this learning. The study confirms this – these experts engage more critically with AI outputs, even though they find it effortful.
The False Confidence Trap: Having AI knowledge without domain expertise creates a dangerous situation. Such users might know enough to worry about AI’s outputs but lack the knowledge to effectively evaluate or correct them. It’s like having a generic BS detector but no way to separate truth from fiction. This matches the study’s findings about the limitations of AI knowledge alone in ensuring quality outputs.
The Dual Expertise Challenge: The ideal combination – domain expertise plus AI knowledge – enables the most reliable outcomes but demands the most cognitive effort. These experts can evaluate both content and process, maintaining awareness of both domain principles and AI’s potential failure modes. The Microsoft study’s finding about increased cognitive effort makes perfect sense in this light: these experts are doing sophisticated dual-track evaluation that less qualified users might skip entirely.
This analysis raises particularly thorny issues about AI use in education. If expertise is a key prerequisite for effective AI use, then learners – who by definition are not experts – are in a particularly vulnerable position. They typically fall into our most challenging quadrant: novices in both the domain and in understanding AI’s capabilities and limitations. The Microsoft researchers’ suggested solutions—mainly exhorting users to “think more critically” or providing instructions about careful verification—seem inadequate. This is mainly because those who most need to hear it are least likely to read or follow it, while those who do read it may already be more sophisticated users.
The Microsoft study captures a crucial moment in how expertise adapts to new tools. Just as experts once had to learn to work with statistical methods or computational models, they now face the challenge of incorporating AI into their epistemic toolkit. But AI differs fundamentally from previous tools – its ability to mimic expertise makes verification both more crucial and more cognitively demanding than ever before.
In an educational context, the increased cognitive load of verifying AI outputs, even for expert teachers already pressed for time, may undermine the efficiency argument for using these tools in education. This added responsibility, on top of an already demanding workload, challenges the notion that AI will inherently make teaching more efficient.
The paradox is clear: rather than reducing the need for expertise, AI makes experts more necessary than ever – and demands more of them by making their work more cognitively challenging.
This finding should give us pause about the current push, particularly from industry voices, to rapidly integrate AI tools into education – where learners, lacking both domain and AI expertise, are doubly vulnerable.