What is the relationship between AI and human creativity?
Will AI supercharge human innovation, amplifying our ability to discover and invent? Or will it replace human ingenuity altogether? Or are we entering a hybrid future where humans and AI combine in unexpected ways (something I have experimented with and written about here and here)?
Generative AI is a pretty new technology, so any claims made today need to be taken with a grain of salt. That said, a recent study on scientific discovery provides some surprising insights that can inform our thinking. More importantly, the findings of this study may have significant implications for education as we consider how AI can be incorporated into teaching and learning.
In this study (Artificial Intelligence, Scientific Discovery, and Product Innovation), Aidan Toner-Rodgers examined the impact of an AI materials discovery tool—designed to identify novel chemical compounds—deployed across a major U.S. company’s research lab of over 1,000 scientists. The results demonstrated remarkable productivity gains: scientists identified 44% more new materials, filed 39% more patents, and increased new product development by 17%. Most notably, the AI automated up to 57% of the creative work involved in materials discovery, while producing more innovative and unique materials compared to the lab’s pre-AI discoveries. All great news. Right?
However, a deeper dive into the data reveals a more complicated picture – namely that these benefits weren’t shared equally. The top scientists saw their output nearly double while the bottom third barely improved at all. What the top scientists excelled at was identifying promising AI suggestions, while less experienced scientists often wasted time testing dead ends. In other words, the good scientists became better, increasing the gap between them and their less experienced peers.
A parallel shift was equally significant: the scientists’ role changed from coming up with new ideas to evaluating suggestions generated by AI.
This is all very interesting, but the implications for education run deeper than they might first appear. While the study focused on scientific discovery rather than learning, its findings illuminate something crucial about expertise and AI: how our existing knowledge shapes our ability to work creatively and productively with these new tools. The data revealed a stark pattern: scientists who already possessed strong expertise became even more effective, while others saw more modest gains.
For anybody who cares about educational equity, this widening performance gap should set off alarm bells. We’re witnessing education’s oldest story—the Matthew Effect, where the rich get richer—playing out in new and potentially more concerning ways with AI.
The Matthew Effect is a term coined by sociologist Robert Merton in 1968, based on a verse from the Gospel of Matthew (25:29):
For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath.
In other words, the rich get richer, and the poor get poorer.
Educators are quite aware of this pernicious effect, since it shapes education in countless ways, creating cycles where early advantages compound over time. The classic example comes from Keith Stanovich’s 1986 research on reading, where he showed how good readers get exponentially better: they read more, build vocabulary faster, and understand more, leading them to read even more. Similar patterns emerge in mathematics, where studies show that kindergarten math skills predict not just later math achievement, but also reading ability and high school graduation rates. Clearly, this compounding cycle of advantage can have real consequences.
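To make the compounding concrete, here is a toy simulation (a sketch with invented numbers; the starting skill levels and the 50% proportional gain are illustrative assumptions, not figures from Stanovich’s research or the MIT study). When each period’s gain is proportional to the skill a learner already has, even a modest head start balloons into a large absolute gap.

```python
# Toy model of the Matthew Effect: each period's growth is proportional
# to the skill a learner already has. All numbers are invented for
# illustration; they come from no study cited in this post.

def grow(skill, periods, gain=0.5):
    """Return skill over time when gains compound on existing skill."""
    history = [round(skill, 2)]
    for _ in range(periods):
        skill += gain * skill  # those who have more, gain more
        history.append(round(skill, 2))
    return history

strong_start = grow(skill=1.2, periods=5)  # slight early advantage
weak_start = grow(skill=0.8, periods=5)    # slight early disadvantage

print("strong start:", strong_start)
print("weak start:  ", weak_start)
print("absolute gap:", [round(a - b, 2) for a, b in zip(strong_start, weak_start)])
# The initial gap of 0.4 grows to roughly 3.0 after five periods,
# even though both learners improve at the same proportional rate.
```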
Here’s the crucial finding from MIT that connects these pieces. Success with AI wasn’t about technical skill. It was about expertise and judgment. We’re racing to integrate AI into classrooms, and it is often pitched as a great equalizer. As Andrej Karpathy, formerly of OpenAI and Tesla, argues, with AI ‘it will be easy for anyone to learn anything, expanding education in both reach and extent.’ The visionary Sal Khan of Khan Academy proclaims, ‘We’re at the cusp of using AI for probably the biggest positive transformation that education has ever seen. And the way we’re going to do that is by giving every student on the planet an artificially intelligent but amazing personal tutor.’ Steve Pedian of Bitcoin envisions ‘… a world where every student has access to a personal tutor, where learning is tailored to individual needs, and where quality education is no longer a privilege but a universal right. This isn’t a distant dream—it’s the promise of AI in education.’ Asyia Kazmi of the Gates Foundation suggests AI can address our ‘dramatic learning equity gaps’; in her words, we’re ‘not just revolutionising education – we’re democratising it on a global scale!’
But will AI truly be the great equalizer these leaders envision? The evidence from this study suggests that it may not.
The very learners who need the most support may get the least benefit from AI tools. Just as we saw with the scientists, students who already grasp core concepts will likely use AI to accelerate their learning, ask deeper questions, and push their understanding further—while those struggling with basics may find themselves falling even further behind. This creates a troubling paradox: students need expertise to use AI effectively, but they’re supposed to be using AI to develop that expertise. It’s a chicken-and-egg problem with potentially serious consequences for learning.
AI will help those who already have some knowledge of the domain more than those who don’t. And in a world where some students come in with built-in advantages and privileges, AI will only perpetuate and entrench those advantages further. Moreover, as I have argued in a post on sycophancy, conversations with generative AI (for a range of reasons having to do with their variability and their being programmed to please) can suffer from conversational drift – where the AI goes down a wrong path just to appear affable and helpful. A student with a weak understanding of the domain (which is, by definition, what a learner is) will not have the requisite foundational knowledge to judge the outputs of AI.
Thus students who know a bit more about the subject will learn more and faster, further separating them from those with weaker foundations. And this gap will only grow.
For now, at least, the message is clear: AI doesn’t reduce the importance of building strong foundational knowledge. It amplifies it.
Note: Interestingly, the study revealed another concerning finding: 82% of scientists reported decreased job satisfaction despite increased productivity. They felt their expertise was underutilized and their work became less creative. What this means for student engagement and learning deserves its own discussion – stay tuned for “AI’nt Fun: Why Smarter Tools Might Make Learning Duller.”