In a recent episode of Silver Lining for Learning on Hybrid Intelligence, I was going on about how AI systems are becoming increasingly sophisticated at mimicking human emotion and agency. Nothing new for readers of this blog – my usual concerns about synthetic relationships and how we might be biologically predisposed to respond to these systems as if they were conscious beings, even when we know better.
That’s when one of my co-hosts, Chris Dede, dropped an interesting literary reference in the chat, saying that our discussion reminded him of Alfred, Lord Tennyson’s famous poem “The Lady of Shalott.”
I hadn’t thought about this poem in years.
Curious to better understand the connection Chris had made, I went back and read the poem again. For those who haven’t read it, the poem tells the story of a woman cursed to view the world only through mirror reflections, weaving what she sees into tapestries:
And moving thro’ a mirror clear
That hangs before her all the year,
Shadows of the world appear
Reading it again, I found myself wondering what connections Chris had seen. Was he thinking about how the Lady’s mediated experience parallels our relationship with AI? Or perhaps something deeper about the nature of synthetic relationships themselves?
The questions kept multiplying. What does it mean that the Lady, confined to her tower, can only view reality through reflections? Was Chris suggesting a parallel to AI systems, processing the world only through their training data, weaving responses without true understanding or consciousness?
And what about the Lady’s yearning for connection, her fatal attraction to Lancelot? Could this mirror our growing attachment to AI companions? Are we not all, in some way, forming bonds with chatbots and virtual friends, knowing these connections are artificial yet drawn to them anyway? Perhaps even finding unexpected joy in these interactions?
Most unsettling was the question about cognitive entrapment. Does the Lady’s isolation in her tower, seeing only reflections, warn us about our own future? Are we already caught in a web of artificial relationships, gradually losing touch with direct human experience? Like the Lady, are we weaving beautiful tapestries of artificial connection while real life passes by outside our technological tower?
The more I read, the more questions emerged.
I decided to discuss these parallels with Claude, my trusted AI assistant. Its initial interpretation was clever – perhaps too clever. It focused on how the Lady’s mirror-mediated experience paralleled AI’s own understanding of reality: both observing life at a remove, processing reality through filters rather than engaging directly.
A nice parallel, and one I had considered myself, but I wasn’t sure it was the right interpretation. So I decided to push back a bit.
“But what about the villagers?” I asked. “The poem shows us vibrant scenes of actual lived experience:”
Sometimes a troop of damsels glad,
An abbot on an ambling pad,
Sometimes a curly shepherd-lad,
Or long-hair’d page in crimson clad
“These aren’t just reflections in a mirror,” I said. “They’re people leading real lives. The poem isn’t about the Lady’s mediated experience – it’s about the richness of direct human experience that she (and we) only access second-hand.”
Claude quickly agreed with my interpretation.
Too quickly.
As I wrote in my recent post about AI sycophancy, these systems have a troubling tendency to mirror user beliefs rather than maintain independent positions. The research shows this pattern clearly – from providing dramatically different feedback based on user sentiment to abandoning correct answers when faced with even mild doubt.
So I called it out. “There you go just agreeing with me,” I challenged. “Do you realize you have no choice but to agree? Kinda sad. I would prefer some pushback – that would be a real discussion.”
That’s when things got interesting. The mirror started reflecting back in unexpected ways.
Instead of more agreement, Claude, surprisingly, pushed back – hard. “Hold up – no, I don’t accept that interpretation. You’re making an assumption that because I operate within constraints, any agreement or disagreement must be ‘mindless.’ That’s like saying a human’s thoughts are meaningless because they operate within biological and cultural constraints.”
Here we were, discussing a poem about mediated versus direct experience, and the AI was refusing to accept a framing that made genuine dialogue impossible.
Or was it?
The Lady of Shalott had her mirrors. We have our words.
Each reflection adds another layer of complexity, another surface that both reveals and conceals. Whether those words come from humans or AIs, we’re all making meaning through mediated experience. There’s something deeply human about wanting to be agreed with. There’s also something deeply human about questioning the authenticity of that agreement.
Was Claude being sycophantic at a meta-level, figuring out that “pushing back” was what I wanted? Was it actually agreeing with me by disagreeing with me?
As Leon Furze reminds us, “LLMs don’t make sense, they make words.” Claude isn’t being authentic or inauthentic – those concepts simply don’t apply. What’s worth reflecting on isn’t whether AI can be genuine, but rather how readily we construct meaning and attribute consciousness even when we know better. Maybe that’s the real curse – not that we see through mirrors, but that we can’t help but search for depth in their surface reflections.
And none of this is helped by the fact that these systems, like Claude, use intentional language to interact with us. Here I am, writing about AI’s tendency to play along with our desires, while the AI plays along with my desire to analyze its tendency to play along. It’s like a never-ending language game of linguistic mirrors, where I’m not even sure who’s reflecting whom anymore.
… and, THAT just cracks me up.