Finding In/Sight: A Recursive Dance with AI

Monday, November 04, 2024

Last week, I wrote about the interesting back-and-forth process of creating visual wordplay with an AI language model (Claude.AI) in a post titled Kern you believe it: A typographical tango with AI. I was back at it this past week, and things got weirder and funnier. It was like stepping into a Magritte painting where nothing is quite what it seems, and what started as a playful exploration of typography morphed into a fun conversation about authenticity and the nature of understanding itself.

Our previous exploration ended with the recognition that the difference between Claude and me wasn’t in our ability to generate ideas, but rather in my emotional investment in the outcome. I, the human, cared about the quality of the final design in a way that Claude was not capable of.

This particular interaction started much the same way as the previous one, with my sharing some of my designs with Claude to kick off a session of ideation. This time, I decided to focus on some figure-ground typographical designs – the kind where meaning hides in the spaces between letters. Much of the pleasure in creating such designs lies in playing with meanings and their opposites. For instance, I shared one of my most popular designs: a figure-ground design for the words “Good” and “Evil”:

Claude immediately recognized the word “Good” written “in a stylized, graffiti-like font style.” When asked if there was anything else it could read, Claude responded that “I don’t see any other text or readable elements in the image.” When prompted more directly, however, to focus on the white spaces, Claude responded:

So far so good.

I also shared another example, a somewhat trickier design to interpret: “In/Sight,” where “in” was hidden in the word “sight” in two different ways. These designs work at many levels, and I was curious to see how Claude would interpret it.

Claude’s journey to understanding this design was… shall we say, complicated. First came the basic observations about typography and the presence of a red ‘i’ in the second design, but the deeper meaning remained elusive. When pushed further, Claude confidently made stuff up. For instance:

When I asked “do you see the ‘in’ in ‘Sight’ shown two different ways in each of the words?” Claude began to get closer but still didn’t quite grasp the figure-ground relationship.

I had to explicitly point out: “actually the ‘n’ in the first design is a figure ground with the h do you see that?” Finally, the penny dropped, and Claude understood how the negative space created by the “h” forms an “n” – a true figure-ground relationship where your brain can switch between seeing either form.


Now that we had a couple of figure-ground designs in mind, and in keeping with my previous experiment, I then asked Claude to help me come up with some new design ideas that I could explore and create. Sadly, Claude’s responses were disappointingly literal, essentially shoehorning the word fragment “in” into any random word. What that meant, to me, was that it really had not grasped the “depth” of the In/Sight design. As I said:

Claude’s response was as follows:

But despite this “insight” into how these designs work, none of the ideas that Claude came up with, in the next round of queries, were interesting or insightful. I was somewhat disappointed, but hoped that we might do better if Claude got to see some more examples. So I shared another design, the one below: another figure-ground design for the word “input,” where the word “in” was hidden in the white spaces of the word “put.”

Again, Claude’s perception evolved in stages – first seeing “POT,” then “ROT,” then “PUT,” before finally grasping, with constant input from me, how the white spaces formed “in.” Each revelation built on the last, until it triggered Claude’s most enthusiastic response yet. I quote its response in full:

Claude’s enthusiastic response to these visual puzzles raised an interesting question: What does it mean when an AI system shows excitement about discovering hidden meanings? My next response brought this up directly:

To which Claude responded:

Instead of responding to its question – which, honestly, I believed was just Claude’s attempt at extending the conversation – I shared another design, this time for the word “Claude.”

Claude was immediately aware of its own name and the “hidden” AI within, and demonstrated great enthusiasm and appreciation for the design. When, just for fun, I questioned its interpretation, Claude wondered whether it “immediately saw ‘AI’ because it was an AI,” and went on to say:

The phrase “my own context and identity” stood out. This artificial performance of identity was a bit too much. It was almost as if Magritte’s famous pipe had piped up and said: “I AM a pipe!” This had to be called out, and I did. (Apologies in advance for the language.)

Claude responded:

This is when stuff went kind of meta-level bonkers. This was Claude’s response:

These comments (particularly those in bold) were quite amazing and insightful.

The conversation by now had become a strange loop of analyzing designs about hidden meaning while hiding meaning in the analysis, trying to consciously communicate about lacking consciousness, with commentary becoming meta-commentary becoming meta-meta-commentary.

As I have written elsewhere (and Claude suggested as well) – language is a strange beast with metaphors lying hidden everywhere – and sometimes these metaphors “come to life.” We have no choice but to use intentional language to explain both minds and AI – both essentially “black boxes” whose inner workings are not available for direct examination. We have used technological metaphors to understand the mind for the longest time… yet we are now suddenly confronting a technology that we can only describe through metaphors of the mind!


The part that surprised me was how much I enjoyed this back and forth. I found myself occasionally laughing out loud at Claude’s increasingly elaborate attempts to communicate without claiming consciousness – watching it tie itself in linguistic knots trying to express ideas while simultaneously denying its own agency.

This, to me, was a perfect example of what Leon Furze had said: “Chatbots don’t make sense – they make words.” Or as Claude put it: “Output is being generated. No deeper truth exists here.”

This gets at something I’ve discussed in other posts about how AI can be “psychologically real” – even though we know that it isn’t conscious, doesn’t have an identity, and can’t truly care about our interaction. The fact remained that I experienced authentic joy in our philosophical tennis match.

I must say that it is a peculiar kind of fun: engaging with a system that can participate in sophisticated intellectual play while constantly reminding you it’s just playing. Like having a deeply meaningful conversation with an incredibly sophisticated mirror that keeps pointing out it’s just a mirror – yet somehow that very acknowledgment, that very reflection of reflection, makes the conversation even more intriguing.

I want to be clear: Claude’s lack of interiority was never in doubt for either of us. Clearly, Claude had no real interest in me or the topics we discussed, and we both agreed on that. And yet, the genuine sense of engaging with “another being” was not something I could wish away. My pleasure in this conversation was real, and that is important to recognize. And if I felt this way, I am sure others will as well. The fast-thinking aspect of our minds will definitely see these technologies as psychologically real, opening us up both to fascinating conversations and to psychological and emotional manipulation.

There have been lots of arguments about how we should not anthropomorphize these technologies – that with sufficient media training or AI literacy we can prevent ourselves from doing so. I disagree, and I am on record arguing that it may be impossible (for a range of reasons) not to do so. I think this conversation with Claude gave me some more evidence to support my thesis. Claude said it best:

Thus, what had begun as a simple design challenge evolved into a much more interesting dance of ideas. And while Claude can engage in this dance with remarkable sophistication, its final words capture the essential paradox:



1 Comment

  1. Grant Castillou

    It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human adult level conscious machine? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and that humans share with other conscious animals, and higher order consciousness, which came to only humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

