Confabulators, Persuaders, and Imagination Catalysts: An Expert Perspective on the Chatbot Phenomenon

In all my years as a researcher, I’ve never had so many friends and family members asking me about AI, and about chatbots in particular. Even people whom I would have described as fairly uninterested in tech have shared their experiences interacting with ChatGPT, or confessed that what they’ve seen or heard leaves them fearful, intrigued, or both. Knowing my profession, they often ask me what I make of all the hoopla. This blog post is my attempt to offer some perspectives on the chatbot phenomenon.

Chatbots as Confabulators 

Imagine that a patient, in responding to a doctor’s query about what he has planned for the evening, confidently and matter-of-factly describes his plans to take his wife out on their yacht for a cruise around his private lake. Except that there is no wife, no yacht, and no lake. The patient has been engaging in what is called ‘confabulation’.

In a neuroscience/psychological context, confabulation refers to the phenomenon in which a patient offers up narratives that seem superficially plausible but are untrue, like the man in the above anecdote [1]. Confabulation isn’t quite the same thing as lying, because the speaker has no intent to deceive: they appear both dead certain about the things they say and unaware of any knowledge that would contradict their claims. Chatbots are very much like the patient above, generating coherent, plausible-sounding content with no necessary relationship to actual facts.

Confabulation frequently shows up in right-hemisphere stroke victims (but not in those with left-hemisphere strokes!), leading some neuroscientists [2] to claim that it tells us something profound about the human mind: the left hemisphere is a gifted storyteller, adept at generating plausible narratives, but only the right hemisphere can validate those narratives as empirically grounded and relevant. The similarity between chatbots and right-hemisphere stroke victims is uncanny: in both cases, fluent narrative generation carries on with the fact-checker missing.

Chatbots as Persuaders, not Truth-Seekers

A serious and ubiquitous concern about chatbots is that we can’t reliably tell when they are telling the truth and when they are just making things up. There is thus great uncertainty around whether chatbots can be trusted: do they have the capability to reason effectively and to discern truth from fiction?

I want to offer an alternative perspective on the question of reasoning: chatbots (or more precisely, their underlying large language models) are trained on a significant fraction of humanity’s writings, so chatbots are going to reflect humanity’s biases with respect to reasoning. And I submit that human reasoning isn’t what we think it is. Specifically, there is growing evidence from cognitive science that the capacity for reason “…evolved for convincing and persuading other people, winning arguments with other people, defending and justifying actions and decisions to other people” [3]. Human beings are social organisms whose success depends on effective interactions with others. We have evolved to be skilled arguers, reaching for whatever argument or position supports our views, regardless of its truth value (this, by the way, explains why confirmation bias is so common in human discourse).

Argumentation, persuasion, and justification are completely orthogonal to truth-seeking; from an evolutionary standpoint, a person can still benefit from convincing others that a claim is true even when it is false. Chatbots are the inheritors of this deeply human heritage.

Chatbots and ‘Skin in the Game’

Interactions with chatbots are not true conversations, for the simple reason that chatbots have no skin in the game. As Taleb puts it, skin in the game “…is about symmetry, more like having a share in the harm, paying a penalty if something goes wrong” [4]. Because chatbots are fundamentally insulated from the consequences of their outputs, I maintain that we’re making a category error when we treat our interactions with them as true conversations.

Support for this position comes from linguistics research aimed at characterizing the nature of human conversation [5]. The key concept here is “intersubjectivity,” defined as an “…activity in which two or more persons (or agents) participate, and in which participants are socially accountable for their participation” [6]. When a person interacts with a chatbot, there is no true subject on the other side of the interaction, because the “tyranny of accountability” is absent: there is no possibility of praise or blame, no being called to account, and so on.

One could argue that a chatbot interaction is really a conversation with humanity as a whole (given how chatbots are trained), but that abstraction doesn’t change the fact that there’s nobody home on the other side: a chatbot cannot assume moral agency, and indeed has no grasp of what that concept entails or of what it means for a human to have a stake in anything. What takes place during a chatbot interaction is therefore best described as “performative intersubjectivity,” and it shouldn’t be mistaken for the real thing.

Chatbots and The Matthew Effect

The “Matthew Effect” is a term coined by the sociologist Robert K. Merton decades ago to describe the common dynamic by which inequality increases: “the rich get richer and the poor get poorer.” With the rise of chatbots, I predict a similar dynamic will take hold online. Here’s how it works: use an existing chatbot to generate as much text as you possibly can espousing your causes/beliefs/propaganda. All of the text you produce gets hoovered up by the next generation of chatbots during their training phase, thereby influencing the outputs of future chatbot interactions and increasing your mind share/presence. Rinse and repeat…
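To make the compounding concrete, here is a toy simulation of that loop. This is a back-of-the-envelope sketch, not a model of any real training pipeline: the function, its parameters, and every number in it are invented for illustration. It tracks the fraction of a training corpus espousing one actor’s viewpoint across chatbot generations, assuming that each generation’s new “organic” text mirrors the current model’s mix (since more and more published text is model output) while the actor injects a fixed amount of generated material on top.

```python
# Toy model of the feedback loop sketched above. All quantities are
# invented for illustration; "documents" is the unit of text, and each
# chatbot generation corresponds to one retraining cycle.

def simulate(generations=10, corpus=1_000_000, share=0.001,
             growth=0.5, injection=10_000):
    """share: fraction of the corpus espousing the actor's viewpoint;
    growth: fraction of the corpus added as new organic text per cycle
    (assumed to mirror the current model's mix);
    injection: documents the actor generates and publishes per cycle."""
    viewpoint = share * corpus
    for gen in range(1, generations + 1):
        organic = growth * corpus          # new text mirroring the model
        viewpoint += share * organic + injection
        corpus += organic + injection
        share = viewpoint / corpus         # the next model trains on this mix
        print(f"generation {gen:2d}: viewpoint share = {share:.2%}")

simulate()
```

Under these made-up numbers, the viewpoint’s share of the corpus climbs from 0.1% to roughly 2% in ten generations, an amplification of nearly twenty-fold, and the ratchet only turns one way: text injected in any cycle is baked into every subsequent model’s training data.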

This isn’t a fanciful scenario; strategizing about this prospect has already begun. Andrew Torba, the CEO of the social networking site Gab, recently tweeted: “Do you understand the new reality we live in now? I can run a large language model… create 100 Twitter accounts… and have it tweet pro-Christian Nationalist propaganda all day long in reply threads of journos and celebrities… What happens when thousands of us are doing this and they can’t stop it? We can utterly dominate the entire normie regime web. Are you getting it yet?” [7]

Regardless of your political loyalties, this seems like a less-than-desirable scenario: an arms race in which actors and institutions have strong incentives to influence and manipulate the behavior of the most commonly used chatbots, along with the further proliferation and splintering of subcultures, each aligned around its favored chatbots. In a related context, the writer Cory Doctorow coined the term “enshittification,” which seems like a fitting description of the end game here for humanity’s written heritage.

Chatbots and Unintended Consequences

The above perspectives, especially in the aggregate, might suggest that I hold a relentlessly negative view of the consequences of chatbots and similar generative tools. But I don’t think it’s as simple or one-sided as that. The history of technological progress strongly suggests that in any complex human-designed system or technology, the unintended consequences will invariably come to dominate the intended ones. And although the phrase “unintended consequences” typically carries a negative connotation, it doesn’t have to be that way.

A Wittgenstein quote is apt here: “Uttering a word is like striking a note on the keyboard of the imagination,” and that statement resonates with me. My candidate for the most desirable perspective on chatbots is to conceive of them as “imagination catalysts”: I believe that one of the primary unintended consequences of generative technologies will be a ‘Big Bang’ of creative practices in all of the fine and performing arts, to a degree that we can’t possibly anticipate. That’s an unintended consequence worth looking forward to.

So even though this post has focused on articulating concerns and reservations about the rise of chatbots, in the end, there is nothing inevitable about any potential future scenario. I’m very fond of the quote “the best way to predict the future is to create it,” which captures the spirit of our work at Galois. 

References

[1] Hirstein, William. (2005). “Brain Fiction: Self-Deception and the Riddle of Confabulation”. MIT Press.

[2] McGilchrist, Iain. (2009). “The Master and His Emissary: The Divided Brain and the Making of the Western World”. Yale University Press.

[3] Mercier, Hugo and Sperber, Dan. (2011). “Why do humans reason? Arguments for an argumentative theory”. Behavioral and Brain Sciences 34, 57-111.

[4] Taleb, Nassim N. (2018). “Skin in the Game: Hidden Asymmetries in Daily Life”. Random House.

[5] Dor, Daniel. (2015). “The Instruction of Imagination: Language as a Social Communication Technology”. Oxford University Press.

[6] Enfield, N.J. (2022). “Language vs. Reality: Why Language is Good for Lawyers and Bad for Scientists”. MIT Press.

[7] https://twitter.com/RightWingWatch/status/1635655219848835074