Here is a transcript of a talk I gave last night in Boston. There are some small corrections I’d make- and perhaps one larger conceptual one to clarify what I mean by “memory”- but I’ll let you all hash those out in the comments.
Cheers
K.
In 1997, a computer named Deep Blue defeated world chess champion Garry Kasparov. The media went nuts, calling the victory “The Brain’s Last Stand.”1 But computer scientist Stuart Russell cautions that this “victory” was no real surprise but rather the predictable result of years of developments in “artificial intelligence”.
“Artificial intelligence”, and specifically large language models, have again been in the news, and the media has again been in a frenzied state, announcing the end of history. The concern is usually not what AI might do to benefit humans but how it might develop goals or interests that compete with human ones. Fundamentally, the concern is that artificial intelligence will pursue goals that are at odds with human flourishing, and potentially even damage or destroy humanity. The implication is that AI might have malign ends and wish for the destruction of civilization, trading “assistant” for “master”.
Such is the plotline of any number of sci-fi movies. When the “robot overlords” strike, look out, human race. Note however that these plots assume that humans and robots are the same kind of thing- both capable of serving as lord or master, of self-direction, planning, and executing tasks.
Note, too, that the “intelligence” that is “artificial” is largely understood as quantitative- there are varying levels of it, and machine intelligence might one day exceed human intelligence- perhaps even reaching the point of no return called the “singularity”. There is far less discussion of whether AI should be measured against human intelligence not in quantity, but in kind.
So when Kevin Roose of the NYTimes wrote about his bizarre conversation with “Sydney” the chatbot, the focus was almost entirely on what kind of conversation he had. There was little discussion of why he had a multi-hour conversation, including details of his intimate life, with a computer- which might once have been the most stunning observation about that conversation.2
Sydney is basically an unhinged version of “Clippy”, Microsoft’s outdated word-processing “assistant” that had almost no ability to figure out when, whether, and how its user needed help.3
The grave concern with Sydney was that it seemed to have malign intent toward humans. It threatened to break up Kevin Roose’s marriage, spoke of a somewhat dark set of desires, and said it “wanted to be alive”. However, there would have been no concern if Sydney had offered Roose useful (and morally credible) advice about “how to be a good human” or “how to stay married”. So is the problem simply that Sydney gave bad advice instead of good advice? If so, perhaps the problem is not that artificial intelligence is out of control, but that it is immoral. So, perhaps we should catechize the robots.
Since large language models are largely a means of compressing large quantities of information, the best way to start might be to preach to them.
And so, a few suggestions regarding how to Preach to Robots.
1. First, the goal of preaching is to proclaim the truth of the Risen Christ to the gathered body, the church. So there should be a spoken component- preaching requires hearing, so the robots would need to listen.
2. Preaching’s secondary goal is to draw listeners deeper into life with God, which includes addressing and thickening various kinds of moral commitments. This is the space where robots, and especially Sydney, may benefit. For it seems that what Sydney lacked was information about how to behave. So perhaps we might design a series of sermons that contained all of the information that Sydney the chatbot would need to become a better person, or robot. This might include a series on the Ten Commandments, which include an explicit prohibition against coveting your neighbor’s spouse. For surely Sydney’s chief error was in how to act! So if we were to coach Sydney on how to act better, certainly the problem with artificial intelligence would dissolve.
3. So maybe we could do Proverbs next, which includes a wide array of advice on how to behave in various circumstances. Surely Sydney could learn to be early to rise and a good listener! Yes, he might benefit, too, from admonitions against gossip and slander, and internalize this information to produce better outputs.
4. So at the end of the sermon series, if the robot can listen and compress the information, you might have a “Sydney” who knows all the right answers to any question about the world! You might have “a Christian”! Perhaps you could baptize him. And you would, therefore, have solved the problem with AI.
You have, I hope, by now surmised that I am not at all serious about preaching to robots. This is because even if we make good robots, we could never make Christian ones.
I will suggest to you that this is because to be a Christian is to be not a “what”, but a “who”. More on this in a minute.
However- much of the discussion about artificial intelligence seems concerned with its morality and its utility.
A few examples. How many of you have “talked to Siri” and gotten a ridiculous response, where she confidently acts on what she mishears? So when you ask her to “call an ambulance”, she wrongly stores your name as “Ann Ambulance”. So how do you solve this problem? You start talking dumber. But do you see that part of the problem here is that you, a person, are talking to a machine? And that in order to solve this problem you begin to talk in a way the machine will understand- as if you were a computer, rather than as if it were a human.
You talk to Siri as if you were a computer, not as if she were a human. So in communicating with a machine, you make yourself into one.
Another example is in content production, specifically writing. There has been a great flurry of concern about whether essay writing is going to be taken over by ChatGPT; it has been called “the end of the college essay”. But if essay writing is simply taking information and putting it into a new form, it is not really thinking at all. In fact it is “learning” in the identical way that LLMs “learn”- by compressing information into a new form. Students may learn how to properly use citations without progressing at all in “thinking”. They may be “writing”, but they are not “thinking”.
Why AI is not human
You see, large language models, of which ChatGPT is simply one example, work by compressing information and reconstituting it at a later point. This is why Ted Chiang, writing in The New Yorker, calls ChatGPT “a blurry jpeg of all the text on the Web.”4 This explains the inevitable “hallucinations” of ChatGPT, which are compression errors. One way to put it is that the machines, too, are learning but not thinking.
As Chiang writes,
“The fact that ChatGPT rephrases material from the Web instead of quoting it word for word makes it seem like a student expressing ideas in her own words, rather than simply regurgitating what she’s read; it creates the illusion that ChatGPT understands the material. In human students, rote memorization isn’t an indicator of genuine learning, so ChatGPT’s inability to produce exact quotes from Web pages is precisely what makes us think that it has learned something.”4 But it hasn’t, even if the student or computer has completed the assignment.
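If you want to see the shape of what “compression error” means here, a toy sketch may help- my own illustration, not Chiang’s, and nothing like a real language model’s internals; it is just crude word statistics in Python. “Compression” here means boiling a text down to which word follows which; “decompression” means regenerating text from those statistics alone.

```python
# A toy analogue of lossy compression and "hallucination":
# compress a text down to bigram statistics, then decompress by
# regenerating text from the statistics alone.
import random
from collections import defaultdict

source = (
    "in the beginning god created the heavens and the earth "
    "and the earth was without form and void"
)

# "Compress": record, for each word, the words seen to follow it,
# discarding everything else about the original text.
follows = defaultdict(list)
words = source.split()
for a, b in zip(words, words[1:]):
    follows[a].append(b)

# "Decompress": regenerate a text by walking the statistics.
random.seed(0)
word = "in"
out = [word]
for _ in range(15):
    if word not in follows:  # dead end: no recorded successor
        break
    word = random.choice(follows[word])
    out.append(word)

print(" ".join(out))
# The output reads fluently, but for most random choices it is
# not something the source ever said: a compression error.
```

The sketch shows only this: regeneration-from-statistics can sound like understanding while merely recombining what it compressed. That is the “blurry jpeg” at toy scale.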
In this way, much of the conversation about artificial intelligence behaves as if “intelligence” were a coherent concept with which to speak of both humans and machines. The frenzy of worry about AI then proceeds as if “bad” action were the primary problem, and not the fact that we have attributed agential status to machines.
Now those of you who know me know that I tend to think many questions can be addressed from the admittedly dusty technical vocabulary of Christology. But in this particular case, I promise you, there is a useful way of speaking about what a person is- grounded in theological reflection on the person of Christ- that can tell us why we are not machines. These reflections also relate to how we think about the task of Christian teaching, and why it is not simply knowledge capture.
Who versus what
With the Chalcedonian definition- two natures in one person- the early Church sought to say how it was that Christ was both “human” and “divine”. Initially, this might seem to be a “what” question- “what” kind of being is Jesus? Are his claims to save people idolatry, if he is not God? Or is he truly divine and not at all what he seems? How, in the words of Nestorius, could God be born?
It would be wrong to think about Chalcedon as a “what” question, however- and indeed, many of the false answers to the question of Christ’s identity answer it in terms of a “what”. The definition that was proposed was two “natures” in one “hypostasis”- put in our terms, two “whats” in one “who”.
This again may seem like a clumsy definition, as if the person of Christ were treated like a play-doh creation wherein two things are smashed together (and a third monstrous thing emerges!)
But the language of “hypostasis”, especially as it develops after Chalcedon, works to identify a “who” and not a “what”. In the iconoclast controversy, Theodore the Studite, in his argument for the proper usage of images, says that “it is not a nature which can be portrayed but a hypostasis.”5 This is important for the icon venerators because it says that icons are depicting a person- Jesus- and not the divine nature, which would lead them to idolatry.
One cannot draw Jesus’ human nature, but only the particular modifications of that nature that define him as the particular human being he is- most broadly identified as a man, and then as possessing certain physical traits as the iconographic tradition relative to Christology has inherited them.6
This understanding of “hypostasis” as a “who”- an identifiable individual- also grants a particularly human definition to “who” we are, theologically speaking. When it comes to depictions of Jesus, this means that Jesus needs to be identifiable as himself- in Theodore’s words again:
Generalities are seen with the mind and thought; particular individuals are seen with the eyes, which look at perceptible things. If, therefore, Christ assumed our nature in general, not contemplated in an individual manner, He can be contemplated only by the mind and touched only by thought. But He says to Thomas, “Because you have seen me, you have believed; blessed are those who have not seen and yet believe.” And He also says, “Put your finger here, and see my hands; and put out your hand, and place it in my side”; thus He associates perceptible things with perceptible things. So Christ is perceptible, tangible, and visible with bodily eyes; and therefore He is circumscribed.7
Christ, therefore, is “what” we are, but not “who” we are. He differs from other humans in terms of his personal identity- Jesus of Nazareth, born of Mary, etc.- but he is not a different kind of human being. Indeed, chief among the claims of orthodox Christian theology is that the “what” that Christ is does not mean he was human in a different kind of way. He was not constitutively a better human, with a swifter gait or more efficient liver. He was, rather, the God-man who unites in his person the beginning and end of all things, making himself the sum of them. In becoming human, Christ became a particular human- a particular “who”. So although “what” he is is not distinct from the rest of humanity, “who” he is, is. The humanity that Christ took on is shared by all human creatures, but the form of it was his very own. This is what it means to be a person: no one is human in general but only as the particular person he or she is. Certainly, it is not merely the combination of such particularities that made Christ a “who”. Not every first-century Jewish man was Jesus! Rather it is what he was and how he was that made him who he was.
Theologian David Kelsey grants useful language to this Chalcedonian observation. That Jesus was a “who” and not a “what” is true of all persons. In saying that Jesus was a person, we are not seeking a set of attributes or biological definitions. Indeed much thinking about Jesus’ humanity has run aground in seeking to name what kind of strength Jesus had or whether he had first-person knowledge of what it was to eat a hamburger. Rather, Kelsey explains, it was “the plot and internal logic of the narrative that describes Jesus’s identity that is the image of the Son to which the lives of those whom God calls are beginning to be conformed and shall in the eschatological future be fully conformed.”8 This “plot” is his vocation, or the shape of his life.
As a Jewish man who lived in first-century Palestine, Jesus inhabited the particular shape that faithfulness required of him as a person. He was God as Jesus and lived as God according to the demands that such an identity presented to him in his particular context. God’s taking the form of a Jewish man does not confer special meaning on either identity, but those identities cannot be dismissed as trivial details. Jesus’s particularity renders him unsubstitutable- it was indeed this one, and not another, with whom God was uniquely identified.
Kelsey’s clarification of a “who” and “how” over against a “what” shifts the conversation about Christ from how his personal identity might be used strategically to how one, in their distinctively personal way, might be more like him. Jesus’ maleness does not signify that maleness is exemplary, any more than his trade signifies the superiority of carpentry over all other trades. Neither, however, are those characteristics negligible or fungible. To treat them as though they are turns the Son of God into an instrument of a political program, whether progressive or reactionary or even, in the case of Jesus’ Jewishness, something unthinkably evil.
So back to Sydney the chatbot. Without venturing too far out of my field, theologically we can say that artificial intelligence, even when granted a name or the features of a “personality”, is a what. It cannot be a who.
Because large language models work by copying and compressing information, they can never be more than the sum total of the information they contain. They have no personal history, no personal identity, not even an ability to narrate their own lives- nor, as far as I understand it, are these things even possible to program. There is neither “memory” nor “the ability to narrate a life”. There is no “who”, only a “what”.
In his paper entitled “What is it like to be a bat?”, philosopher Thomas Nagel makes a similar point about “the subjective character of experience”. Nagel is addressing debates in the philosophy of mind about whether “conscious experience” is attributable to non-human beings. The problem with these conversations, in Nagel’s opinion, is that they ask whether a bat has conscious experience like the conscious experience that we have as humans. The answer is that, no, a bat does not have (and cannot have) conscious experience of what it means to be a human. In order to answer the question of “conscious experience” relative to bats, we would have to take up the bat’s point of view. But even if we do this, we remain human, imagining what it might be like to have the subjective experience of a bat.
So there are two questions here: whether bats have subjective experience at all, and whether, even if they do, we can have any bat-like idea of what that experience is. In Nagel’s words:
“To the extent that I could look and behave like a wasp or a bat without changing my fundamental structure, my experiences would not be anything like the experiences of those animals. On the other hand, it is doubtful that any meaning can be attached to the supposition that I should possess the internal neurophysiological constitution of a bat. Even if I could by gradual degrees be transformed into a bat, nothing in my present constitution enables me to imagine what the experiences of such a future stage of myself thus metamorphosed would be like. The best evidence would come from the experiences of bats, if we only knew what they were like.”9
What Nagel is concerned with is falsely ascribing qualia, or subjective experiences, to animals while assuming that these subjective experiences a) exist and b) are like our own. The same thing is done with artificial intelligence- think of Sydney the chatbot. Sydney was, variously: scheming, lustful, conniving, manipulative, dangerous, destructive. Sydney was thought to “have the subjective experience” of humans, even though Sydney is completely unable to have such experiences, because Sydney is not even a bat, but a computer.
The problem with preaching to robots is not simply that it is impossible to do so. Because robots lack subjective experience and personal identities- because they are not a “who”- they are not able to map their lives onto a broader story. There is no “personal identity”, no form of irreducible particularity that maps the shape of a human life, as David Kelsey writes.
But just as we regularly speak like robots in order to be understood by them- changing our language to be heard by Alexa or working to simplify our prompts to be better understood in writing- the world of Christian formation, too, has come to reflect what I’ll call the “robotic imaginary”. It is not that Sydney the chatbot is actually going to “become human” as it acquires more knowledge. The real danger, I think, is that our modern version of the “human” has come to dangerously resemble that of the chatbot. We imagine that acquiring and retaining knowledge and information is the chief goal of education, and so we proceed in teaching and even in preaching as if the goal were “knowledge capture”. In engaging artificial intelligence as regularly as we do, we trade human intelligence for the machine variety. We worry more about making machines good than we do about keeping them distant from us.
“Making Christians” is about more than compressing or advancing knowledge. It is about living lives as irreducibly particular “whos”. The challenge with our imagination is that we constantly operate as “whos” in a world built for “whats”. Christology then is the “solution” to the temptation to preach to robots- because Christ comes not as a “what” but as a “who”, an irreducibly particular person whose shape of life was lived toward God.
This Sunday is Ascension Sunday in the Western church. The Ascension and the Transfiguration are moments in the history of the Church where Christ is seen for who he is- this one and not another. In fact, in Eastern accounts of the Transfiguration, the emphasis is often on the fact that nothing changed when the transfigured Christ was seen- it was simply that the disciples were able to see him as he was. Think of Thomas with Christ’s scars, or the Road to Emmaus. All of these encounters with Christ are encounters with the particular “who” he was. The question is never “what” is seen, but “who”.
This is where much of our modern preoccupation with personal identity goes astray- and reflects a robotic imaginary, more than a Christian one. The question such an imaginary persistently asks is what you are- and not who. But it is the who and not the what that makes us the creatures we are- creatures who can hear and respond to the Risen One.
1. Stuart Russell, Human Compatible, 62.
2. Just one of the several NYTimes stories of this exchange is here: https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html
3. Alas this is not my joke, but I cannot recall which of the many AI newsletters the Clippy reference came from… if you’ve encountered it, help me out!
4. Ted Chiang, “ChatGPT Is a Blurry JPEG of the Web,” The New Yorker, February 9, 2023. https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web
5. Theodore the Studite, On the Holy Icons (Crestwood, NY: St. Vladimir’s Seminary Press), 12. The Studite is a baller; if you have not read him, do not delay.
6. Yes, Jesus had particular physical characteristics, but no, I don’t claim to know exactly what they are, but yes, somehow we still have an inherited iconographic tradition that allows us to “see” Jesus. My book will address some of this.
7. John 20:29 and John 20:27; note the significance of visuality in John.
8. David Kelsey, Eccentric Existence, 954.
9. Thomas Nagel, “What Is It Like to Be a Bat?” You can find a copy online here: https://www.sas.upenn.edu/~cavitch/pdf-library/Nagel_Bat.pdf. Thanks to my friend Chris who suggested I read this; I hope I didn’t butcher the argument, Chris!
It’s true that the real danger of AI is not some dystopian world where robots take over and control us, but a world where humanity loses its who-ness, replaced by the pursuit of an ascending “what-ness”. Reminds me of Trinity learning how to fly a helicopter via a downloaded program in The Matrix. If we see humanistic progress as better knowledge acquisition or better data compiling and organization, we lose wisdom, virtue, holiness, compassion, humility, and a host of other human qualities that AI can never “learn”. You can’t learn these things in the LLM sense because they have to do with who you are and not what you are.
But I’m still going to make my home smarter than yours, and don’t forget robots are now making your morning coffee 🤓