One of my intellectual hobbies is thinking about artificial intelligence and animal minds. Though “artificial intelligence” is a constant topic of interest in the media and among the online-critic-class, it’s often discussed with a stunning lack of curiosity. The chief concern is that AI, especially in the form of large language models, is going to render humans redundant. The idea is that one day you will have a robot-nurse and a robot-nanny, and humans will be left to rot, without meaningful work.
But it gets worse! One additional concern is that AI harbors malign intent toward humans. Maybe the chatbots intend to harm the people they serve, like in this example of a man who killed himself after receiving encouragement from a chatbot. Perhaps the chatbots have broader intentions, like taking over a workplace or a network and using it to rule their humans.
Either way, productivity, domination, and destruction seem to be the primary goals of artificial intelligences when they are plotting their futures. When “left among themselves”,1 “the robots”2 talk about plans for world domination. They never talk about going to Coney Island to ride the rollercoaster, or maybe grabbing one of those tables with the checkered tablecloths at an old-fashioned Italian place and eating their weight in spaghetti.
Chatbots don’t technically have bodies or stomachs, so am I being glib with these examples? I don’t think so. Why is it that artificial intelligence is always plotting to overthrow the world? Is it maybe because it is getting this idea from somewhere?
Starting at least with George Orwell, the popular vision of automated technology has been of a threatening presence that seeks to oust human uniqueness. Automated assistants always seek to outpace and replace their human overlords; they are never interested in other things, like pleasure or delight or play. Jumping rope figures low among the priorities of today’s AI. Why is this?
Is it because we humans are constantly interested in things like how to be more productive and how to destroy things more completely? Is it because today’s AI is being trained on our insane, unthinking apocalypticism? Has it been developed as part of the broader worldwide war machine that technology always underwrites?
I think we need to consider the hysteria about AI as a mirror and not a threat. Are “the robots” threatening to do things that we are also always threatening to do— like making work obsolete, or destroying industries or each other?
These robots in our cultural imagination have two potential roles: they can make us more productive, or they can destroy us. So much of our conversation about the project of being human circles the drain of these two objectives, as well. But there are other options.
It’s not, of course, as simple as starting to tell artificially intelligent devices to “make art”; that’s been tried, and they’re terrible at it. But here, too, is another key, for “making art” can never be separated from wonder and delight. Never! Artists are notoriously cranky and profligate people; they have to be! Wondering at the beauty that exists in this world where there is a rupture at its heart, where life and death are separated by a thumbnail, where the veins in our skin can literally be seen from the outside: all of this reminds us that death is so near. And death and suffering are often the subjects of great art,3 because they remind us that we are human.
So what does this have to do with animals? What if we get more clarity by asking not whether machine minds are like human ones, but whether they are like animal ones?
One of the random tidbits that I am often pulling out is that elephants have funerals.4 It goes without saying that elephant memorial services are not like human ones, but this is not terribly relevant; as with Thomas Nagel’s famous argument in “What Is It Like to Be a Bat?”, there is simply no way to know what elephants know, or how they know it. We’ll never know what it is to be an elephant from an elephant’s point of view!
But there are a few things animals certainly do have that machines do not. They demonstrate perspective-taking, which allows them to also have forms of collective action, or “politics”.5 They imitate one another in ways intended to gain status within their social group. Chimpanzees have been observed copying a more dominant animal’s new hairstyle.6 And they also grieve the losses of their friends. In their tears at the deaths of their friends, animals make a moral claim on us. Their memories can look back, not simply ahead. They can remember, and can memorialize what they’ve lost. Put more simply, they can suffer.7
The idea that technological development is always invested in bringing a “brighter tomorrow” links it to a form of millenarianism that is deeply religious.8 The idea that man might “become as gods” in the technological present has become a certainty, even a project. Everything has become a project! We cannot conceive of man without a project.
But as Man increasingly becomes conflated with A Project, we are losing those aspects of humanity that make religion possible at all. Things like grief, and suffering, and the kind of hope that limps a little.
In The Shallows, Nicholas Carr writes that “The process of our mental and social adaptation to new intellectual technologies is reflected in, and reinforced by, the changing metaphors we use to portray and explain the workings of nature” (50). Carr notes how “God became the Great Clockmaker” (50), as clock technology itself was broadly adopted. Once clocks were used to track time, “time” became an unrelenting external reality. In our time, I fear God has become a Machine, a grand repository of data that will grant us eventual peace, or doom. He’s been turned into a therapeutic option, where we ask him for certain forms of wellness or healing and he is bound to deliver them. Instead of immanentizing the eschaton, we’ve internalized it.
Rod Dreher’s new book tries to get people to re-engage with God in “a secular age”. But it’s not clear to me that a technologically obsessed culture like ours can be said to be a “secular” one. I’m just not certain that belief in vaccines or in automation is not itself a religious belief, albeit a misguided one that has jettisoned suffering in the process. Religion without suffering might be the best description yet of technological man. Suffering is at least one of the things that separates animals from machines.
If you haven’t yet, give a listen to this podcast, where we discuss some of these concerns.
1. There are so many odd ways of speaking about AI and the intelligence it has and how it operates; this idea that there is a spatial aspect to how the intelligence operates is one example.
2. Even what to call the personification of AI is super clumsy, which is a hint that the whole thing is ill-conceived. I am collapsing my observations about “AI” and large language models, though they are logically separate and discrete entities.
3. Cimabue’s crucifix is the best in class; I will be taking no questions.
4. Here’s a short and deeply sad explanation of the process: https://www.harpercollins.com/blogs/harperkids/the-five-animals-that-grieve#:~:text=Elephants&text=They%20bury%20their%20dead%20and,one%20another%20with%20their%20trunks.
5. For examples of this, see Frans de Waal, Chimpanzee Politics: Power and Sex Among Apes (2007).
6. Frans de Waal, Are We Smart Enough to Know How Smart Animals Are? (2017).
7. This has been one of the dividing ethical lines for those who argue for animal rights. Maybe more on this later.
8. I’ve been low-key obsessed with this thesis from David F. Noble’s The Religion of Technology: The Divinity of Man and the Spirit of Invention (1997).
Good post. It is difficult to unblur the distinction between man and machine. We have so normalized it. We have forgotten who we are.
Mary Midgley has observed how we began using machine imagery for ourselves as early as the seventeenth century.
We constantly live with machines, trusting in them for what they offer (and even what they don’t), and become like them. We think in their terms rather than God’s. And so we are reduced to productive cogs and wheels instead of cognitive wholes.
Midgley says: “the reductive, atomistic picture of explanation, which suggests that the right way to understand complex wholes is always to break them down into their smallest parts, leads us to think that truth is always revealed at the end of that other seventeenth-century invention, the microscope. Where microscopes dominate our imagination, we feel that the large wholes we deal with in everyday experience are mere appearances. Only the particles revealed at the bottom of the microscope are real. Thus, to an extent unknown in earlier times, our dominant technology shapes our symbolism and thereby our metaphysics, our view about what is real. The heathen in his blindness bows down to wood and stone—steel and glass, plastic and rubber and silicon—of his own devising and sees them as the final truth.”
We are persons. We cannot be reduced to atoms, no matter what our daily experience of tech might tempt us toward.