News & Information

Human Being Is Not a “Very Small Phenomenon”

Martin LeFevre:  The problem with the commentaries on AI that I’ve read is not a failure of imagination, but a failure of perception. Neither AI experts nor the plethora of commentators on AI seem to have insight into thought. So how can they have insight into the emergence of ‘sentient’ thought machines, much less consciousness without thought?

The reason that people like me are writing a lot about the explosion of AI capability is not just to warn of its dangers from its use in militaries and misinformation politics (as if the perils of both aren’t sufficiently dire as things are). The implicit and even explicit reason is that the prospect of “artificial general intelligence” – AI with cognitive abilities equal to or greater than those of the humans who made it – is driving home ancient questions about what it means to be human.

Here’s an example of the philosophical confusion surrounding the emergence of “LLMs” (large language models like ChatGPT), from Stephen Thaler, who has conducted artificial intelligence research and development for decades.

As reported in the national media, Dr. Thaler describes his system as “having the machine equivalent of feelings, since it becomes digitally excited, producing a surge of simulated neurotransmitters, when it recognizes useful ideas.” This sets off, in Thaler’s words, “a ripening process, and the most salient ideas survive,” which he says confirms that an ability to recognize and react in that way amounts to sentience.

First, there is no philosophical or scientific agreement on what sentience is. The three standard definitions vary so much that each should have its own word.

The first definition of sentience is “responsive to or conscious of sense impressions.” Which is it? The crab I got a good look at yesterday by moving a rock in the middle of the creek was clearly responsive to sense impressions. It crab-walked (pardon) around my feet and hid under another rock. We would no more say that the crab is conscious of its sense impressions than that it walked like a human.

So the very first definition of sentience contains two very different meanings. The other two, “having or showing realization, perception, or knowledge; aware;” and “finely sensitive in perception and feeling,” are different things, and add to the confusion.

To be fair, AI geeks are referring to the second definition when they speak of LLMs having “sentience.” In becoming “digitally excited,” are thought machines exhibiting “the machine equivalent of feelings”? That’s not just dubious and highly debatable; it strikes me as an absurd projection of human mental and emotional states onto machines.

More to the point, just because AI boys and girls are able to recursively program their programs so that they cumulatively learn, and even learn from their mistakes (a trait that many humans have difficulty with), does not mean the machine is self-aware. Much less that AI is or will be “finely sensitive in perception and feeling,” presumably like Data on Star Trek.

On the other hand, the thought machines we’re making in our own image are overthrowing something very fundamental to how we have conceived ourselves as humans.

As Douglas Hofstadter, an eminent cognitive scientist and AI skeptic who has written extensively and influentially on artificial intelligence, recently said: “It’s a very traumatic experience when some of your most core beliefs about the world start collapsing. And especially when you think that human beings are soon going to be eclipsed.”

The rise of ChatGPT and its peers, Hofstadter exclaims, “just renders humanity a very small phenomenon compared to something else that is far more intelligent and will become incomprehensible to us, as incomprehensible to us as we are to cockroaches.”

The basis of his flip-flop on the limitations of AI is that LLMs can now do what smart humans do: “putting your finger on the essence of a situation by ignoring vast amounts of information about the situation and summarizing the essence in a terse way.” If AI can do this kind of thinking, Hofstadter concludes, then it is developing consciousness.

That’s a very limited view of what it means to be a human being, much less of what consciousness is or could be in the human being. What is “soon going to be eclipsed” is our hubristic cognitive capabilities, which have allowed us to dominate and decimate the planet. Good riddance.

Hofstadter makes his commonplace philosophy plain: “In my book, ‘I Am a Strange Loop,’ I tried to set forth what it is that really makes a self or a soul. I like to use the word ‘soul’, not in the religious sense, but as a synonym for ‘I’, a human ‘I,’ capital letter ‘I.’ So, what is it that makes a human being able to validly say ‘I’? What justifies the use of that word? When can a computer say ‘I’ and we feel that there is a genuine ‘I’ behind the scenes?”

I would ask, why in God’s name privilege the ‘I,’ the self, and make it synonymous with soul? And how can anyone use the word soul in talking about human essence except in a religious sense?

What is the self, and does it have any reality apart from the memories, experience and images stored in thought? The ‘I’, the self, is a program, a contradictory and conflict-ridden operating system generated by thought to uphold human separateness and specialness. That’s what is in jeopardy with AGI.

The ‘I’ is indeed a “strange loop,” as Hofstadter says, and with its negation in fully conscious attention, the human brain transcends thought, and is no longer a prisoner of its separation, alienation and fragmentation.

What is consciousness when there is no thought, and so no ‘I’? It’s something that no thought machine, however convincing its simulation of self and consciousness, can ever touch.

Therefore if we are to remain human and flower as human beings, we will have to stop privileging “capital letter ‘I’” and lowercase consciousness, and start fully awakening insight and capital letter Consciousness.


Martin LeFevre is a contemplative, and non-academic religious and political philosopher. He welcomes dialogue.

Published with permission of the author. All copyright remains with the author.