Martin LeFevre: In the end, as in the beginning of new cultures or technologies, philosophy matters most. The philosophical conflict in the murky world of AI development, over the so-called doomers versus the ‘accelerationists’/commercializers, has spilled out onto the boardroom floors. What does it have to do with the rest of us?
The sheer pettiness, power plays and internecine warfare at OpenAI over the firing of “the biggest figure in A.I. today,” Sam Altman, make a mockery of Altman’s pollyannaish view of the future of humanity with AI.
In one sentence, America’s newspaper of record claims Altman’s “views remain as relevant as ever” despite his termination by the board at OpenAI. In the next sentence, Altman reveals himself and the entire ‘accelerationist’ mindset by admitting, “Frankly, it has been such a busy year. There has not been a ton of time for reflection.”
Well, frankly, without more reflection than in a newspaper interview, the future with AI will be the same as the past without it, only much, much darker.
Altman speaks of “how technology and society are going to co-evolve,” as if it is a given that society has ‘evolved’ (a badly overused and misused word) as technology has developed, and will continue to do so into a brilliant future for humanity. Like other big lies that Americans have come to robotically accept as truth, that utterance simply doesn’t hold up to any serious reflection and examination.
Even as policymakers are seriously entertaining proposals to use AI to ‘augment’ if not replace teachers in educating young people, Altman is giddy with the prospects for profits: “We’re encouraging our teachers to make use of ChatGPT in the classroom. We’re encouraging our students to get really good at this tool, because it’s going to be part of the way people live.”
After all, AI’s boy wonder has pronounced, “the benefits far outweigh the downside.”
Though foolish people believe that AI will solve the ecological crisis, end war, and erase the Grand Canyon-wide chasm between rich and poor in the world, it will not and cannot do any of those things, but will make all of them worse.
Out of one side of his arse Altman proclaims that AGI, “Artificial General Intelligence,” is “a ridiculous and meaningless term.” Out of the other side of his rear he says, “I apologize that I keep using it.”
In other words, he has perfected the American trait of having things both ways, and is smart enough to apologize for it even as he keeps doing it.
Altman is a laissez-faire utilitarian, and utters reasonable-sounding things like, “Most of the world just cares whether this thing, AI, is useful to them or not. And we currently have systems that are somewhat useful, clearly…. It’s nice to do useful stuff for people.”
The ironic thing, which never crosses an unreflective mind such as Altman’s, is that utilitarianism, when it comes to AI, is even more meaningless than “Artificial General Intelligence.”
The state of play of Artificial Thought technology is, as Altman says, that what “they’re bad at is reasoning,” even as it’s “vastly superhuman in terms of its world knowledge, because it knows more than any human has ever known.”
Philosophical insight is urgently needed, not because Western philosophy made reason its cornerstone, but because asking the right questions, and following the insights that proceed from them in a logical way, is indispensable to using AI wisely.
Altman insists that “society and the technology of AI are evolving together,” and that “people are using it where appropriate and where it’s helpful and not using it where you shouldn’t.”
What’s actually happening is the acceleration of the dumbing down of the human mind and the numbing down of the human heart.
Human intelligence, which flows from insight rather than knowledge, and human compassion, which flows from the growth of the heart not the ideal of empathy, have not expanded but shrunk as technology has expanded. And AI is accelerating the disconnect between the increase of technology and a harmonious global civilization.
Artificial Thought will soon surpass the human mind. But the real and present danger is to the human brain. Without a deep and abiding insight into the nature and limitation of thought, the brain will be merged with AI. That will not enhance the brain’s capacity for insight, but destroy it. It will not enable the human heart to grow, but will shrivel and kill it.
This is not some ‘doomer’ scenario. ‘Accelerationist’ commercializers, epitomized by Sam Altman, are obliterating the human brain’s capacity for insight, stillness and awareness.
A widespread realization of the human brain’s capacity for insight, which Artificial Thought (Artificial Intelligence is a misnomer) will never have, is urgently needed.
‘Pauses’ in AI’s development, as well as regulation, won’t prevent its destructiveness without insight into higher thought. There must be the space, and the catalyst, for the human brain and the human being to realize our true potential.
Interview with Sam Altman:
Martin LeFevre is a contemplative, and non-academic religious and political philosopher. He welcomes dialogue. email@example.com
Published with permission of the author. All copyright remains with the author.