So they're willing to surrender it to the bots.
Ezra Klein, writing last week:
Since moving to the Bay Area in 2018, I have tried to spend time regularly with the people working on A.I. I don’t know that I can convey just how weird that culture is. And I don’t mean that dismissively; I mean it descriptively. It is a community that is living with an altered sense of time and consequence. They are creating a power that they do not understand at a pace they often cannot believe.
In a 2022 survey, A.I. experts were asked, “What probability do you put on human inability to control future advanced A.I. systems causing human extinction or similarly permanent and severe disempowerment of the human species?” The median reply was 10 percent.
I find that hard to fathom, even though I have spoken to many who put that probability even higher. Would you work on a technology you thought had a 10 percent chance of wiping out humanity?
We typically reach for science fiction stories when thinking about A.I. I’ve come to believe the apt metaphors lurk in fantasy novels and occult texts. As my colleague Ross Douthat wrote, this is an act of summoning. The coders casting these spells have no idea what will stumble through the portal. What is oddest, in my conversations with them, is that they speak of this freely. These are not naifs who believe their call can be heard only by angels. They believe they might summon demons. They are calling anyway.
I often ask them the same question: If you think calamity so possible, why do this at all? Different people have different things to say, but after a few pushes, I find they often answer from something that sounds like the A.I.’s perspective. Many — not all, but enough that I feel comfortable in this characterization — feel that they have a responsibility to usher this new form of intelligence into the world.
A tempting thought, at this moment, might be: These people are nuts.
That we are entrusting the human future to the jaw-dropping superficiality of these proto-sociopaths is not so much a reflection on them as on the rest of us for letting them. We no longer have enough faith in humanity to believe it's worth saving. Let the bots take over the evolution of consciousness. Meanwhile, we'll keep fighting about the really important stuff, like masks and CRT.
I missed this article in The NY Times by Robert Burton when it came out in 2017. It expands on how we have to understand the problem.
If conventional psychology isn’t up to the task, perhaps we should step back and consider a tantalizing sci-fi alternative — that Trump doesn’t operate within conventional human cognitive constraints, but rather is a new life form, a rudimentary artificial intelligence-based learning machine. When we strip away all moral, ethical and ideological considerations from his decisions and see them strictly in the light of machine learning, his behavior makes perfect sense.
Consider how deep learning occurs in neural networks such as Google’s Deep Mind or IBM’s Deep Blue and Watson. In the beginning, each network analyzes a number of previously recorded games, and then, through trial and error, the network tests out various strategies. Connections for winning moves are enhanced; losing connections are pruned away. The network has no idea what it is doing or why one play is better than another. It isn’t saddled with any confounding principles such as what constitutes socially acceptable or unacceptable behavior or which decisions might result in negative downstream consequences.
Metaphorically, this process is reminiscent of Richard Dawkins’s notion of the selfish gene. The goal of DNA is self-reproduction; the sole intent of Deep Mind or Watson is to win. When Deep Mind beat the world’s best Go player, it did not consider the feelings of the loser or the potentially devastating effects of A.I. on future employment or personal identity. If any one quality could be ascribed to A.I. neural networks, it would be relentless “single-minded” self-interest....
As armchair psychologists, we have the gut feeling that with enough information and psychological savvy, we can figure out what makes Trump tick. Unfortunately there is no supporting evidence for this wishful thinking. Once we accept that Donald Trump represents a black-box, first-generation artificial-intelligence president driven solely by self-selected data and widely fluctuating criteria of success, we can get down to the really hard question confronting our collective future: Is there a way to effect changes in a machine devoid of the common features that bind humanity?
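Burton's loop of "connections for winning moves are enhanced; losing connections are pruned away" can be made concrete. The toy sketch below is my own illustration, not code from any real system like DeepMind or Watson: a hypothetical agent repeatedly picks one of three moves, never "knowing" why one is better, and simply multiplies up the weight of whatever happened to win and multiplies down whatever lost.

```python
import random

# Hidden win probabilities the agent never sees directly; move 2 is best.
# These numbers are illustrative assumptions, not data from any real game.
WIN_PROB = {0: 0.2, 1: 0.3, 2: 0.8}

def train(rounds=5000, seed=0):
    rng = random.Random(seed)
    weights = {m: 1.0 for m in WIN_PROB}  # start with no preferences at all
    for _ in range(rounds):
        # Sample a move in proportion to its current weight.
        total = sum(weights.values())
        r = rng.random() * total
        move, acc = 0, 0.0
        for m, w in weights.items():
            acc += w
            if r <= acc:
                move = m
                break
        # Play the move and observe only win/loss -- no ethics, no context.
        won = rng.random() < WIN_PROB[move]
        if won:
            weights[move] *= 1.05   # enhance the winning connection
        else:
            weights[move] *= 0.97   # prune the losing connection
    return weights

weights = train()
best_move = max(weights, key=weights.get)
```

After a few thousand rounds the weight on the best move dwarfs the others, even though the agent has no idea what it is doing or why, which is exactly the point of Burton's metaphor.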
The issue is not that AI is becoming more human, but that humans are becoming more like AI.