After posting yesterday, I came across this Thomas Edsall column from May in which he quotes Stiglitz and Korinek:
In their December 2017 paper, “Artificial intelligence, worker-replacing technological progress and income distribution,” the economists Anton Korinek, of the University of Virginia, and Joseph E. Stiglitz, of Columbia, describe the potential of artificial intelligence to create a high-tech dystopian future.
Korinek and Stiglitz argue that without radical reform of tax and redistribution politics, a “Malthusian destiny” of widespread technological unemployment and poverty may ensue.
Humans, they write, “are able to apply their intelligence across a wide range of domains. This capacity is termed general intelligence. If A.I. reaches and surpasses human levels of general intelligence, a set of radically different considerations apply.” That moment, according to “the median estimate in the A.I. expert community,” is around 2040 to 2050.
Once parity with the general intelligence of human beings is reached, they continue, “there is broad agreement that A.I. would soon after become super‐intelligent, i.e., more intelligent than humans, since technological progress would likely accelerate.”
Without extraordinary interventions, Korinek and Stiglitz foresee two scenarios, both of which could have disastrous consequences:
In the first, “man and machine will merge, i.e., that humans will ‘enhance’ themselves with ever more advanced technology so that their physical and mental capabilities are increasingly determined by the state of the art in technology and A.I. rather than by traditional human biology.”
Unchecked, this “will lead to massive increases in human inequality,” they write, because intelligence is not distributed equally among humans and “if intelligence becomes a matter of ability‐to‐pay, it is conceivable that the wealthiest (enhanced) humans will become orders of magnitude more productive — ‘more intelligent’ — than the unenhanced, leaving the majority of the population further and further behind.”
In the second scenario, “artificially intelligent entities will develop separately from humans, with their own objectives and behavior, aided by the intelligent machines.” In that case, they write, “there are two types of entities, unenhanced humans and A.I. entities, which are in a Malthusian race and differ — potentially starkly — in how they are affected by technological progress.”
In this hypothetical race, “A.I. entities are becoming more and more efficient in the production of output compared to humans,” the authors write, because “human technology to convert consumption goods such as food and housing into future humans has experienced relatively little technological change.” By contrast, “the reproduction technology of A.I. entities — to convert A.I. consumption goods such as energy, silicon, aluminum into future A.I. — is subject to exponential progress.”
Evolution without grace, i.e., evolution driven by the logic of the will to power and greed, has no particular interest in human beings, not even as much as the Mongol hordes were interested in the thousands of humans they massacred or enslaved in their manic project to expand their power and control. Evolution without grace is the default and can be resisted only by moral actors who are grounded in something that transcends it. Evolution without grace cares only about expansion, and if that means that humans are no longer needed at the leading edge of its expansion project because machines will serve evolution better, then what reason have we to think that hyper-intelligent machines won't take over?
I don't think either of these two scenarios is inevitable, but because they have become the fodder for so many Hollywood dystopian entertainments, we tend to dismiss such concerns as unlikely. But what argument is there against all the evidence suggesting that this is where unchecked technocapitalism is headed? And what argument is there that either of these two scenarios isn't more likely than not if there is no intervention to ensure that machines serve human purposes rather than the reverse? Do we just shrug and say, "Well, it will all work out one way or the other"?
How can such an intervention be marshaled if there is no consensus about what human purposes are? Certainly no intervention can be marshaled from within the hegemonic rationalist materialist metaphysical imaginary, because it's precisely that imaginary that supports the value presuppositions of Social Darwinism, i.e., evolution without grace. The point I'm trying to make here is this: These scenarios are, if not inevitable, extremely likely unless there's an intervention. But from where shall it come? Who on the scene today has the moral authority to say No to technocapitalism and the power to enforce such a No?
Our situation vis-à-vis AI is very much like the one depicted in Don't Look Up. In that movie, the impending disaster is a comet heading toward the earth, a MacGuffin for any potential human extinction event, from eco-catastrophe to the rise of the machines. The most important insight on which the premise of the film is based is that even when confronted with extinction, humans are so fragmented, so distracted, and so morally feckless that they are incapable of uniting around an obvious human value: survival. The logic of technocapitalism is the default, and that logic, left unchecked, leads to the extinction of the human project. Only a few sociopaths escape to despoil a planet elsewhere.
So as long as the nihilistic, sociopathic logic of technocapitalism and its rationalist materialist metaphysical imaginary play such an outsized role in shaping our moral and material priorities, what reason have we to think that any kind of resistance to that logic can be mounted? What reason have we to think that any good can come of just letting that logic play out? And where within the rationalist materialist imaginary can we get a foothold to push back against it?
On Sunday, in a post entitled "The Coming Discontinuity," I wrote:
But whatever is going to happen probably has nothing to do with what we expect or with what we want or don't want. So perhaps, for precisely that reason, we should entertain the possibility that it might be smarter to think about the future with imageless hope. That's the attitude of a true wanderer who wonders in the wilderness.
What I've written here yesterday and today is not meant to contradict that, but rather to put us on alert that an attitude of hope requires active vigilance. It does not justify a lazy complacency that whatever happens, happens. Imageless hope for me is rather like Keats's 'negative capability', i.e., a way of being attentive to the world with a kind of radical openness that expects to be surprised. But it's quite possible that if we're not paying attention, we will miss something already there that might actually be a solution. And maybe more than we can understand now depends on our not missing it.