The AI Capitalists Don't Realize They're About To Kill Capitalism

-Analysis-

BERLIN: An open letter published by the Future of Life Institute at the end of March called for all labs working on artificial intelligence systems more powerful than GPT-4 to immediately pause their work for at least six months. The idea was that humanity should use this time to take stock of the risks posed by these advanced systems.

Thousands of people have already signed the letter, including big names such as Elon Musk, who is an advisor to the Future of Life Institute. The organization's stated aim is to reduce the existential risks to humankind posed by such technologies.

They claim the AI labs are locked in an out-of-control race to develop and deploy ever more powerful minds that no one, not even their creators, can understand, predict, or reliably control. Forbes magazine wrote: "In the near term, experts warn AI systems risk exacerbating existing bias and inequality, promoting misinformation, disrupting politics and the economy, and could help hackers. In the longer term, some experts warn AI may pose an existential risk to humanity and could wipe us out."

Although these warnings sound sensible, the fact that Elon Musk's name is at the top of the list of signatories to the open letter is worrying enough. When Musk starts speaking about ethics and social responsibility, alarm bells start ringing.

We may remember his last big ethical intervention: his takeover of Twitter, to ensure that it remained a trustworthy platform for democracy.

So what has caused this sudden wave of panic? It is about control and regulation, but control in whose hands? In the suggested six-month pause, humankind can take stock of the risks. But how? Who will represent humankind in this capacity? Will there be a global, public debate?

What about those IT labs that will (as we must expect) secretly continue their work, with the authorities turning a blind eye, not to mention what other countries outside of the West (China, India, Russia) will do? Under such conditions, a serious global debate with binding conclusions is unimaginable. What is really at stake here?

In his 2017 book Homo Deus, Israeli historian Yuval Noah Harari, who also signed the open letter, predicted that the most realistic outcome of developing true AI would be a radical division within human society, one that would be far more serious than the divisions imposed by class.

Harari predicted that, in the near future, biotechnology and computer algorithms would join forces to produce "bodies, brains and heads," meaning that the gulf between those who knew how to construct these and those who didn't would widen dramatically: those who are driving forward progress would achieve godlike abilities of creation and destruction, while those left behind would struggle to survive.

The panic expressed in the open letter from the Future of Life Institute is motivated by a fear that those driving forward progress will no longer be able to control what they create. In short, it expresses our fear of our new, digital overlords.

Clearly, what the open letter is aiming for is not a wide-ranging public debate; it is more like cooperation between governments and companies. The threat posed by advanced AI is very serious, and it affects those in power as well as those currently involved in developing, owning and controlling AI.

The future waiting on the horizon is nothing less than the end of capitalism as we know it: the prospect of a self-reproducing AI system that requires less and less human involvement (the explosion of automated trade on the stock exchange is the first step in this direction). So the true choice facing us is clear: a new form of communism or unchecked chaos, in which machines interact with us as pseudo-human partners.

Many lonely (and also not so lonely) people spend their evenings having long conversations with chatbots, talking about new films and books, debating political and ideological questions, and so on. It's not surprising that they find these exchanges relaxing and satisfying: to repeat an old joke of mine, what they get from this exchange is an AI version of decaffeinated coffee or a sugar-free drink, a neighbor with no hidden motives, an Other who perfectly meets their needs.

There is a kind of fetishistic denial at work here: I know (that I'm not talking to a real person), but still (it feels like I am, without any of the risks that arise from a conversation with a real person)! Therefore a chatbot is at its most disingenuous when it openly admits that it is just a machine: for example, if you ask a chatbot which films it likes, the response will likely be an honest admission that it is a machine and has no individual preferences. I recently asked a chatbot: "Should I be a communist?" The answer I received was:

"As an AI chatbot, I have no personal opinions or convictions, and I can't make decisions for you. In the end, the decision to embrace a particular political ideology, for example communism, is a personal choice that should be made based on carefully weighing the principles, values and possible outcomes of the ideology."

Why is that a lie? The content of the answer is true, but it is a lie because of the implied position: a chatbot speaking as if it were a real person, precisely in that it openly admits its limitations and the fact that it is not, in fact, a real person. This mystification is the opposite of the fetishistic denial: the AI system is not denying anything; it is simply saying, "I know that I'm not a real person!", without a "but still", because the "but still" is the very fact that it is speaking (and thereby imitating subjectivity).

On closer reading, it is easy to see that the attempts to take stock of the threats posed by AI will tend to repeat the old paradox of forbidding the impossible: a true post-human AI is impossible, therefore we must forbid anyone from developing one... To find a path through this chaos, we should look to Lenin's much-quoted question: "Freedom for whom, to do what?" In what way were we free until now? Were we not being controlled to a far greater extent than we realized?

Instead of simply complaining about the threat to our freedom and intrinsic value, we should also consider what freedom means and how it may change. As long as we refuse to do that, we will behave like hysterics, who (according to French psychoanalyst Jacques Lacan) seek a master to rule over them. Is that not the secret hope that recent technologies awaken within us?

The post-humanist Ray Kurzweil predicts that the exponential growth of the capabilities of digital machines will soon mean that we will be faced with machines that not only show all the signs of consciousness but also far surpass human intelligence.

We should not confuse this post-human view with the modern belief in the possibility of having total technological control over nature. What we are experiencing today is a dialectical reversal: the rallying cry of today's post-human science is no longer mastery, but surprising (contingent, unplanned) emergence.

The philosopher and engineer Jean-Pierre Dupuy, writing many years ago in the French journal Le Débat, described a strange reversal of the traditional Cartesian-anthropocentric arrogance that underpinned human technology, a reversal that can clearly be seen in the fields of robotics, genetics, nanotechnology, artificial life and AI research today:

"How can we explain the fact that science has become such a risky activity that, according to some top scientists, today it represents the greatest threat to the survival of humankind? Some philosophers respond to this question by saying that Descartes' dream of being lord and master of nature has been proven false, and that we should urgently return to mastering the master. They have understood nothing. They don't see that the technology waiting on the horizon, which will be created by the convergence of all disciplines, aims precisely for a lack of mastery.

The engineer of tomorrow will not become a sorcerer's apprentice due to carelessness or ignorance, but of his own free will. He will create complex structures and try to learn what they are capable of by studying their functional qualities, an approach that works from the bottom up. He will be a discoverer and experimenter, at least as much as a finisher. His success will be measured by how far his own creations surprise him, rather than by how closely they conform to the list of aims set out at the start."

Even if the outcome cannot be reliably predicted, one thing is clear: if something like post-humanity truly comes to pass, then all three fixed points in our worldview (man, God, nature) will disappear. Our humanity can only exist against the backdrop of inscrutable nature, and if, thanks to biogenetics, life becomes something that can be manipulated by technology, human life and the natural world will lose their natural character.

And the same goes for God: what people have understood as God (in historically specific forms) only has meaning from the perspective of human finiteness and mortality. God is the opposite of earthly finiteness, and as soon as we become homo deus and achieve characteristics that, from our old human perspective, seem supernatural (such as direct communication with other conscious beings or with AI), that is the end of gods as we know them.
