The race to God-like AI and what it means for humanity – The Australian Financial Review

Lisa: For decades, there's been this fear about AI overtaking the world. We've made films and series about machines becoming smarter than humans, and then trying to wipe us out. But there's particular debate and discussion now about the existential threat of AI. Why is everyone talking about it now?

John: Well, since late last year, we had ChatGPT sort of burst onto the scene, and then Google's Bard and Microsoft's Bing quickly followed. And suddenly millions of people, potentially billions of people in the world, are exposed to AI directly in ways that they never have been before. And at the same time, we've got AI ethicists and AI experts who are saying, well, maybe this is happening too fast. Maybe we should step back a little and think about what the downside is. What are the risks of AI? Because some of the risks of AI are pretty serious.

[In March, after OpenAI released the latest model of its chat bot, GPT, more than 1000 people from the tech industry, including billionaire Elon Musk and Apple co-founder Steve Wozniak, signed a letter calling for a moratorium on AI development.]

John: On the development of anything more powerful than the engine that was under ChatGPT, which is known as GPT-4. And there was a lot of controversy about this. And in the end, there was no moratorium. And then in May ...

[Hundreds of artificial intelligence scientists and tech executives, including ChatGPT's creators, signed an open letter warning about the threat posed to humanity by artificial intelligence.]

Another group of AI leaders put their names to a one-sentence statement, and the signatories included Sam Altman, the guy behind ChatGPT.

[Altman: My worst fears are that we cause significant, we, the field, the technology, the industry, cause significant harm to the world.]

John: And Geoffrey Hinton, who is often referred to as the godfather of AI.

[Hinton: I think there are things to be worried about. There's all the normal things that everybody knows about, but there's another threat. It's rather different from those, which is: if we produce things that are more intelligent than us, how do we know we can keep control?]

Lisa: I've got that statement here. It was only one line, and it read: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

John: And the statement was deliberately pretty vague. It was designed to get people thinking but without giving you enough sort of detail that you could criticise it.

Like, we know that there's going to be another pandemic, and we've had the threat of nuclear war hanging over us for a long time. We don't know for sure that there's going to be human extinction because of AI, but it is one of those things that could happen.

Well, arguably it's already a threat. There's the classic example of when Amazon was using an AI to vet resumes for job applicants.

And then they discovered that the AI was deducting points from people's overall scores if the word "woman" or "women" appeared in the resume.

[The glitch stemmed from the fact that Amazon's computer models were trained by observing patterns in resumes of job candidates over a 10-year period, largely from men, in effect teaching themselves that male candidates were preferable.]

So the data set that Amazon gave the AI to learn from already contained those biases. That's called misalignment: you think the AI is doing one thing, which is a fast and efficient job of wading through resumes, but it's actually not doing quite the thing you asked for.

And there's another classic example of misalignment. In 2020 and 2021, a group of pharmaceutical researchers who were AI experts, and who had been using AI to generate pharmaceuticals for human good for some time, decided to see what would happen if they turned that very same machine towards dangerous goals. They told the AI: rather than avoid toxic compounds, invent some toxic compounds for me. They ran it for around six hours, I think, and in that time the artificial intelligence came up with about 40,000 toxic compounds, many of them new. And one of them was almost identical to a nerve agent known as VX, which is one of the most pernicious chemical warfare agents there is. So that was 2021.

And there have been big improvements since then, as we've all seen with ChatGPT and Bard and things like that. So people are starting to wonder: what does the threat become when the artificial intelligence gets really smart, when it becomes what's known as an artificial general intelligence, which is something much like a human-level intellect? Once it reaches the level of AGI, a lot of AI ethicists and AI researchers think that the risk is just going to get so much bigger.

Lisa: So for many computer scientists and researchers, the question of AI becoming more intelligent than humans, moving from, let's get the acronyms right, AI, artificial intelligence, to AGI, artificial general intelligence, is one of when rather than if. So when is it expected to happen? How long have we got?

John: Well, there are actually two things that are going to happen along this pathway.

There's the move from where we are now to AGI. And then there's the move from AGI, which is roughly human-level intelligence, to God-level intelligence. Once it hits that God-AI level, also known as superhuman machine intelligence, or SMI for another acronym, that's when we really don't know what might happen, and that's when a lot of researchers think that human extinction might be on the cards. The second phase, getting from AGI to SMI, could actually happen very fast relative to the historic development of artificial intelligence. There's this theory known as recursive self-improvement.

And it goes something like this: you build an AGI, an artificial general intelligence, and one of the things that the AGI can do is build the next version of itself. And one of the things that the next version of itself is very likely to be better at is building the next version of itself. So you get into this virtuous, or vicious depending on your perspective, cycle where it's looping through and looping through, potentially very quickly.

And there's sort of a betting website, a forecasting website, called Metaculus, where they asked this question: after a weak AGI is created, how many months will it be before the first super-intelligent oracle appears? And the average answer from experts on Metaculus was 6.38 months.

So in that sense, the second phase could be quite fast. The question is, how long will it take for us to get from where we are now, ChatGPT, to an AGI, to a human-level intelligence? Well, a lot of experts, including Geoffrey Hinton, the godfather of AI, used to think it would take around 30 to 50, maybe 100, years to get from where we are now to an artificial general intelligence. But now a lot of researchers are thinking it could be a lot faster than that: it could be two or three years, or certainly by the end of the decade.

Lisa: We've talked about how we got to this point, and what's coming next, AI becoming as good at thinking as humans are, and about how that might happen sooner than expected. So what are we so afraid of?

John: Well, it's important to point out that not everyone is afraid of human extinction as the end result of AI. There are a lot of good things to come from AI: there's drug discovery in ways that we've never seen before, and artificial intelligence was used as part of the response to the pandemic, when AI helped rapidly sequence the COVID-19 genome. There's a lot of upside to AI. So not everyone's worried about human extinction. And even the people who are worried about AI risks are not all worried about extinction. A lot of people are more worried about the near-term risks: the discrimination, and the potential that AI, or generative AI in particular, could be used for misinformation on a scale we've never seen before.

[Toby Walsh: I'm the chief scientist at UNSW's new AI Institute. I think that it's intelligent people who think too highly of intelligence. Intelligence is not the problem. If I go to the university, it's full of really intelligent people who lack any political power at all.]

John: And he said he's not worried that artificial intelligence is going to suddenly escape the box and get out of control in the way that it does in the movies.

[Toby Walsh: When ChatGPT is sitting there, waiting for you to type its prompt, it's not thinking about taking over the planet. It's just waiting for you to type your next character. It's not plotting the takeover of humanity.]

John: He says that, unless we give artificial intelligence agency, it can't really do much.

[Toby Walsh: Intelligence itself is not harmful, but most of the harms you can think of have a human behind them, and AI is just a tool that amplifies what they can do.]

John: It's just a computer. It's not sitting there wondering, how can I take over the world? If you turn it off, you turn it off.

Lisa: But there are a growing number of experts who are worried that we won't be able to turn it off. So why is there so much anxiety now?

John: You've got to keep in mind that Western culture has sort of mythologised the threat of artificial intelligence for a long time, and we need to untangle that. We need to figure out which are the real risks and which are the risks that have just been the bogeyman since machines were invented.

Firstly, it's important to remember that AI is not conscious in the way that we understand human consciousness. ChatGPT doesn't sit there waiting for you to type in keystrokes, thinking to itself that it might just take over the world.

There's this thought experiment that's been around in AI for a while: it's called the paper-clip maximiser. And the experiment runs roughly along these lines: you ask an AI to build an optimal system that's going to make the maximum number of paper-clips, and it seems like a pretty innocuous task. But the AI doesn't have human ethics. It's just been given this one goal, and who knows what it's going to do to achieve that one goal. And one of the things that it might do is kill all the humans. It might be that humans are using too many resources that could otherwise go into paper-clips, or it might be that it's worried the humans will see that it's making too many paper-clips, and it decides to actively kill humans.

Now, it's just a thought experiment, and no one really thinks that we're literally going to be killed by a paper-clip maximiser, but it points at AI alignment, or AI misalignment, where we give an AI a goal and we think it's setting out to achieve that goal. Maybe it is, but we don't really know how it's going about it. Like the example of the resumes at Amazon: it was doing the simple task of vetting resumes, but it was doing it differently from how Amazon imagined it was, and so in the end they had to switch it off.

So part of the concern is not so much about what the AI is capable of, but about what these big technology companies are capable of. What are they going to do with the AI? Are they going to produce systems that can be used for wholesale misinformation?

There are other concerns, and another one is to do with the notion of agency. One of the things about agency is that if the AI has got it, humans can be cut out of the decision-making process. We've seen that with autonomous weapons and the push to ban the use of AI in autonomous weapons. And there are a lot of different ways for an AI to get agency. A big tech company could build an AI and give it more power than it ought to have. Or terrorists could seize control of an AI, or some sort of bad actor, or anarchists, or you name it. So we've got this range of threats that people perceive from AI. At one end of the spectrum, there's the very real threat that it will discriminate. And at the other end of the spectrum, there's the distant threat that it might kill us all indiscriminately.

Lisa: John, how do we manage this existential threat? How do we ensure that we derive the benefits from AI and avoid this dystopian extreme?

John: There are a lot of experts who are now calling for regulation. In fact, even a lot of the AI companies themselves, like OpenAI, have said that this needs to be regulated. Left to their own devices, it's doubtful that AI companies can be trusted to always work in the best interests of humanity at large; there's the profit motive going on. I mean, we've seen that already.

We saw Google, for instance, scramble to release Bard even though, six months earlier, it had said it didn't really want to release Bard because it didn't think it was particularly safe. But then ChatGPT came out, and Google thought it had to respond, and then Microsoft responded. So everyone has very quickly gone from being quite worried about how harmful these things could be to releasing them as an experiment, a very large experimental test on the whole of humanity. So a lot of people are saying, well, maybe we shouldn't be doing that. Maybe we should be regulating the application of AI: not necessarily a moratorium on research into AI, but maybe stopping the roll-out of these big language models, these big AIs, until we have a sense of what the risks are.

There's an expert at the ANU, Professor Genevieve Bell. I spoke to her about this. She's an anthropologist who has studied centuries of technological change, and she said to me that we always do manage to regulate these systems. We had the railway, we had electricity, and it can be messy, and it can take a while, but we always get there. We always come up with some sort of regulatory framework that works for most people and doesn't kill us all. And she thinks that we will come up with a regulatory framework for AI.

But her concern is that this time it is a little different. It's happening at a scale and a speed that humanity has never seen before, that regulators have never seen before. And it's an open question whether we'll be able to regulate it before the damage is done.

And of course, there's another difference, which is that when the railways were rolled out, or electricity, or the internet, or mobile phones, or any of these big technological revolutions, the engineers kind of understood how those machines worked. But when it comes to AI, the engineers can't necessarily make the same claim: they don't fully understand how AI works. It can be a bit of a black box.

Explore the big issues in business, markets and politics with the journalists who know the inside story. New episodes of The Fin are published every Thursday.
