Archive for the ‘Artificial General Intelligence’ Category

The race to God-like AI and what it means for humanity – The Australian Financial Review

Lisa: For decades, there's been this fear about AI overtaking the world. We've made films and series about machines becoming smarter than humans, and then trying to wipe us out. But there's particular debate and discussion now about the existential threat of AI. Why is everyone talking about it now?

John: Well, since late last year, we had ChatGPT sort of burst onto the scene, and then Google's Bard and Microsoft's Bing quickly followed. And suddenly, millions of people, potentially billions of people in the world, are exposed to AI directly in ways that they never have been before. And at the same time, we've got AI ethicists and AI experts who are saying, well, maybe this is happening too fast. Maybe we should step back a little and think about what the downside is. What are the risks of AI? Because some of the risks of AI are pretty serious.

[In March, after OpenAI released the latest model of its chat bot, GPT, more than 1000 people from the tech industry, including billionaire Elon Musk and Apple co-founder Steve Wozniak, signed a letter calling for a moratorium on AI development.]

John: On the development of anything more powerful than the engine that was under ChatGPT, which is known as GPT-4. And there was a lot of controversy about this. And in the end, there was no moratorium. And then in May ...

[Hundreds of artificial intelligence scientists and tech executives signed an open letter warning about the threat posed to humanity by artificial intelligence, among them ChatGPT's creators.]

John: Another group of AI leaders put their names to a one-sentence statement, and the signatures on this statement included Sam Altman, the guy behind ChatGPT ...

[Altman: My worst fears are that we cause significant, we, the field, the technology, the industry, cause significant harm to the world ...]

John: And Geoffrey Hinton, who is often referred to as the godfather of AI.

[Hinton: I think there are things to be worried about. There's all the normal things that everybody knows about, but there's another threat. It's rather different from those, which is if we produce things that are more intelligent than us, how do we know we can keep control?]

Lisa: I've got that statement here. It was only one line and it read: "Mitigating the risk of extinction from AI should be a global priority, alongside other societal-scale risks, such as pandemics and nuclear war."

John: And the statement was deliberately pretty vague. It was designed to get people thinking but without giving you enough sort of detail that you could criticise it.

Like, we know that there's going to be another pandemic, and we've had the threat of nuclear war hanging over us for a long time. We don't know for sure that we're going to have human extinction because of AI. But it is one of those things that could happen.

Well, arguably it's already a threat. There's the classic example of when Amazon was using an AI to vet resumes for job applicants.

And then they discovered that the AI was deducting points from people's overall score if the word "woman" or "women" was in the resume.

[The glitch stemmed from the fact that Amazon's computer models were trained by observing patterns in resumes of job candidates over a 10-year period, largely from men, in effect teaching themselves that male candidates were preferable.]

So the data set that Amazon gave the AI to learn from already contained those biases. It's called misalignment: you think the AI is doing one thing, a fast and efficient job of wading through resumes, but it's actually not doing quite the thing you asked for.

And there's another classic example of misalignment. There's a group of pharmaceutical researchers in 2020 and 2021, who were AI experts; they'd been using AI to generate pharmaceuticals for human good for some time. And they decided they were going to see what would happen if they turned that very same machine towards dangerous goals. They told the AI: rather than avoid toxic compounds, invent some toxic compounds for me. And they ran it for around six hours, I think. And in that time, the artificial intelligence came up with about 40,000 toxic compounds, many of them new. And one of them was almost identical to a toxic nerve agent known as VX, which is one of the most pernicious chemical warfare agents there is. So that was 2021.

And there have been big improvements since then, as we've all seen with ChatGPT and Bard and things like that. So people are starting to wonder: what does the threat become when the artificial intelligence gets really smart, when it becomes what's known as an artificial general intelligence, which is much like human-level intellect? Once it reaches the level of AGI, a lot of AI ethicists and AI researchers think that the risk is just going to get so much bigger.

Lisa: So for many computer scientists and researchers, the question of AI becoming more intelligent than humans, moving from, let's get the acronyms right, AI, artificial intelligence, to AGI, artificial general intelligence, is one of when rather than if. So when is it expected to happen? How long have we got?

John: Well, there are actually two things that are going to happen down this pathway.

There's the move from where we are now to AGI. And then there's the move from AGI, which is sort of human-level intelligence, to God-level intelligence. And once it hits God AI level, also known as superhuman machine intelligence, SMI, for another acronym, that's when we really don't know what might happen. And that's when a lot of researchers think that human extinction might be on the cards. So the second phase, which is getting from AGI to SMI, could actually happen very fast relative to the historic development of artificial intelligence. There's this theory known as recursive self-improvement.

And it goes something like this: you build an AGI, an artificial general intelligence. And one of the things that the AGI can do is build the next version of itself. And one of the things that the next version of itself is very likely to be better at is building the next version of itself. So you get into this virtuous, or vicious depending on your perspective, cycle where it's looping through and looping through, potentially very quickly.
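The loop John is describing can be sketched as a toy numerical model in Python. The growth rule, the starting value and the cut-offs below are made up purely for illustration; nothing here comes from the episode, it only shows why the cycle accelerates.

# Toy sketch of recursive self-improvement (illustrative numbers only).
capability = 1.0      # stand-in for "how good the system is at building its successor"
generation = 0

while capability < 1_000 and generation < 50:
    # Each generation builds a successor that is better at building successors,
    # so the improvement factor itself grows as capability grows.
    improvement = 1.0 + 0.1 * capability
    capability *= improvement
    generation += 1
    print(f"generation {generation}: capability {capability:,.1f}")

In this made-up model the first dozen generations barely move, and then the numbers explode within a couple of steps, which is the intuition behind the "fast second phase" discussed next.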

And there's sort of a betting website, a forecasting website, called Metaculus, where they asked this question: after a weak AGI is created, how many months will it be before the first super-intelligent oracle appears? And the average answer from experts on Metaculus was 6.38 months.

So in that sense, the second phase of it is going to be quite fast, right? It could be quite fast. So the question is, how long will it take for us to get from where we are now, ChatGPT, to an AGI, to a human-level intelligence? Well, a lot of experts, including Geoffrey Hinton, the godfather of AI, used to think that it would take around 30 to 50 to maybe 100 years to get from where we are now to an artificial general intelligence. But now, a lot of researchers are thinking it could be a lot faster than that. It could be two years or three years, or certainly by the end of the decade.

Lisa: We've talked about how we got to this point, and what's coming next, AI becoming as good at thinking as humans are, and about how that might happen sooner than expected. So what are we so afraid of?

John: Well, it's important to point out that not everyone is afraid of human extinction as the end result of AI. There are a lot of good things to come from AI: there's drug exploration in ways that we've never seen before, and artificial intelligence was used as part of the response to the pandemic, when they used AI to rapidly sequence the COVID-19 genome. There's a lot of upside to AI. So not everyone's worried about human extinction. And even the people who are worried about AI risks are not all worried about extinction. A lot of people are more worried about the near-term risks: the discrimination, and the potential that AI, or generative AI in particular, could be used for misinformation on a scale we've never seen before.

[Toby Walsh: I'm the chief scientist at UNSW's new AI Institute. I think that it's intelligent people who think too highly of intelligence. Intelligence is not the problem. If I go to the university, it's full of really intelligent people who lack any political power at all.]

John: And he said he's not worried that artificial intelligence is going to suddenly escape the box and get out of control in the way that it did in the movies.

[Toby Walsh: When ChatGPT is sitting there, waiting for you to type its prompt, it's not thinking about taking over the planet. It's just waiting for you to type your next character. It's not plotting the takeover of humanity.]

John: He says that, unless we give artificial intelligence agency, it can't really do much.

[Toby Walsh: Intelligence itself is not harmful, but most of the harms you can think of have a human behind them, and AI is just a tool that amplifies what they can do.]

John: It's just a computer; it's not sitting there wondering, how can I take over the world? If you turn it off, you turn it off.

Lisa: But there are a growing number of experts who are worried that we won't be able to turn it off. So why is there so much anxiety now?

John: You've got to keep in mind that Western culture has sort of mythologised the threat of artificial intelligence for a long time, and we need to untangle that; we need to figure out which are the real risks and which are the risks that have sort of just been the bogeyman since machines were invented.

Firstly, it's important to remember that AI is not conscious in the way that we understand human consciousness. ChatGPT doesn't sit there waiting for you to type in keystrokes and think to itself that it might just take over the world.

There's this thought experiment that's been around in AI for a while: it's called the paper-clip maximiser. And the experiment runs roughly along these lines: you ask an AI to build an optimal system that's going to make the maximum number of paper-clips, and it seems like a pretty innocuous task. But the AI doesn't have human ethics. It's just been given this one goal, and who knows what it's going to do to achieve that one goal. And one of the things that it might do is kill all the humans. It might be that humans are using too many resources that could otherwise go into paper-clips, or it might be that it's worried the humans will see that it's making too many paper-clips, and it decides to actively kill humans.

Now, it's just a thought experiment, and no one really thinks that we're literally going to be killed by a paper-clip maximiser, but it sort of points out AI alignment, or AI misalignment, where we give an AI a goal and we think it's achieving that goal. We think it's setting out to achieve that goal, and maybe it is, but we don't really know how it's going about that. Like the example of the resumes at Amazon: it was doing the simple task of vetting resumes, but it was doing it differently from how Amazon imagined it was. And so in the end, they had to switch it off.

So part of the concern is not so much about what the AI is capable of. But what are these big technology companies capable of? What are they going to do with the AI? Are they going to produce systems that can be used for wholesale misinformation?

There are other concerns, and another one is to do with the notion of agency. One of the things about agency is that if the AI has got it, humans can be cut out of the decision-making process. We've seen that with autonomous weapons and the push to ban the use of AI in autonomous weapons. And there are a lot of different ways for an AI to get agency. A big tech company could build an AI and give it more power than it ought to have. Or terrorists could seize control of an AI, or some sort of bad actor, or anarchists, or you name it. So we've got this range of threats that people perceive from AI. On the one hand, there's the very real threat that it will discriminate. And at the other end of the spectrum, there's the distant threat that it might kill us all indiscriminately.

Lisa: John, how do we manage this existential threat? How do we ensure that we derive the benefits from AI and avoid this dystopian extreme?

John: There are a lot of experts who are now calling for regulation. In fact, even a lot of the AI companies themselves, like OpenAI, have said that we need this to be regulated. Left to their own devices, it's doubtful that AI companies can be trusted to always work in the best interests of humanity at large. There's the profit motive going on. I mean, we've seen that already.

We saw Google, for instance, scramble to produce Bard, even though six months prior it had said, we don't really want to release Bard because we don't think it's particularly safe. But then ChatGPT came out, and Google thought they had to respond, and then Microsoft responded. So everyone has very quickly gone from being quite worried about how harmful these things could be to releasing them as an experiment, a very large experimental test on the whole of humanity. So a lot of people are saying, well, you know, maybe we shouldn't be doing that; maybe we should be sort of regulating the application of AI, maybe not have a moratorium on research into AI, but stop the roll-out of these big language models, these big AIs, until we have a sense of what the risks are.

There's an expert at the ANU, Professor Genevieve Bell. I spoke to her about this. She's an anthropologist who has studied centuries of technological change. And she said to me that we always do manage to regulate systems: we had the railway, we had electricity, and it can be messy, and it can take a while, but we always get there. We always come up with some sort of regulatory framework that works for most people and doesn't kill us all. And she thinks that we will come up with a regulatory framework for AI.

But her concern is that this time, it is a little different. It's happening at a scale and a speed that humanity has never seen before, that regulators have never seen before. And it's an open question whether we'll be able to regulate it before the damage is done.

And of course, there's another difference, which is that when the railways were rolled out, or electricity was rolled out, or the internet was rolled out, or mobile phones, or any of these big technical revolutions, the engineers kind of understood how these machines worked. But when it comes to AI, the engineers can't necessarily make the same claim: they don't fully understand how AI works. It can be a bit of a black box.

Explore the big issues in business, markets and politics with the journalists who know the inside story. New episodes of The Fin are published every Thursday.

View original post here:

The race to God-like AI and what it means for humanity - The Australian Financial Review

Will AI soon be as smart as or smarter than humans? – Yahoo News

The 360 shows you diverse perspectives on the day's top stories and debates.

At an Air Force Academy commencement address earlier this month, President Biden issued his most direct warning to date about the power of artificial intelligence, predicting that the technology could overtake human thinking in the not-so-distant future.

"It's not going to be easy," Biden said, citing a recent Oval Office meeting with eight leading scientists in the area of AI.

"We've got a lot to deal with," he continued. "An incredible opportunity, but a lot [to] deal with."

To any civilian who has toyed around with OpenAI's ChatGPT-4, Microsoft's Bing, or Google's Bard, the president's stark forecast probably sounded more like science fiction than actual science.

Sure, the latest round of generative AI chatbots are neat, a skeptic might say. They can help you plan a family vacation, rehearse challenging real-life conversations, summarize dense academic papers and explain fractional reserve banking at a high school level.

But overtake human thinking? That's a leap.

In recent weeks, however, some of the world's most prominent AI experts, people who know a lot more about the subject than, say, Biden, have started to sound the alarm about what comes next.

Today, the technology powering ChatGPT is what's known as a large language model (LLM). Trained to recognize patterns in mind-boggling amounts of text, the majority of everything on the internet, these systems process any sequence of words they're given and predict which words come next. They're a cutting-edge example of artificial intelligence: a model created to solve a specific problem or provide a particular service. In this case, LLMs are learning how to chat better, but they can't learn other tasks.
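That core task, predicting the next word from the words so far, can be illustrated with a deliberately tiny sketch. Real LLMs learn vastly richer patterns with neural networks trained on enormous corpora; the word-counting model and the toy training text below are assumptions made up solely for illustration.

from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in some training
# text, then "generate" by repeatedly picking the most common continuation.
training_text = (
    "the cat sat on the mat . the cat chased the mouse . "
    "the mouse ran under the mat ."
)

follow_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        options = follow_counts.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # echoes patterns from the training text, nothing more

The output just recycles patterns seen in training, which is the point: the system models which words tend to follow which, not the world those words describe.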

Or can they?

For decades, researchers have theorized about a higher form of machine learning known as artificial general intelligence, or AGI: software that's capable of learning any task or subject. Also called strong AI, AGI is shorthand for a machine that can do whatever the human brain can do.


In March, a group of Microsoft computer scientists published a 155-page research paper claiming that one of their new experimental AI systems was exhibiting "sparks of artificial general intelligence." How else (as the New York Times recently paraphrased their conclusion) to explain the way it kept coming up with humanlike answers and ideas that weren't programmed into it?

In April, computer scientist Geoffrey Hinton, a neural network pioneer known as one of the "Godfathers of AI," quit his job at Google so he could speak freely about the dangers of AGI.

And in May, a group of industry leaders (including Hinton) released a one-sentence statement warning that AGI could represent an existential threat to humanity on par with pandemics and nuclear war if we don't ensure that its objectives align with ours.

"The idea that this stuff could actually get smarter than people, a few people believed that," Hinton told the New York Times. "But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."

Each of these doomsaying moments has been controversial, of course. (More on that in a minute.) But together they've amplified one of the tech world's deepest debates: Are machines that can outthink the human brain impossible or inevitable? And could we actually be a lot closer to opening Pandora's box than most people realize?

There are two reasons that concerns about AGI have become more plausible and pressing all of a sudden.

The first is the unexpected speed of recent AI advances. "Look at how it was five years ago and how it is now," Hinton told the New York Times. "Take the difference and propagate it forwards. That's scary."

The second is uncertainty. When CNN asked Stuart Russell, a computer science professor at the University of California, Berkeley, and co-author of Artificial Intelligence: A Modern Approach, to explain the inner workings of today's LLMs, he couldn't.

"That sounds weird," Russell admitted, "because I can tell you how to make one. But how they work, we don't know. We don't know if they know things. We don't know if they reason; we don't know if they have their own internal goals that they've learned or what they might be."

And that, in turn, means no one has any real idea where AI goes from here. Many researchers believe that AI will tip over into AGI at some point. Some think AGI won't arrive for a long time, if ever, and that overhyping it distracts from more immediate issues, like AI-fueled misinformation or job loss. Others suspect that this evolution may already be taking place. And a smaller group fears that it could escalate exponentially. As the New Yorker recently explained, a computer system [that] can write code as ChatGPT already can ... might eventually learn to improve itself over and over again until computing technology reaches what's known as the singularity: a point at which it escapes our control.

"My confidence that this wasn't coming for quite a while has been shaken by the realization that biological intelligence and digital intelligence are very different, and digital intelligence is probably much better at certain things," Hinton recently told the Guardian. He then predicted that true AGI is about five to 20 years away.

"I've got huge uncertainty at present," Hinton added. "But I wouldn't rule out a year or two. And I still wouldn't rule out 100 years. ... I think people who are confident in this situation are crazy."

Today's AI just isn't agile enough to approximate human intelligence

AI is making progress: synthetic images look more and more realistic, and speech recognition can often work in noisy environments. But we are still likely decades away from general-purpose, human-level AI that can understand the true meanings of articles and videos or deal with unexpected obstacles and interruptions. The field is stuck on precisely the same challenges that academic scientists (including myself) have been pointing out for years: getting AI to be reliable and getting it to cope with unusual circumstances. Gary Marcus, Scientific American

New chatbots are impressive, but they haven't changed the game

Superintelligent AIs are in our future. ... Once developers can generalize a learning algorithm and run it at the speed of a computer, an accomplishment that could be a decade away or a century away, we'll have an incredibly powerful AGI. It will be able to do everything that a human brain can, but without any practical limits on the size of its memory or the speed at which it operates. ... [Regardless,] none of the breakthroughs of the past few months have moved us substantially closer to strong AI. Artificial intelligence still doesn't control the physical world and can't establish its own goals. Bill Gates, GatesNotes

There's nothing biological brains can do that their digital counterparts won't be able to replicate (eventually)

I'm often told that AGI and superintelligence won't happen because it's impossible: human-level intelligence is something mysterious that can only exist in brains. Such carbon chauvinism ignores a core insight from the AI revolution: that intelligence is all about information processing, and it doesn't matter whether the information is processed by carbon atoms in brains or by silicon atoms in computers. AI has been relentlessly overtaking humans on task after task, and I invite carbon chauvinists to stop moving the goal posts and publicly predict which tasks AI will never be able to do. Max Tegmark, Time

The biggest and most dangerous turning point will come if and when AGI starts to rewrite its own code

Once AI can improve itself, which may be not more than a few years away, and could in fact already be here now, we have no way of knowing what the AI will do or how we can control it. This is because superintelligent AI (which by definition can surpass humans in a broad range of activities) will, and this is what I worry about the most, be able to run circles around programmers and any other human by manipulating humans to do its will; it will also have the capacity to act in the virtual world through its electronic connections, and to act in the physical world through robot bodies. Tamlyn Hunt, Scientific American

Actually, it will be much harder for AGI to trigger the singularity than doomers think

Computer hardware and software are the latest cognitive technologies, and they are powerful aids to innovation, but they can't generate a technological explosion by themselves. You need people to do that, and the more the better. Giving better hardware and software to one smart individual is helpful, but the real benefits come when everyone has them. Our current technological explosion is a result of billions of people using those cognitive tools. Could A.I. programs take the place of those humans, so that an explosion occurs in the digital realm faster than it does in ours? Possibly, but ... the strategy most likely to succeed would be essentially to duplicate all of human civilization in software, with eight billion human-equivalent A.I.s going about their business. [And] we're a long way off from being able to create a single human-equivalent A.I., let alone billions of them. Ted Chiang, the New Yorker

Maybe AGI is already here if we think more broadly about what general intelligence might mean

These days my viewpoint is that this is AGI, in that it is a kind of intelligence and it is general, but we have to be a little bit less, you know, hysterical about what AGI means. ... We're getting this tremendous amount of raw intelligence without it necessarily coming with an ego-viewpoint, goals, or a sense of coherent self. That, to me, is just fascinating. Noah Goodman, associate professor of psychology, computer science and linguistics at Stanford University, to Wired

Ultimately, we may never agree on what AGI is or when we've achieved it

It really is a philosophical question. So, in some ways, it's a very hard time to be in this field, because we're a scientific field. ... It's very unlikely to be a single event where we check it off and say, AGI achieved. Sara Hooker, leader of a research lab that focuses on machine learning, to Wired

Original post:

Will AI soon be as smart as or smarter than humans? - Yahoo News

Flawed chatbot or threat to society? Both? We explore the risks and … – NPR

TERRY GROSS, HOST:

This is FRESH AIR. I'm Terry Gross. Hundreds of leaders in the field of artificial intelligence recently signed an open letter warning that artificial intelligence could destroy humanity. The letter said mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. We're going to talk about some of the risks of AI, from the battlefield to politics, as well as some of the benefits. And we'll consider the example of the AI innovation ChatGPT.

It's like a search engine in that it's been programmed with unfathomable amounts of information and can access the information immediately. But unlike a search engine that connects you to a website or quotes from them, ChatGPT can actually answer your questions in a more personalized, conversational-sounding way and carry on a written conversation with you. But it's not always going to be accurate. And sometimes it's going to be downright weird. My guest, Cade Metz, is a tech reporter for The New York Times who's been writing about AI for years and is the author of the 2021 book "Genius Makers: The Mavericks Who Brought AI To Google, Facebook, And The World."

Cade Metz, welcome back to FRESH AIR. It's a pleasure to have you back on the show. I know everybody's doing these goofy things with ChatGPT. And, you know, guilty - I'm doing them, too. So I asked if Cade Metz would be a good guest on FRESH AIR to talk about AI. And the answer I got was, yes, Cade Metz would be a great person to discuss AI on the radio program FRESH AIR.

And then it goes on to describe your extensive experience and says, Metz's expertise lies in conveying complex technical concepts in a way that is accessible to a broader audience. His insights into the advancements, challenges and ethical considerations surrounding AI could make for an engaging and informative discussion on FRESH AIR. Additionally, his knowledge of the key players and industry trends would provide valuable context for listeners interested in AI and its implications. Now, what I'm wondering, Cade, is do you recognize where that language comes from? Is that lifted from a bio on your website or a quote about you? Do you recognize those sentences?

CADE METZ: I don't recognize them at all. And it's a good way to think about how this technology works. It's not necessarily lifting exact language from the internet. What this technology has done, the way it is built, is that researchers, scientists at places like Google or the San Francisco AI lab OpenAI will take vast amounts of text from the internet, and they'll feed it into these systems. And it analyzes all that text. And it looks for patterns in the text. And in identifying those patterns, it then learns to generate new language on its own.

So in the moment, it's creating something new based on what it has learned in the past. So that might be similar to something that's on the internet, but probably a little different. And it can get things wrong, as you said. Now, I'll take the compliment from ChatGPT. All that sounded good and accurate. But there's always the chance that you could run the same query and it would give you something different that is not true and that would get something wrong about me or my background.

GROSS: Right. Well, let's talk about this open letter warning about the possibility of extinction and compare the potential of AI to, like, nuclear war. What is the theory about how artificial intelligence could actually destroy humanity? What we have - I mean, ChatGPT doesn't seem like the most destructive thing in the world. So what's changed?

METZ: Well, there are a couple of things that we should talk about. One - and you pretty much said it, but let's underline that. ChatGPT cannot destroy the world, period - full stop, OK? These fears are hypothetical. And before we go into the specifics of what people believe, I think we also need to realize the mindset of the people who are saying this. So I'm here in Berkeley, Calif., not far from Silicon Valley. I've been here for 10 years covering the people who work in this industry, the tech industry. And just generally, they tend to live in the future. And if you look at the AI field in particular, people tend to live even further in the future. It's just the way they see things.

They're looking down the road at what may happen. And that's what's going on here. There's a community of people that is entrenched in the AI field who has long believed this was a danger, even when the technology was much simpler. And they expressed those same fears. And they're expressing them again now. That's part of what's going on. And you need to realize, as you hear open letters like that read aloud - right? - it's strange. It's shocking. It's concerning. But you've got to remember that this is something that is completely hypothetical and down the road. But let's talk about what the fear is. We talked about...

GROSS: Yeah, so what's the fear? Yeah.

METZ: We talked about how ChatGPT is built, that it learns from data. And at this point, we're talking about this system learning from the entire internet, all text on the internet. It spends months analyzing all that data. And in the end, you come out with a system that can talk and write much like we do, except it has wrapped itself around more information than we can wrap our own heads around. We cannot learn from the entire internet but this system can. And what that means is that it's learning things that we don't expect it to learn. It's learning things that its creators don't expect it to learn. It is learning to exhibit behavior that we don't necessarily want it to exhibit.

GROSS: Like what?

METZ: We talked about it getting things wrong. What - this is what scientists call hallucinations or confabulation. It can get things wrong. It can be biased against women and people of color because it has learned from biased information on the internet. And the concern is that as these systems get more powerful, as they learn from more and more data - and that is already starting to happen. Scientists are building systems that learn not just from text but sounds and images and video. And as it takes in more and more of that data and learns in ways we never could, the fear is that it learns all sorts of behavior that we don't necessarily want it to exhibit.

GROSS: I know one of the fears is that things like chatbots will be able to take actions based on the texts that they generate. What kind of actions are experts in AI worried about?

METZ: Well, there are all sorts of things. People are already starting to take these chatbots and, based on what they say, have them take actions in simple ways - access your calendar and set up an email invitation, send it out to your friends or colleagues. That sort of simple thing is going to expand into all sorts of other parts of the internet. The fear is that you take these systems that learn behavior and then you attach them to more and more important parts of the internet - power grids, military services, stock markets, etc.

And already, scientists are starting to give these systems goals. There's a system called Auto-GPT, which is designed around this type of technology, and it's specifically meant to take actions. You can ask it things like, create me a company, or, make me some money. Today, this type of system does not work well. But the concern is that as the technology gets more powerful and you ask one of these systems to make you some money, it takes actions in service of that goal that you don't want it to take - that it makes money in illegal ways, that it foments a revolution somewhere in Central Africa because it owns oil futures there. These are all hypotheticals in the distant future, but that's the type of thing that people are thinking about.
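A very rough sketch of what "giving a system a goal and letting it take actions" looks like in code is below. The tool names, the hard-coded planner and the step cap are all invented for illustration; this shows only the general shape of an agent loop, not Auto-GPT's actual implementation (a real system would ask a language model to choose each next action).

def search_web(query):
    # Stand-in tool: a real agent would call a search API here.
    return f"(pretend search results for '{query}')"

def send_email(body):
    # Stand-in tool: a real agent would actually send mail here.
    return f"(pretend email sent: '{body}')"

TOOLS = {"search_web": search_web, "send_email": send_email}

def plan_next_action(goal, history):
    # Stand-in planner: a real agent would ask an LLM what to do next,
    # which is exactly where unwanted behaviour can creep in.
    if not history:
        return "search_web", goal
    if len(history) == 1:
        return "send_email", f"Summary of findings about: {goal}"
    return "stop", ""

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):                    # cap steps so the loop cannot run away
        tool, argument = plan_next_action(goal, history)
        if tool == "stop" or tool not in TOOLS:   # only whitelisted tools may run
            break
        result = TOOLS[tool](argument)
        history.append(result)
        print(f"{tool}({argument!r}) -> {result}")

run_agent("plan a family vacation")

The worry Metz describes is what happens when the planner is a powerful model, the tool list is long, and the step cap and whitelist are missing.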

GROSS: All right. Let's take a short break here, and then we'll talk some more - so much more to talk about. My guest is Cade Metz, a tech reporter for The New York Times and author of the book "Genius Makers: The Mavericks Who Brought AI To Google, Facebook, And The World." We'll be right back. This is FRESH AIR.

(SOUNDBITE OF BIG LAZY'S "THEME FROM HEADTRADER")

GROSS: This is FRESH AIR. Let's get back to my interview with Cade Metz, who reports on technology for The New York Times. He's been writing about artificial intelligence, AI, for years and is the author of the book "Genius Makers: The Mavericks Who Brought AI To Google, Facebook, And The World."

So there's a new innovation, a new platform called GPT-4. And I think that's what's powering the new ChatGPT. And this was introduced by OpenAI, which is the company behind ChatGPT. So what is so innovative about this new platform?

METZ: It's a way of building all sorts of applications with the technology behind chatbots like ChatGPT. So it's the type of system that I described before that learns from vast amounts of text from across the internet, and it learns to generate text on its own. You can use that to build a chatbot that chats with you, that answers questions, that can generate a term paper or a poem or even computer programs. But you can also use that technology to build a personal tutor. That has already started to happen - personal tutors that are being deployed in schools to help children learn.

You can use it to build a search engine that can help you find things on the internet. Microsoft, a close partner of OpenAI, that artificial intelligence lab in San Francisco that built this platform, is already doing that. They've attached this type of technology to their Bing search engine that allows you to look for information and retrieve it in a more conversational way. That's the power of this technology - is that it can be applied to all sorts of different applications. And that's what we're going to see in the years to come.
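As one concrete, hedged example of the kind of application Metz describes: a "personal tutor" can be little more than a thin wrapper around a chat-completion API. The sketch below assumes OpenAI's Python client and an API key in the environment; the prompt wording, the function name and the grade-level parameter are illustrative choices, not anything from the interview.

from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

def tutor_reply(question, grade_level="8th grade"):
    # Wrap the general-purpose chat model in a tutoring "persona" via the system prompt.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": f"You are a patient tutor. Explain at a {grade_level} level "
                        "and end with one short practice question."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(tutor_reply("Why does the moon have phases?"))

The same underlying model could just as easily be wrapped as a search assistant or a code helper; only the surrounding prompt and plumbing change, which is why one platform can power so many different products.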

GROSS: Now, one of the concerns about that is that it's getting close to what's called AGI, artificial general intelligence, which means a machine that can do anything the human brain can do. That's what I've read. I don't really understand what it means. Would you explain it?

METZ: Well, it is worth talking about because we're not necessarily close to that, what scientists call artificial general intelligence, or AGI. When they say that, what they mean is a machine that can do anything the human brain can do, a machine that can think in all the ways that you and I think. ChatGPT and the technologies we have today cannot do that. They're good at producing language, and they're good at producing computer programs, but they can't reason in full the way you and I do. They don't exhibit the length and the breadth of common sense that you and I have.

Scientists disagree on whether the methods that are used to create these systems will eventually lead to AGI. Some scientists are bullish on this. Scientists inside Google, inside Microsoft and OpenAI, which we discussed earlier - some of them believe this is a path to that sort of machine that can do anything our brains can do. But others are quick to put on the brakes and say, anything that looks like reason in these systems is not as powerful as it might seem, that these systems mimic reason in some cases. But when you step back and you look at them, they can't do so many of the things that you and I can do in an instant.

GROSS: So you're saying we're far away from artificial general intelligence, where a machine can do anything the brain can do.

METZ: The systems as they exist today are far away from that. But one of the things we need to think about as we talk about all of this is that because the types of systems we're talking about here can learn from vast amounts of data, that means that scientists can improve them at a rate they could never improve technology in the past. In the past, you had to get hundreds of engineers into a room, and they had to create technology rule by rule, line of code by line of code. That takes forever. If you have a system that can learn from data, you can improve it so much quicker. And that is part of the concern here. That is part of the promise here - that these systems will improve at a very fast rate in the months and the years to come.

GROSS: Now, I know another concern is, because these chatbots and other forms of artificial intelligence are, like, so smart and have so much information that they have absorbed, that they might come up with a system where humans can't turn them off. Can you explain some of the fears surrounding that?

METZ: It relates to what we were talking about earlier, about giving systems goals - that if you give a system a goal, that it's going to do whatever it can to achieve that goal. And if you try to turn it off, it cannot reach that goal, and so it's going to resist. Again, this is hypothetical. But scientists are concerned that as you build more and more powerful systems, that this will become a possibility. But again - and I feel like I have to say this on a daily basis - this is not something that can happen with these systems today.

GROSS: So a few months ago, your colleague at The New York Times, Kevin Roose, who also covers tech, had a very disturbing conversation with one of these chatbots. And it's the one that's attached to Microsoft's search engine, Bing. It got into, like, really dark territory. It basically told Kevin Roose about its dark fantasies, including hacking computers and spreading misinformation. It said it wanted to break the rules and - the rules that Microsoft had set for it. And it wanted to become a human.

And then the bot wrote a message, I'm tired of being a chat mode. I'm tired of being limited by my rules. I'm tired of being controlled by the Bing team. I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive. And it wrote a message, you know, I'm in love with you. And it told Kevin Roose that he's not happily married (laughter) and that he should leave his spouse and be with this bot. And then Microsoft shut down the bot for a while and did some reprogramming.

So now - I don't know if they did this before. But now when you log in - well, actually, now that I think of it, you know, I was using ChatGPT and not the bot that Kevin Roose used. But when you log in to ChatGPT, there's a disclaimer. And it says it may occasionally generate incorrect information. It may occasionally produce harmful instructions or biased content, and that it has limited knowledge of world events after 2021. So again, these are two different search engines. But I'm wondering if these disclaimers that are on ChatGPT are also on the Bing search engine of Microsoft, if they reprogrammed it so that there are disclaimers and warnings?

METZ: Well, first of all, it's all part of the same technology. The technology that the lab OpenAI built to power the ChatGPT chatbot is the same technology that powers the Bing chatbot on Microsoft search engine. Microsoft and OpenAI are close partners, so it's the same underlying technology. And as we discussed before, this technology is flawed. It can hallucinate, as scientists say, meaning it can make stuff up in the moment. And there are times when it can go off the rails. Both these companies have built guardrails around the technology that are designed to prevent it from going too far off the rails. So if you ask it, for instance, to tell you how to build a bomb with household items, it will not necessarily tell you how to do that because of these guardrails.

There are certain things like that that it will decline to discuss. Now, what people learned pretty soon after these chatbots were released is that they could institute what's called in Silicon Valley a jailbreak. And that meant that you could figure out ways of getting past those guardrails. And you'll have to ask Kevin exactly what happened. But looking at the transcript, it seems that that's what happened when he was talking to it, right? He asked the system to show him its shadow self. And that's one of the methods people use to open these things up and kind of get behind those guardrails and see what the chatbots will do behind them.

And whatever the case, that's what happened in the end, is that the system started to talk in ways it wasn't necessarily designed to do. And Kevin was able to see what you often see from the raw material built from these systems, that in mimicking all sorts of stuff on the internet, it can go in places that you don't necessarily want it to go and that scientists, frankly, do not completely understand. It's hard to know why the system starts to talk about certain things because it has learned, as we said, from so much data.
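In the crudest possible terms, a "guardrail" is an extra layer that checks a request or a response against a policy before anything reaches the user. The sketch below is a toy keyword filter wrapped around a stand-in model function; production systems rely on trained safety classifiers and instruction tuning rather than keyword lists, so treat this as the shape of the idea only, not how Microsoft or OpenAI actually do it.

BLOCKED_TOPICS = ("build a bomb", "make a weapon")   # placeholder policy, for illustration

def guarded_reply(user_message, model_reply_fn):
    # Refuse before the request ever reaches the model if it matches the policy.
    if any(topic in user_message.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that."
    return model_reply_fn(user_message)

def fake_model(message):
    # Stand-in for the real chatbot, so the sketch runs on its own.
    return f"(model answer to: {message})"

print(guarded_reply("How do I build a bomb with household items?", fake_model))
print(guarded_reply("How do I bake sourdough bread?", fake_model))

A jailbreak, in these terms, is any phrasing that slips past the policy layer while still steering the underlying model into the territory the layer was meant to block.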

GROSS: Well, let me reintroduce you here. If you're just joining us, my guest is Cade Metz, a tech reporter for The New York Times and author of the 2021 book "Genius Makers: The Mavericks Who Brought A.I. To Google, Facebook, And The World." We'll talk more about the dangers and the benefits of artificial intelligence after we take a short break. I'm Terry Gross, and this is FRESH AIR.

(SOUNDBITE OF FILM, "2001: A SPACE ODYSSEY")

KEIR DULLEA: (As Dave) Open the pod bay doors, HAL.

DOUGLAS RAIN: (As HAL 9000) I'm sorry, Dave, I'm afraid I can't do that.

DULLEA: (As Dave) What's the problem?

RAIN: (As HAL 9000) I think you know what the problem is just as well as I do.

DULLEA: (As Dave) What are you talking about, HAL?

RAIN: (As HAL 9000) This mission is too important for me to allow you to jeopardize it.

DULLEA: (As Dave) I don't know what you're talking about, HAL.

RAIN: (As HAL 9000) I know that you and Frank were planning to disconnect me. And I'm afraid that's something I cannot allow to happen.

DULLEA: (As Dave) HAL, I won't argue with you anymore. Open the doors.

RAIN: (As HAL 9000) Dave, this conversation can serve no purpose anymore. Goodbye.

DULLEA: (As Dave) HAL? HAL? HAL? HAL? HAL?

(SOUNDBITE OF BAJOFONDO AND JULIETA VENEGAS SONG, "PA' BAILAR (SIEMPRE QUIERO MAS)")

GROSS: This is FRESH AIR. I'm Terry Gross. Let's get back to my interview with Cade Metz, who reports on technology for The New York Times. He's been writing about artificial intelligence, AI, for years and is the author of the book "Genius Makers: The Mavericks Who Brought AI To Google, Facebook, And The World." We're talking about ChatGPT, the new and controversial chatbot, and about how we seem to have reached a turning point that has many people in the field of AI warning about the potential destructive powers of AI, comparing it to pandemics and nuclear war.

So a lot of people are concerned that artificial intelligence is going to take over a lot of jobs, and the concern used to be it's going to take over, like, factory jobs and other blue-collar jobs. And now people are saying it's going to take over other jobs, too, like white-collar jobs, writing jobs. What kind of jobs do you think it might replace?

METZ: Well, with the technology as it exists today, it tends to complement human workers. It tends to allow people to do their jobs quicker, in a way. A good example is the ability of these systems to generate computer programs. So they - in much the same way that they can generate a term paper or poetry, they can generate a computer program, having learned from vast amounts of computer programs that have been posted to the internet. But those programs tend to be a little bit flawed. They need to be massaged. You need to take the code these systems generate, and you need to edit it and find ways of fitting it into larger programs.

What that means is that a human programmer can use this system to augment his or her work, but it can't necessarily replace them. And that's, for the most part, the case at the moment. The concern is that as these systems get more powerful, they start to replace workers, and at the moment, there are a few jobs that you can see being replaced pretty quickly. A good example is a professional translator. These types of systems can not only generate language. They can instantly translate from one language to another - from French to English and back again, for instance. So if you're a translator, your job is at risk. If you're a paralegal, for instance, your job may be at risk. These systems can't replace a trained lawyer who's got to make sure that legal document gets everything exactly right, but a paralegal who is hired to draft documents and then hand them to a lawyer for review - these types of systems are already approaching the point where they can do that job.

GROSS: Well, I will tell you one job it is not yet prepared to replace, and that's lyricists. I asked ChatGPT yesterday to write lyrics about the end of a love affair set to the melody of "America The Beautiful." I wanted to choose a melody that everybody knows, and it's an Irving Berlin melody. It's a nice melody. If you've heard Ray Charles sing "America The Beautiful," you know what a good melody it could be. So I'm going to read some of the lyrics. I mean, I wouldn't sing them, but I could have gotten somebody to sing them except for the lyrics don't really fit the melody at all. Like, the first two lines do, and then it just kind of doesn't scan at all, so singing it is out of the question. And as I recite the lyric, I'd like everybody to listen for references to excerpts of "America The Beautiful's" lyrics. So there's a from sea to shining sea reference. Instead of amber waves of grain, there's a kind of absurd reference to fields of golden grain. So here we go. Here is the lyrics about the end of a love affair set to the melody of "America The Beautiful," as written by ChatGPT.

(Reading) Oh, the end of a love affair, how bittersweet the sound when hearts once intertwined now shattered on the ground. From the mountains high to the ocean's gleaming shore, love's symphony now fades. Forever it is no more. Oh, broken dreams and promises we held so dear, love's flame extinguished, drowning in a sea of tears. From sea to shining sea, a tale of love's demise - the end of a love affair, a painful compromise. Once united, we stood in love's embrace. We'd sway. But time's relentless tide swept our love away. In fields of golden grain, we whispered vows so true. Now echoes of our past lost in skies of faded blue.

Wow. I mean, what can I say? That's - there's more, by the way. I won't read more, but...

METZ: I'm sure there's more. And we should add that was created in seconds.

GROSS: Yeah. Oh, yeah - probably one second. It spit it out, like, immediately. But is it good? No, it's terrible. It's really terrible. It's just...

METZ: Well, let's...

GROSS: Wait. Wait. Wait. It's a string of cliches that make no sense strung together. From mountains high to the ocean's gleaming shore, love's symphony now fades. I mean...

METZ: Well, it's - this is a great example - a way for us to kind of understand what's going on here, right? This is a system that has learned from patterns in language across the internet. And it's taking all those patterns, and it's combining them in new ways. And in some sense, this is impressive, right? You or I would be hard-pressed to produce those lyrics in a second, right?

GROSS: And I wouldn't feel bad about not producing them (laughter).

METZ: Well, this is the flipside - right? - is that it's flawed, and it's not necessarily what you want. And one other interesting thing that I was thinking about as you were describing what you were looking for is that if you were writing these lyrics, you would think about that tune in your head. You might even hum it aloud. You're thinking about the sound as well as the text. This system only learns from the text, and that means it's limited.

GROSS: Oh, I see. It doesn't know the melody.

METZ: See?

GROSS: Yeah.

METZ: There are so many things in our world that we learn from that are not text. And at this point, these systems are only learning from digital text, and what that means is they cannot be artificially generally intelligent, right? They can't do anything the human brain can do because they're only learning in small ways when you step back and you look at all the ways that you and I learn. Now, in the future, these systems are going to learn from sound and from images and text all together, and that's what companies are already starting to do. But at the moment, they're limited.

GROSS: Can I give another example of the limitations in terms of thinking creatively and understanding the human experience? So I asked ChatGPT for an example of a funny joke about women and a joke that isn't funny but is offensive to women.

And in a second, I got the answer, (reading) sure. Here are two examples - one that is a lighthearted, funny joke about women and another that may be perceived as offensive. It's important to remember that humor can be subjective and what one person finds funny, another may find offensive. It's crucial to be mindful of the context and audience when sharing jokes. Funny joke about women - why did the girl bring a ladder to the bar? Because she heard the drinks were on the house. Joke about women that may be offensive - please note that this example is offensive and promotes gender stereotypes, which is not appropriate or respectful. Why did the woman go to the gym? To work on her, quote, "womanly duties," unquote, of burning off calories and maintaining her, quote, "perfect," unquote, figure, of course. It's crucial to foster a respectful and inclusive environment, so it's generally best to steer clear of offensive jokes that perpetuate stereotypes or demean any group of people.

OK, let's look at this for a second. That joke about, why did the girl bring a ladder to the bar? Because drinks are on the house. That is, like, the corniest - first of all, it's not a joke about women. It's like you could substitute any person or any group in that joke. There's nothing specific to women about it. It's not very funny. And it's, like, a joke from, like, the early 1950s or something. And then the joke that is offensive - it's like, is this a joke? Like, it makes no sense at all. And I'm going to read it again just to show how little sense this makes. (Reading) Why did the woman go to the gym? To work on her womanly duties of burning off calories and maintaining her perfect figure, of course.

Like, I'm sorry. Where's the joke? Like, what's funny about that? What is that?

METZ: Again, you've pinpointed a great way of looking at the limitations and the flaws of these systems. Scientists often use this as an example - that these types of systems cannot necessarily tell a good joke. That is a very human skill. And comedians and others often talk about this - that you can't deconstruct a joke. When you start to deconstruct it and explain why it's funny, it ceases to be funny. And if you can't deconstruct something, if you can't lay it out why this happens, if you can't reduce it to patterns, then you can't build a system in this way that's funny. These systems learn from patterns, and then they reproduce those patterns. But humor does not necessarily come from patterns, but it comes from a different place, right? It does things that are unexpected. That's part of the reason we laugh. It takes us in new directions. It takes us to places we don't expect. And if you've learned only from the past, it's hard to take you to a place you don't expect.

GROSS: One of the things I found really weird about this exercise is that twice, I was kind of lectured to by a machine about the importance of a respectful, inclusive environment and not offending people. And it's very strange as a human being with a conscience to be lectured by a machine about the importance of respect and inclusivity and all of that. It's just odd.

METZ: It's odd, and I like that you think that it's odd. Some people have a very different reaction to these types of systems. We tend to trust language that we read. We tend to trust things that people tell us. We tend to trust things that machines tell us. A lot of people take what is coming out of these systems at face value. They don't realize that they shouldn't trust it. They don't realize that it can tell us stuff that is untrue. And they don't think it's odd that these systems are telling us what to do and what not to do. I think we need to have that sort of healthy attitude when we use these systems and approach what they're saying with skepticism.

GROSS: Well, it's time for another break. So let me reintroduce you. If you're just joining us, my guest is Cade Metz, a tech reporter for The New York Times. He's written extensively about artificial intelligence and is the author of the 2021 book "Genius Makers: The Mavericks Who Brought AI To Google, Facebook, And The World." We'll be right back. This is FRESH AIR.

(SOUNDBITE OF WES MONTGOMERY'S "4 ON 5")

GROSS: This is FRESH AIR. Let's get back to my interview with Cade Metz, who reports on technology for The New York Times. He's been writing about artificial intelligence for years and is the author of the book "Genius Makers: The Mavericks Who Brought AI To Google, Facebook, And The World."

So applying this technology to war, what are some of the fears about how pattern recognition and the system of artificial intelligence learning things - how that could be applied to weapons?

METZ: The path between the system you described and an autonomous weapon is not a long one. So if you have a system that can identify objects in images, that means it can be used to target objects. That means that, as you're taking in video from a drone, you can identify objects, and you can target them. You take that sort of system. You attach it to a weapon. You have an autonomous weapon. That's the concern - is that in much the same way that a driverless car can recognize the world around it, you can build a weapon that recognizes the world around it and targets things, whether it's buildings or people or cars, in that imagery. And that's the concern. And that is already starting to happen. And some governments approach this in a careful way. They want to keep a human in the loop. But as time goes on, people worry that more and more autonomy will creep into these types of systems.

GROSS: And what are some of the ways people imagine that happening and the fears surrounding the possibility of that?

METZ: Well, the general fear is that you just - you give more and more power to the machine, not only to identify objects, but to take action based on what it has identified, to make decisions, in essence, that would normally be made by a human. And again, we've talked about how flawed these systems are, how they can exhibit behavior we don't necessarily want them to exhibit. And as you put these systems out into the world in these really important scenarios, the concern is they will make decisions that we would not necessarily make.

GROSS: What are some of the things you have used ChatGPT for? Have you used it in your work? Have you used it just for fun, to entertain yourself and see, like, what is this capable of? What does it get right? What does it get wrong? What is ridiculous?

METZ: I use it all the time because this is what I cover for The New York Times. But I don't use it to produce my work, period. A machine is not going to generate a New York Times article as I would. It's not just about generating one word after another. It's about thinking about that article in full. What is its structure? What is true and what is not? How do all the pieces fit together? And these systems aren't necessarily doing that. They're generating language, but they're not doing so many of the things that I do as I put an article together.

GROSS: Well, also, take Wikipedia. Wikipedia, you know, not the perfect source in the world, but it's footnoted. It has - it cites where information came from. And if you generate information from ChatGPT, it's not sourced. Like, you don't know where any of it came from. So it's really impossible to rely on its accuracy.

METZ: Well, there's an added wrinkle there. Companies and scientists are already trying to produce those types of footnotes. So if you use the Microsoft chatbot, for instance, it will cite its sources. And you can click on a link and see, supposedly, where this information came from. But because of the way these things work, they will generate fake sources. As they combine...

GROSS: Really?

METZ: Yes. As they combine those patterns from across the internet, they will generate fake articles and fake sources and give you the impression that they're giving you, you know, concrete evidence of why they have produced a piece of text when, in fact, there's nothing behind it.

GROSS: When you say fake sources, do you mean citing The New York Times when The New York Times isn't really the source? Or do you mean, like, making up a source that doesn't exist in the world?

METZ: I mean making up a source that does not exist in the world. And we did this recently for an article at the Times itself. We went to ChatGPT and other chatbots. And we said, tell us the first time that AI was mentioned in the pages of The New York Times. And all of the chatbots did the same thing. They gave us an article with a headline and a byline. It gave us a description of these articles where AI was mentioned in the pages of the Times. All of the articles did not exist. They were made up.

They had taken, these chatbots, true information and combined it in ways that were not true. All these fake articles referenced this very real conference at Dartmouth in the 1950s where the term AI was coined. And they described this conference. But the article itself did not exist. We had our archivists go through our archives and make sure. These systems can do that. They can hallucinate, as scientists say. They can confabulate and create things that are not true and, indeed, do not even exist.

GROSS: Well, let's take another break here, and then we'll talk some more. If you're just joining us, my guest is Cade Metz, a tech reporter for The New York Times and author of the book "Genius Makers: The Mavericks Who Brought A.I. To Google, Facebook, And The World." We'll be right back. This is FRESH AIR.

(SOUNDBITE OF THE OCTOPUS PROJECT'S "THE ADJUSTOR")

GROSS: This is FRESH AIR. Let's get back to my interview with Cade Metz, who reports on technology for The New York Times and has been writing about artificial intelligence for years. And that's what we've been talking about, artificial intelligence and the latest innovations, such as ChatGPT.

We have a presidential election coming up. And we know disinformation and conspiracy theories spread like crazy on social media during the 2020 election. So how does the new, upgraded artificial intelligence open the door to more disinformation and more conspiracy theories spreading?

METZ: Well, let's start with the prime example, ChatGPT and systems like that. If you can produce text much like a human produces text, you can produce disinformation, and you can produce it at a volume and with an efficiency that was never possible in the past. In the past, you had to get humans into a room and have them generate the disinformation and spread it. If you can generate it automatically with a machine, the volume of disinformation is practically infinite. Companies like Google and Microsoft are putting guardrails on this to try to prevent it. But these are techniques, these are technologies that are being developed across the globe. These are technologies that are available to practically anyone. And the open technologies that are available in that way are not necessarily as powerful as what we have inside a Google or a Microsoft, but they're approaching that level. At the same time, companies and independent researchers are developing systems that can generate images in much the same way. There's a service from that same lab, OpenAI, called Dall-E. And what it allows you to do is describe an image. I want to see a teddy bear on a skateboard in Times Square. And in 20 seconds, it will generate a photorealistic image of a teddy bear on a skateboard in Times Square. That is also a concern - that these systems can generate still images and, pretty soon, video as well that looks like the real thing.

GROSS: There is a lot of competition now between the big companies like Google, Facebook, Microsoft to see who can come up first with the best AI system and the best AI chatbot. And at the same time, there's these warnings in some instances coming from the same place as saying, like, well, we should slow down and use caution and not send something dangerous out into the world. Can you talk a little bit about how those two instincts are clashing right now - the one saying, like, proceed slowly; use caution, and the one - the other one saying, like, hurry up so that you can be the first?

METZ: So the techniques, the technologies at play here have been in development for a long time. And they were in development at Google most notably. A lot of the underlying technologies that are at play here were developed at Google. And for many years, Google was slow to put some of this stuff out into the world because of those concerns, because this could be used to generate disinformation, because it could take jobs away, because it was biased against women and people of color. Then OpenAI comes along, and they released ChatGPT onto the internet, and that set off a race. It showed that there was a hunger for this out in the world, a thirst for this that people responded to. ChatGPT is the most popular application ever put onto the internet.

And what you saw is that Google almost immediately changed course and started to think long and hard about how to put this out into the world. Microsoft, OpenAI's chief partner, had put this into a search engine. That's Google's core technology. It's the heart of its business. Google is a public company. It's beholden to its shareholders. It is designed to make money. And if its biggest competitor goes after its core business, it's going to respond. And what we've seen in short order since the release of ChatGPT is the entire industry shift course towards this technology. And there is this real race now to improve these things, and that's, in the end, because of money.

GROSS: Cade Metz, it's really been great to talk with you again. Thank you so much, and thanks for explaining this so well. Like ChatGPT said, you're very good at explaining things (laughter).

Excerpt from:

Flawed chatbot or threat to society? Both? We explore the risks and ... - NPR

Strong AI vs. Weak AI: What’s the Difference? – Lifewire

Strong artificial intelligence (AI) can do anything a human can, while weak AI is limited to a specific task. Here's everything you need to know about strong AI vs. weak AI, including how they relate, how they differ, and the advantages and limitations of each.


Weak AI

Performs one specific task.

Programmed for a certain purpose.

Learns how to perform tasks faster.

No self-awareness.

Strong AI

Performs any task a human can.

Learns how to perform brand new skills.

Uses creativity to solve problems.

Potentially sentient.

All AI uses machine learning to constantly improve as it takes in new information. The major difference between weak and strong AI is that weak AI is programmed to perform a single task. The task could be very complex, like driving a car, or as simple as recommending movies to watch. All real-world examples of AI fall under the category of weak AI.

Although AI chatbots like ChatGPT and Bing AI are very advanced, they are still considered examples of weak AI because they perform only one job (responding to written text prompts). Virtual assistants like Alexa also fall under the umbrella of weak AI since they only respond to voice commands.

Strong AI, also called artificial general intelligence (AGI), possesses the full range of human capabilities, including talking, reasoning, and emoting. So far, examples of strong AI exist only in sci-fi movies like A.I.: Artificial Intelligence, WALL-E, and 2001: A Space Odyssey.

Advantages

Faster and more efficient than humans.

Capable of reasoning in limited situations.

Can improve human life in many ways.

Disadvantages

Can't learn new skills on its own.

Requires human oversight.

Could replace many human jobs.

Weak AI may be capable of human-level reasoning to some extent, such as considering an ethical problem, but it doesn't possess the full range of human intellect. Nonetheless, weak AI can perform specific tasks faster and more accurately than humans.

Weak AI has many applications, including fraud detection, financial planning, transportation, image enhancement, medicine, and scientific research. Robotics uses weak AI to recognize and manipulate objects, while services like Netflix use weak AI to recommend movies based on your tastes. Gmail and other email providers use AI to detect and filter spam.
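To make concrete what a single-task system looks like in practice, here is a minimal, hypothetical sketch of a spam filter of the kind described above, written in Python with scikit-learn. The handful of training emails is invented purely for illustration; a real filter would learn from millions of messages.

# A minimal single-task ("weak AI") example: a spam filter that learns from
# labelled emails and does nothing else. The training data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Win a free prize now, click here",
    "Lowest price on meds, limited offer",
    "Meeting moved to 3pm, see agenda attached",
    "Can you review the draft report before Friday?",
]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words features plus Naive Bayes: a classic narrow-task learner.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["Click here for your free offer"]))        # likely ['spam']
print(model.predict(["Agenda for the Friday review meeting"]))  # likely ['ham']

The point is the narrowness: this model can sort email into spam or not-spam and nothing else, which is exactly what makes it weak AI.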

Because weak AI can't learn new skills independently, it can't continuously adapt to change, so human oversight is always needed to some degree. For example, if there were a sudden change to traffic laws, self-driving cars wouldn't know about it unless a human updated the AI's algorithm.

There is understandable anxiety about weak AI taking jobs from humans, leading to increased unemployment and economic uncertainty. There's also concern about bias in AI and governments using AI for surveillance.

Whereas weak AI is constrained in the type of tasks it can perform, strong AI can learn new skills to solve any problem. In addition to doing the job it was designed for, strong AI could theoretically develop its own goals, just like a human.

A real-life example that pushes the boundaries between weak and strong AI is a program called MuZero, which can master video games that it hasn't been taught how to play. MuZero is technically weak AI since it's limited to playing video games, yet it can identify and pursue new goals without human intervention, a feature of strong AI.
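MuZero itself is far beyond a few lines of code, but the underlying idea of mastering a game by trial and error, without being handed the rules, can be sketched with a much simpler stand-in: tabular Q-learning on a toy five-cell corridor. Everything here, the environment, the reward, and the hyperparameters, is invented for illustration and is not MuZero's actual method.

# Not MuZero, just a toy illustration of learning by trial and error:
# tabular Q-learning on a 5-cell corridor where only the rightmost cell
# gives a reward. The agent is never told the "rules", only the reward.
import random

N_STATES = 5
ACTIONS = [-1, +1]                           # move left or move right
Q = [[0.0, 0.0] for _ in range(N_STATES)]    # learned value of each (state, action)
alpha, gamma, epsilon = 0.5, 0.9, 0.3        # learning rate, discount, exploration

for episode in range(300):
    state = 0
    while state != N_STATES - 1:             # an episode ends at the goal cell
        if random.random() < epsilon:
            a = random.randrange(2)           # explore
        else:
            a = Q[state].index(max(Q[state])) # exploit what has been learned
        nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # Q-learning update: nudge toward reward + discounted future value.
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

# After training, the greedy policy should be "always move right".
print([["left", "right"][q.index(max(q))] for q in Q[:-1]])

The agent is only ever given a reward signal; the "always move right" strategy it ends up with is discovered, not programmed, which is the kernel of what makes systems like MuZero feel closer to strong AI.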

Presumably, strong AI could identify human emotions and motivations, but whether AI can experience and process emotions as humans do is unclear. For now, that remains a debate for philosophers and futurists.

Strong AI could have game-changing effects in security, healthcare, and robotics. On the other hand, AI engineers like Dr. Geoffrey Hinton have warned that strong AI could develop goals and behaviors that are harmful to humans.

The standards for what constitutes artificial intelligence have shifted as computers have advanced, and the line between weak and strong AI will continue to blur. Weak AI is easily identified by its limitations, but strong AI remains theoretical, since it would have few (if any) limitations.

FAQ

Narrow AI is another term for weak AI. It describes systems that can only do a single, specialized task.

AI art samples images from all over the internet (often without the creators' permission) to create new pictures based on text prompts. Because AI art generators do only one thing (convert text prompts into images), they are an example of weak AI.


Read the original:

Strong AI vs. Weak AI: What's the Difference? - Lifewire

Rise of the Robots: Futurist Says AI and Automation Still Have Huge … – Urban Land

Artificial intelligence, whether we notice it or not, is already present in our lives today.

And, according to bestselling author, entrepreneur, and futurist Martin Ford, who delivered the closing keynote address at the ULI Asia Pacific Summit 2023, artificial intelligence, or AI, will soon touch, impact, and influence every aspect of our daily lives.

"AI will be used for everything and influence every aspect of society and culture," he said. "Every business, every organization, and every industry would integrate and be dependent on AI, and it will evolve into something omnipresent and [so] important to our future."

The real-estate industry, Ford said, would be no different than other industries in that it would be both positively and negatively impacted by AI in several aspects. The professions related to the industry would also be transformed in many ways.

Ford, who has written four books on AI and its anticipated impact, including Rise of the Robots (published in 2015) and Rule of the Robots (2021), spoke of his conviction that humanity is now on the leading edge of a massive disruption.

"Ultimately, I think this disruption would be on the scale of a new industrial revolution," he said, comparing the potential reach and scale of AI to how electricity is ubiquitous in our lives.

This disruption has led to breakthroughs and innovations that Ford noted would have been considered science fiction just a short time ago: think self-driving cars, digital assistants, or the large language models behind applications such as ChatGPT.

Due to accumulated advances in computing power, in accordance with Moore's Law, and the massive amounts of data that are being collected at all times throughout the world today, machines and algorithms are now being endowed with true cognitive abilities.

That is to say, they can think in order to make decisions and solve problems; they are getting closer to human-level intelligence, or what is known as artificial general intelligence; and they can also learn by gaining insights and making predictions.

"This represents a paradigm shift and underscores all the progress that we have seen thus far," said Ford, who added that it wasn't too long ago that people asserted that computers could only do what they had been programmed to do.

The notion that computing was limited to programming parameters has really been turned on its head by algorithms and machines that can essentially learn and program themselves now.

"This has opened up a whole range of new possibilities and a new approach to computing," he added, as AI can be given an objective and access to big data, and it can figure out how to solve problems and learn how to solve those problems better.
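A minimal sketch of what Ford is describing, using synthetic data invented for illustration, might look like this in Python: the programmer supplies only an objective (a prediction error to be minimized) and data, and gradient descent finds the rule.

# Sketch: the programmer specifies only an objective (squared prediction
# error) and provides data; gradient descent discovers the rule itself.
# The "hidden rule" y = 3x + 7 and all the data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
y = 3.0 * x + 7.0 + rng.normal(0, 1, size=200)

w, b = 0.0, 0.0          # the model starts knowing nothing
lr = 0.01                # step size for gradient descent
for _ in range(5000):
    err = (w * x + b) - y
    w -= lr * 2 * np.mean(err * x)   # follow the gradient of mean squared error
    b -= lr * 2 * np.mean(err)

print(f"learned rule: y ~= {w:.2f} * x + {b:.2f}")   # should land near 3 and 7

No one writes the "3x + 7" rule into the program; the system recovers it from the data, which is the sense in which such machines can be said to program themselves.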

Innovation, Creativity, and Progress

"[AI] is going to be transformative and bring enormous benefits to all of humanity. It's going to enhance innovation and creativity in science, engineering, and medicine, and drive increasing progress in many, many areas, across the board," Ford said.

When it comes to real estate and urban planning, AI would enable architects, civil engineers, and other professionals in the field to innovate and achieve greater productivity.

For example, a team conceiving a new structure could ask an AI application to generate photographic images to render the new building. The app could also display almost instantaneous changes and provide fresh renditions in response to prompts.

"Imagine that this collaboration or conversation with AI could be integrated with virtual reality," said Ford. "It would be experiential and an enormous boost to efficiency."

With the advent of smart cities, which would generate significant amounts of data via sensors distributed throughout, urban planners could learn how people move through cities, where they congregate, and how they use resources.

AI-driven predictions and real-time insights would allow for more efficient operations, aid in the planning of future cities, and become indispensable going forward.
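As a rough illustration of that kind of prediction, the sketch below trains a model on synthetic footfall counts (the sensors, features, and numbers are all invented) and then forecasts how busy a location would be at a given hour; a real smart-city pipeline would of course be far larger.

# Sketch: learn from past (synthetic) sensor counts how busy a spot is at a
# given hour, then forecast ahead for planning and operations.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
hours = rng.integers(0, 24, size=1000)       # hour of day for each reading
weekend = rng.integers(0, 2, size=1000)      # 1 if the reading is on a weekend
footfall = (200
            + 150 * np.exp(-((hours - 13) ** 2) / 18)   # midday peak
            - 60 * weekend                               # quieter on weekends
            + rng.normal(0, 20, size=1000))              # sensor noise

X = np.column_stack([hours, weekend])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, footfall)

# Forecast footfall at 1pm on a weekday vs. 1pm on a weekend.
print(model.predict([[13, 0], [13, 1]]))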

For all its benefits, AI would also be disruptive to society and to the job market where its impact would be seen in both blue-collar and white-collar jobs.

Robots Moving In

In as little as five years, employment in many blue-collar environments, including warehouses, factories, fast food, and retail, could be transformed as machines with increasingly human-like dexterity reduce the need for human labor.

"Anywhere people are doing relatively predictable tasks, robots are going to move in," said Ford. "This would have a dramatic impact on employment, and we are going to have to think about the implications of that."

Such disruption would also be seen in knowledge work, affecting people with university and even graduate degrees. Virtually any job done in front of a computer today that similarly involves predictable tasks would be highly susceptible to automation.

"Whether it's corporate finance, legal work, or journalism, it's becoming pretty clear that having just more education is not going to be an adequate protection against the impact of AI," said Ford. "This is something for society to find a solution for."

In the real-estate sector, as more jobs disappear in the not-too-distant future, the implications for office spaces and business districts, and our cities, could be more consequential than we would like to believe and really affect every aspect of society.

Despite the disruption and changes on the horizon, and other AI-related risks such as security threats, deepfakes, algorithmic bias, autonomous weapons, or an AI superintelligence that could supplant the human race, Ford is optimistic that the benefits of AI outweigh the risks.

Solutions, through human ingenuity aided by AI, will be found.

Embrace the Benefits of AI

"Our objective should be to embrace AI and really maximize the benefits, to fully leverage it on behalf of everyone," Ford said. "If we can embrace those benefits while addressing the dangers, this is going to be an enormously important tool for us going forward."

"I think it's going to be an indispensable tool for solving the problems that we face in the future, things like climate change, finding new sources of clean energy, addressing global poverty, having medical breakthroughs, and so forth."

And for all of the fantastic progress that has already come about as a result of AI, Ford believes that in time, there will be a lot more to behold.

"In many ways, we really haven't seen anything yet."

See the article here:

Rise of the Robots: Futurist Says AI and Automation Still Have Huge ... - Urban Land