Artificial Intelligence Godfathers Call for Regulation as Rights … – Democracy Now!

This is a rush transcript. Copy may not be in its final form.

AMY GOODMAN: This is Democracy Now!, democracynow.org, The War and Peace Report. I'm Amy Goodman, with Nermeen Shaikh.

We begin today's show looking at growing alarm over the potential for artificial intelligence to lead to the extinction of humanity. The latest warning comes from hundreds of artificial intelligence, or AI, experts, as well as tech executives, scholars and others, like climate activist Bill McKibben, who signed onto an ominous, one-line statement released Tuesday that reads, quote, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Among the signatories to the letter, released by the Center for AI Safety, is Geoffrey Hinton, considered one of three godfathers of AI. He recently quit Google so he could speak freely about the dangers of the technology he helped build, such as artificial general intelligence, or AGI, in which machines could develop cognitive abilities akin to or superior to humans' sooner than previously thought.

GEOFFREY HINTON: I had always assumed that the brain was better than the computer models we had. And I'd always assumed that by making the computer models more like the brain, we would improve them. And my epiphany was, a couple of months ago, I suddenly realized that maybe the computer models we have now are actually better than the brain. And if that's the case, then maybe quite soon they'll be better than us, so that the idea of superintelligence, instead of being something in the distant future, might come much sooner than I expected.

For the existential threat, the idea it might wipe us all out, that's like nuclear weapons, because nuclear weapons have the possibility they would just wipe out everybody. And that's why people could cooperate on preventing that. And for the existential threat, I think maybe the U.S. and China and Europe and Japan can all cooperate on trying to avoid that existential threat. But the question is: How should they do that? And I think stopping development is infeasible.

AMY GOODMAN: Many have called for a pause on introducing new AI technology until strong government regulation and a global regulatory framework are in place.

Joining Hinton in signing the letter was a second AI godfather, Yoshua Bengio, who joins us now for more. He's a professor at the University of Montreal, founder and scientific director of Mila, the Quebec Artificial Intelligence Institute. In 2018, he shared the prestigious computer science prize, the Turing Award, with Geoffrey Hinton and Yann LeCun.

Professor Bengio is a signatory of the Future of Life Institute open letter calling for a pause on large AI experiments.

Professor Bengio, welcome to Democracy Now! It's great to have you with us as we talk about an issue that I think most people cannot begin to comprehend. So, if you could start off by talking about why you've signed this letter warning of the extinction of humanity? But talk about what AI is, first.

YOSHUA BENGIO: Well, thanks for having me, first. And thanks for talking about this complicated issue that requires more awareness.

The reason I signed this: like Geoff, I changed my mind in the last few months. What triggered this change for me was interacting with ChatGPT and seeing how far we had moved, much faster than I anticipated. So, I used to think that reaching human-level intelligence with machines could take many more decades, if not centuries, because the progress of science seemed to be, well, slow. And as researchers, we tend to focus on what doesn't work. But right now we have machines that pass what is called the Turing test, which means they can converse with us, and they could easily fool us into taking them for humans. That was supposed to be a milestone for, you know, human-level intelligence.

I think they're still missing a few things, but that kind of technology could already be dangerous enough to destabilize democracy through disinformation, for example. But because of the research that is currently going on to bridge the gap with what is missing from current large language models, large AI systems, it is possible that the horizon I was seeing as many decades in the future is just a few years in the future. And that could be very dangerous. It suffices that just a small organization or somebody with crazy beliefs, conspiracy theories, terrorists, a military organization decides to use this without the right safety mechanisms, and it could be catastrophic for humanity.

NERMEEN SHAIKH: So, Professor Yoshua Bengio, it would be accurate then to say that the reason artificial intelligence, and concerns about artificial intelligence, have become the center of public discussion in a way they've not previously been is that the advances that have occurred in the field have surprised even those participating in it, even its lead researchers. So, if you could elaborate on the question of superintelligence, and especially the concerns that have been raised about unaligned superintelligence, and also the speed at which we are likely to get to unaligned superintelligence?

YOSHUA BENGIO: Yeah. I mean, the reason it was surprising is that in the current systems, from a scientific perspective, the methods that are used are not very different from the things we knew just a few years ago. It's the scale at which they have been built, the amount of data, the amount of engineering, that has made this really surprising progress possible. And so we could have similar progress in the future because of the scale of things.

Now, first of all, why are we concerned about superintelligence? The question is: Is it even possible to build machines that will be smarter than us? And the consensus in the scientific community, for example, from the neuroscience perspective, is that our brain is a very complicated machine, so there's no reason to think that, in principle, we couldn't build machines that would be at least as smart as us. Then there's the question of how long it's going to take, but we've discussed that. In addition, as Geoff Hinton was saying in the piece that was heard, computers have advantages that brains don't have. For example, they can talk to each other at very, very high speed and exchange information. We are limited by the very few bits of information per second that language allows us. And that actually gives them a huge advantage in learning a lot faster. So, for example, these systems today can already read the whole internet very, very quickly, whereas a human would require 10,000 years of their life reading all the time to achieve the same thing. So, they can have access to information, and sharing of information, in ways that humans don't. So it's very likely that as we make progress toward understanding the principles behind human intelligence, we will be able to build machines that are actually smarter than us.

So, why is it dangerous? Because if they're smarter than us, they might act in ways that do not agree with what we intend, what we want them to do. It could be for several reasons, but this question of alignment is that it's actually very difficult to instruct a machine to behave in a way that agrees with our values, our needs and so on. We can say it in language, but it might be understood in a different way, and that can lead to catastrophes, as has been argued many times.

But this alignment problem already happens. For example, you can think of corporations as not being quite aligned with what society wants. Society would like corporations to provide useful goods and services, but we can't, like, dictate that to corporations directly. Instead, we've given them a framework where they maximize profit under the constraints of laws, and that may work reasonably well but also have side effects. For example, corporations can find loopholes in those laws, or, even worse, they could influence the laws themselves.

And this sort of thing can happen with AI systems that we're trying to control. They might find ways to satisfy the letter of our instructions, but not the intention, the spirit of the law. And that's very scary. We don't fully understand how these scenarios can unfold, but there's enough danger and enough uncertainty that I think more attention should be given to these questions.

NERMEEN SHAIKH: If you could explain whether you think it will be difficult to regulate this industry, artificial intelligence, despite all of the advances that have already occurred? How difficult will regulation be?

YOSHUA BENGIO: Even if something seems difficult, like dealing with climate change, and even if we feel that it's a hard task to do the job and to convince enough people and society to change in the right ways, we have a moral duty to try our best.

And the first thing we have to do with AI risks is get on with regulation and set up governance frameworks, both in individual countries and internationally. And when we do that, it's going to be useful for all the AI risks, because we've been talking a lot about the extinction risk, but there are other risks that are shorter-term, risks of destabilizing democracy. If democracy is destabilized, that is bad in itself, but it is also going to hurt our ability to deal with the existential risk.

And then there are other risks that are actually going on with AI: discrimination, bias, privacy and so on. So we need to beef up that legislative and regulatory body. And what we need there is a regulatory framework that's going to be very adaptive, because there's a lot of unknown. It's not like we know precisely how bad things can happen. We need to do a lot more in terms of monitoring and validating, and we need to control access so that bad actors cannot easily get their hands on dangerous technologies. And we need the body that will regulate, or the bodies across the world, to be able to change their rules as new nefarious uses show up or as technology advances. And that's a challenge, but I think we need to go in that direction.

AMY GOODMAN: I want to bring Max Tegmark into the conversation. Max Tegmark is an MIT professor focused on artificial intelligence. His recent Time magazine article is headlined "The 'Don't Look Up' Thinking That Could Doom Us With AI."

If you could explain that point, Professor Tegmark?

MAX TEGMARK: Yes.

AMY GOODMAN: And also, why you think right now you know, many people have just heard the term ChatGPT for the first time in the last months. The general public has become aware of this. And how you think it is most effective to regulate AI technology?

MAX TEGMARK: Yeah. Thank you for the great question.

I wrote this piece comparing what's happening now in AI with the movie Don't Look Up, because I really [inaudible] we're all living this film. We're, as a species, confronting the most dramatic thing that has ever happened to us, where we may be losing control over our future, and almost no one is talking about it. So I'm so grateful to you and others for actually starting to have that conversation now. And that's, of course, why we had these open letters that you just referred to here, to really help mainstream this conversation that we have to have. People previously used to make fun of you when you even brought up the idea that we could actually lose control of this and go extinct, for example.

NERMEEN SHAIKH: Professor Tegmark, you've drawn analogies, in fact, when it comes to regulation, with the regulations that were put in place on biotech and physics. So, could you explain how that might apply to artificial intelligence?

MAX TEGMARK: Yeah. To appreciate what a huge deal this is, when the top scientists in AI are warning about extinction, it's good to compare with the other two times in history that this has happened, when leading scientists warned about the very thing they were making. It happened once in the 1940s, when physicists started warning about nuclear Armageddon, and it happened again in the early 1970s, with biologists saying, "Hey, maybe we shouldn't start making clones of humans and editing the DNA of our babies."

And the biologists have been the big success story here, I think, one that should inspire us AI researchers today, because the risk that we would lose control over our species was deemed so great back in the '70s that we actually decided as a world society not to do human cloning and not to edit the DNA of our offspring. And here we are with a really flourishing biotech industry that's doing so much good in the world.

And so, the lesson here for AI is that we should become more like biology. We should recognize that, in biology, no company has the right to just launch a new medicine and start selling it in supermarkets without first convincing experts from the government that it is safe. That's why we have the Food and Drug Administration in the U.S., for example. And with particularly high-risk uses of AI, we should aspire to something very similar, where the onus is really on the companies to prove that something extremely powerful is safe before it gets deployed.

AMY GOODMAN: Last fall, the White House Office of Science and Technology Policy published a Blueprint for an AI Bill of Rights and called it "A Vision for Protecting Our Civil Rights in the Algorithmic Age." This comes amidst growing awareness about racial biases embedded in artificial intelligence and how it impacts the use of facial recognition programs by law enforcement and more. I want to bring into this conversation, with professors Tegmark and Bengio, Tawana Petty, director of policy and advocacy at the Algorithmic Justice League and longtime digital and data rights activist.

Tawana Petty, welcome to Democracy Now! You are not only warning people about the future; you're talking about the uses of AI right now and how they can be racially discriminatory. Can you explain?

TAWANA PETTY: Yes. Thank you for having me, Amy. Absolutely.

I must say that the contradictions have been heightened, with the godfather of AI and others speaking out and authoring these particular letters that talk about these futuristic potential harms. However, many women have been warning about the existing harms of artificial intelligence for many years prior to now: Timnit Gebru, Dr. Joy Buolamwini, Safiya Noble, Ruha Benjamin and so many others, and Dr. Alondra Nelson, whose Blueprint for an AI Bill of Rights, which you just mentioned, asks for five core principles: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives, consideration and fallback.

And so, at the Algorithmic Justice League, we have been responding to existing harms of algorithmic discrimination that date back many years prior to this most robust narrative-reshaping conversation that has been happening over the last several months around artificial general intelligence. So, we're already seeing harms with algorithmic discrimination in medicine. We're seeing the pervasive surveillance that is happening with law enforcement using face detection systems to target community members during protests, squashing not only our civil liberties and rights to organize and protest, but also the misidentifications that are happening with regard to false arrests; we've seen two very prominent cases that started in Detroit.

And so, there are many examples of existing harms, and it would have been really great to have these voices, mostly white men in the tech industry, pay attention to the voices of all those women who were lifting up these issues many years ago. And they're talking about these futuristic possible risks, when we have so many risks that are happening today.

NERMEEN SHAIKH: So, Professor Max Tegmark, if you could respond to what Tawana Petty said, and to the fact that others have also said that the risks in that letter have been vastly overstated and, more importantly, given what Tawana has said, that it distracts from already-existing effects of artificial intelligence that is widely in use already?

MAX TEGMARK: I think this is a really important question here. There are people who say that one of these kinds of risks distracts from the other. I strongly support everything we heard here from Tawana. I think these are all very important problems, examples of how we're giving too much control already to machines. But I strongly disagree that we should have to choose between worrying about one kind of risk or the other. That's like saying we should stop working on cancer prevention because it distracts from stroke prevention.

These are all incredibly important risks. I have spoken up a lot on social justice risks and threats, as well. And, you know, it just plays into the hands of the tech lobbyists if it looks like there's infighting between people who are trying to rein in Big Tech for one reason and people who are trying to rein in Big Tech for other reasons. Let's all work together and realize that, just like society can work on both cancer prevention and stroke prevention, we have the resources for this. We should be able to deal with all the crucial social justice issues and also make sure that we don't go extinct.

Extinction is not something in the very distant future, as we heard from Yoshua Bengio. We might be losing total control of our society relatively soon. It could happen in the next few years. It could happen in a decade. And once we're all extinct, you know, all these other issues cease to even matter. Let's work together and tackle all the issues, so that we can actually have a good future for everybody.

AMY GOODMAN: So, Tawana Petty, and then I want to bring back in Yoshua Bengio. Tawana Petty, what needs to happen at the national level, you know, U.S. regulation? And then I want to compare what's happening here with what's happening in Canadian regulation and in the EU, the European Union, which seems like it's about to put in place the first comprehensive set of regulations, Tawana.

TAWANA PETTY: Right, absolutely. So, the blueprint was a good model to start with, and we're seeing some states adopt it and try to roll out their versions of an AI Bill of Rights. The president issued an executive order to strengthen racial equity and support underserved communities across the federal government, which specifically addresses algorithmic discrimination. You have the National Institute of Standards and Technology, which issued an AI risk management framework that breaks down the various types of biases we find within algorithmic systems: computational, systemic, statistical and human cognitive.

And there are so many other legislative opportunities happening on the federal level. You see the FTC, the Federal Trade Commission, speaking up on algorithmic discrimination. You have the Equal Employment Opportunity Commission, which has issued statements. You have the Consumer Financial Protection Bureau, which has been adamant about the impact that algorithmic systems have on us when data brokers are amassing these mass amounts of data extracted from community members.

So, I agree that there needs to be some collaboration and cooperation, but we've seen situations like Dr. Timnit Gebru being terminated from Google for warning us, before ChatGPT was launched upon millions of people as a large language model. And so, cooperation has not been lacking on the side of the folks who work in ethics. To the contrary, these companies have terminated their ethics departments and the people who have been warning about existing harms.

AMY GOODMAN: And, Professor Bengio, if you can talk about the level of regulation and what you think needs to happen, and who is putting forward models that you think could be effective?

YOSHUA BENGIO: So, first of all, I'd like to make a correction here. I have been involved in working toward dealing with the negative social impact of AI for many years. In 2016, I worked on the Montreal Declaration for the Responsible Development of AI, which is very much centered on ethics and social justice. And since then, I've created an organization, the AI for Humanity department, in the research center that I head, which is completely focused on human rights. So, I think these accusations are just false.

And as Max was saying, we don't need to choose between fighting cancer and fighting heart disease. We need to do all of those things. But better than that, what is needed in the short term, at least, is building up these regulations, which is going to help mitigate all those risks. So I think we should really work together rather than trading these accusations.

NERMEEN SHAIKH: Professor Bengio, I'd like to ask you about precisely some of the work that you have done with respect to human rights and artificial intelligence. Earlier this month, a conference on artificial intelligence was held in Kigali, Rwanda, and you were among those who were pushing for the conference to take place in Africa.

YOSHUA BENGIO: That's right.

NERMEEN SHAIKH: Could you explain what happened at that conference, which 2,000 people, I believe, attended, and what African researchers and scientists had to say, you know, about the public good that could come from artificial intelligence? In fact, one of the questions that was raised is: Why wasn't there more discussion about the public good, rather than just the immediate risks or future risks?

YOSHUA BENGIO: Yes. In addition to the ethics questions, I've been working a lot on the applications of AI in the area of what's called AI for social good. So, that includes things like medical applications, environmental applications, social justice applications. And in those areas, it is particularly important that we bring to the fore the voices of the people who could benefit the most, and also suffer the most, from the development of AI. And in particular, the voices of Africans have not been very present. As we know, the development of this technology has been mostly in rich countries in the West.

And so, as a member of the board of the ICLR conference, which is one of the main conferences in the field, I've been pushing for many years for us to have the event take place in Africa. And so, this year was the first. Amy, it was supposed to happen before the pandemic, but, well, it was pushed back. And what we saw is an amazing presence of African researchers and students, at levels that we couldn't see before.

And the reason, I mean, there are many reasons, but mostly it's a question of accessibility. Currently, in many Western countries, visas for researchers from Africa or from developing countries are very difficult to get. I was fighting, for example, with the Canadian government a few years ago, when we had the NeurIPS conference in Canada, and there were hundreds of African researchers who were denied a visa, and we had to go one by one to try to make it possible for them to come.

So, I think that it's important that the decisions we're going to take collectively about AI, which involve everyone on Earth, be taken in the most inclusive possible ways. And for that reason, we need to think not just about what's going on in the U.S. or Canada, but across the world. We need to think not just about the risks of AI that we've been discussing today, but also about how we actually invest more in areas of application where companies are not going, maybe because it's not profitable, but that are really important to address, for example, the U.N. Sustainable Development Goals, and help reduce misery and deal with medical issues, like infectious diseases, that are not present in the West but mostly in poorer countries.

AMY GOODMAN: And can you talk, Professor Bengio, about AI and not only nuclear war but, for example, the issue Jody Williams, the Nobel laureate, has been trying to bring attention to for years: killer robots that can kill with their bare hands? The whole issue of AI when it comes to war and who fights

YOSHUA BENGIO: Yeah.

AMY GOODMAN: these wars?

YOSHUA BENGIO: Yeah. This is also something I've been actively involved in for many years: campaigns to raise awareness about the danger of killer robots, also known, more precisely, as lethal autonomous weapons. And when we did this, you know, five or 10 years ago, it was still something that sounded like science fiction. But, actually, there have been reports that drones have been equipped with AI capabilities, especially computer vision capabilities, face recognition, that have been used in the field in Syria, and maybe this is happening in Ukraine. So, it's already something that we know how to build. We know the science behind building these killer drones, not killer robots; we don't yet know how to build robots that work really well.

But if you take drones, which we know how to fly in a fairly autonomous way, and if these drones have weapons on them, and if these drones have cameras, then AI could be used to direct a drone at specific people and illegally kill specific targets. That's incredibly dangerous. It could destabilize the sort of military balance that we know today. I don't think that people are paying enough attention to that.

And in terms of the existential risk, the real issue here is that if a superintelligent AI also has control of dangerous weapons, then it's just going to be very difficult for us to reduce the catastrophic risks. We don't want to put guns in the hands of people who are, you know, unstable, or in the hands of children, who could act in ways that could be dangerous. And that's the same problem here.

NERMEEN SHAIKH: Professor Tegmark, if you could respond on this question of the possible military uses of artificial intelligence, and the fact, for instance, that a study by Nikkei, the Japanese publication, concluded earlier this year that China is producing more research papers on artificial intelligence than the U.S. is. You've said, of course, that this is not akin to an arms race, but rather to a suicide race. So, if you could talk about the regulations the Chinese government already has in place on the applications of artificial intelligence, compared to the EU and the U.S.?

MAX TEGMARK: That's a great question. The recent change, now, this week, when the idea of extinction from AI goes mainstream, will actually, I think, help the geopolitical rivalry between East and West get more harmonious, because, until now, most policymakers have just viewed AI as something that gives you great power, so everybody wanted it first. And there was this idea that whoever gets artificial general intelligence that can outsmart humans somehow wins. But now that the idea is going mainstream that it could easily end up with everybody just losing, and the big winners being the machines left over after we're all extinct, it suddenly aligns the incentives of the Chinese government and the American government and European governments, because the Chinese government does not want to lose control over its society any more than any Western government does.

And for this reason, we can actually see that China has already put tougher restrictions on its own tech companies than we in America have on American companies. So we don't have to persuade the Chinese, in other words, to take precautions, because it's not in their interest to go extinct. You know, it doesn't matter if you're American or Canadian [inaudible], once you're extinct.

AMY GOODMAN: I know, Professor

MAX TEGMARK: And I should add also, just so it doesn't sound like hyperbole, this idea of extinction, the idea that everybody on Earth could die: it's important to remember that roughly half the species that were on this planet, you know, a thousand, a few thousand years ago have been driven extinct already by humans, right? So, extinction happens.

And it's also important to remember why we drove all these other species extinct. It wasn't necessarily because we hated the West African black rhinoceros or certain species that lived in coral reefs. You know, when we went ahead and just chopped down the rainforests or ruined the coral reefs through climate change, that was kind of a side effect. We just wanted resources. We had other goals that just didn't align with the goals of those other species. Because we were more intelligent than them, they were powerless to stop us.

This is exactly what Yoshua Bengio was warning about for humanity here, as well. If we lose control of our planet to more intelligent entities whose goals are just not aligned with ours, we will be powerless to prevent massive changes they might make to our biosphere here on Earth. And that's the way in which we might get wiped out, the same way the other half of the species did. And let's not do that.

There's so much goodness, so much wonderful stuff that AI can do for all of us, if we work together to harness it and steer this in a good direction: curing all those diseases that have stumped us, lifting people out of poverty, stabilizing the climate, and helping life on Earth flourish for a very, very, very long time to come. I hope that by raising awareness of the risks, we're going to get to work together to build that great future with AI.

AMY GOODMAN: And finally, Tawana Petty, moving from the global to the local, we're here in New York, and New York City Mayor Eric Adams has announced that the New York Police Department is acquiring some new semi-autonomous robotic dogs in this period. You have looked particularly at their use, and their discriminatory use, in communities of color. Can you respond?

TAWANA PETTY: Yes, and I'll also say that Ferndale, Michigan, where I live, has also acquired robot dogs. And so, these are situations that are currently happening on the ground, with an organization, law enforcement, that is still suffering from systemic racial bias, in overpoliced and hypersurveilled marginalized communities. So we're looking at these robots now being given the opportunity to police and surveil already hypersurveilled communities.

And, Amy, I would just like an opportunity to address really briefly the previous comments. My commentary is not meant to attack any of the existing efforts, or previous efforts, or years' worth of work that these two gentlemen have been involved in. I greatly respect efforts to address racial inequity and ethics in artificial intelligence. And I agree that we need to have some collaborative efforts in order to address these existing things that we're experiencing. People are already dying from health discrimination with algorithms. People are already being misidentified by police using facial recognition. Government services are utilizing corporations like ID.me to use facial recognition to access benefits. And so, we have a lot of opportunities to collaborate currently to prevent the existing threats that we're currently facing.

AMY GOODMAN: Well, Tawana Petty, I want to thank you for being with us, director of policy and advocacy at the Algorithmic Justice League, speaking to us from Detroit; Yoshua Bengio, founder and scientific director of Mila, the Quebec AI Institute, considered one of the godfathers of AI, speaking to us from Montreal; and Max Tegmark, MIT professor. We'll link to your Time magazine piece, "The 'Don't Look Up' Thinking That Could Doom Us With AI." We thank you all for being with us.

Coming up, we look at student debt as the House approves a bipartisan deal to suspend the debt ceiling. Back in 20 seconds.
