Media Search:



James Cameron’s ‘Avatar’ scores wacky new Wes Anderson … – Space.com

Director Wes Anderson's resume of wildly original and uniquely cinematic fare has spawned a fad of fan-made AI trailers.

These amateur works target the quirky filmmaker's recognizable style as seen in movies like "Rushmore," "The Royal Tenenbaums," "The Life Aquatic With Steve Zissou," "Isle of Dogs," "Moonrise Kingdom," "The Grand Budapest Hotel" and his new sci-fi flick, "Asteroid City."

Fresh off of their "Star Wars: The Galactic Menagerie" parody video cleverly employing cutting-edge AI tools, the creative folks at the YouTube channel Curious Refuge have turned their attention to director James Cameron's blockbuster film "Avatar," and the results offer a sly lampoon of Anderson's trademark visual trickery, brash color palette, singular wit and symmetrical camera framing.

This latest alt-history teaser presents a timeline in which Anderson helmed "The Peculiar Pandora Expedition: An Avatar Story," a madcap sci-fi odyssey to that lush tropical moon packed with an eccentric ensemble cast of Anderson regulars like Bill Murray, Timothée Chalamet, Adrien Brody, Tilda Swinton, Luke Wilson, Owen Wilson, Anjelica Huston, Gwyneth Paltrow, Jason Schwartzman and Willem Dafoe.

Related: 1st 'Asteroid City' trailer reveals Wes Anderson's take on a space-age alien encounter

Here's the creators' official description:

Embark on a captivating journey to the enchanting world of Pandora, reimagined through the unique and imaginative lens of Wes Anderson in "The Peculiar Pandora Expedition." This extraordinary fan-made trailer offers a fresh perspective on James Cameron's epic masterpiece, blending Anderson's distinctive style with the awe-inspiring landscapes and extraordinary creatures of Pandora.

Follow Jake Sully, a former Marine, as he ventures into this mesmerizing land alongside the strong-willed Neytiri. Together, they discover the peculiar wonders, vibrant colors, and extraterrestrial flirtations that define Pandora. With Anderson's keen eye for detail and storytelling, he brings a human touch to this eccentric world, giving us a fresh and captivating take on the Na'vi and their mystical environment.

Experience the breathtaking beauty of Pandora, from its majestic floating mountains to its lush and diverse flora. Marvel at the unique creatures that inhabit this world, as Jake's journey uncovers secrets and challenges him to choose between his own kind and the people he has come to love.

"The Peculiar Pandora Expedition" is a testament to Anderson's visionary mind and meticulous craftsmanship, delivering an adventure filled with passion, vibrant colors, and thought-provoking themes. Join us as we celebrate the magic of imagination and the power of cinematic storytelling.

Other AI-spawned homage videos still making the cyberspace rounds are trailers for supposed Wes Anderson versions of "The Lord of the Rings," "The Hunger Games," "The Shining," "Gremlins" and "Harry Potter."

This newest "Avatar" creation might be the most esoteric of the whole bunch. But the trend is getting a bit repetitious and long in the tooth, which could be precisely what these hyped digital art offerings are attempting to point out.

Cameron's second "Avatar" film, "Avatar: The Way of Water," splashes onto Disney+ and Max beginning on Wednesday (June 7).


Read the original:

James Cameron's 'Avatar' scores wacky new Wes Anderson ... - Space.com

Grimes used AI to clone her own voice. We cloned the voice of a … – NPR

In Part 1 of this series, AI proved that it could use real research and real interviews to write an original script for an episode of Planet Money.

Our next task was to teach the computer how to sound like us. How to read that script aloud like a Planet Money host.

On today's show, we explore the world of AI-generated voices, which have become so lifelike in recent years that they can credibly imitate specific people. To test the limits of the technology, we attempt to create our own synthetic voice by training a computer on recordings of former Planet Money host Robert Smith. Then we introduce synthetic Robert to his very human namesake.

There are a lot of ethical, and economic, questions raised by a technology that can duplicate anyone's voice. To help us make sense of it all, we seek the advice of an artist who has embraced AI voice clones: the musician Grimes.

(This is part two of a three-part series. For part one of our series, click here)

This episode was produced by Emma Peaslee and Willa Rubin, with help from Sam Yellowhorse Kesler. It was edited by Keith Romer and fact-checked by Sierra Juarez. Engineering by James Willetts. Jess Jiang is our acting executive producer.

We built a Planet Money AI chat bot. Help us test it out: Planetmoneybot.com.

Help support Planet Money and get bonus episodes by subscribing to Planet Money+ in Apple Podcasts or at plus.npr.org/planetmoney.

Always free at these links: Apple Podcasts, Spotify, Google Podcasts, NPR One or anywhere you get podcasts.

Find more Planet Money: Facebook / Instagram / TikTok / Our weekly Newsletter.

Music: "Hi-Tech Expert," "Lemons and Limes," and "Synergy in Numbers."

Go here to read the rest:

Grimes used AI to clone her own voice. We cloned the voice of a ... - NPR

Artificial Intelligence Godfathers Call for Regulation as Rights … – Democracy Now!

This is a rush transcript. Copy may not be in its final form.

AMY GOODMAN: This is Democracy Now!, democracynow.org, The War and Peace Report. I'm Amy Goodman, with Nermeen Shaikh.

We begin today's show looking at growing alarm over the potential for artificial intelligence to lead to the extinction of humanity. The latest warning comes from hundreds of artificial intelligence, or AI, experts, as well as tech executives, scholars and others, like climate activist Bill McKibben, who signed onto an ominous, one-line statement released Tuesday that reads, quote, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Among the signatories to the letter, released by the Center for AI Safety, is Geoffrey Hinton, considered one of three "godfathers" of AI. He recently quit Google so he could speak freely about the dangers of the technology he helped build, such as artificial general intelligence, or AGI, in which machines could develop cognitive abilities akin to or greater than those of humans sooner than previously thought.

GEOFFREY HINTON: I had always assumed that the brain was better than the computer models we had. And I'd always assumed that by making the computer models more like the brain, we would improve them. And my epiphany was, a couple of months ago, I suddenly realized that maybe the computer models we have now are actually better than the brain. And if that's the case, then maybe quite soon they'll be better than us, so that the idea of superintelligence, instead of being something in the distant future, might come much sooner than I expected.

For the existential threat, the idea it might wipe us all out, that's like nuclear weapons, because nuclear weapons have the possibility they would just wipe out everybody. And that's why people could cooperate on preventing that. And for the existential threat, I think maybe the U.S. and China and Europe and Japan can all cooperate on trying to avoid that existential threat. But the question is: How should they do that? And I think stopping development is infeasible.

AMY GOODMAN: Many have called for a pause on introducing new AI technology until strong government regulation and a global regulatory framework are in place.

Joining Hinton in signing the letter was a second AI godfather, Yoshua Bengio, who joins us now for more. He's a professor at the University of Montreal, founder and scientific director of Mila, the Quebec Artificial Intelligence Institute. In 2018, he shared the prestigious computer science prize, the Turing Award, with Geoffrey Hinton and Yann LeCun.

Professor Bengio is a signatory of the Future of Life Institute open letter calling for a pause on large AI experiments.

Professor Bengio, welcome to Democracy Now! It's great to have you with us as we talk about an issue that I think most people cannot begin to comprehend. So, if you could start off by talking about why you've signed this letter warning of extinction of humanity? But talk about what AI is, first.

YOSHUA BENGIO: Well, thanks for having me, first. And thanks for talking about this complicated issue that requires more awareness.

The reason I signed this is that, like Geoff, I changed my mind in the last few months. What triggered this change for me is interacting with ChatGPT and seeing how far we had moved, much faster than I anticipated. So, I used to think that reaching human-level intelligence with machines could take many more decades, if not centuries, because the progress of science seemed to be, well, slow. And, as researchers, we tend to focus on what doesn't work. But right now we have machines that pass what is called the Turing test, which means they can converse with us, and they could easily fool us into believing they are human. That was supposed to be a milestone for, you know, human-level intelligence.

I think they're still missing a few things, but that kind of technology could already be dangerous enough to destabilize democracy through disinformation, for example. But because of the research that is currently going on to bridge the gap with what is missing from current large language models, large AI systems, it is possible that my, you know, horizon that I was seeing as many decades in the future is just a few years in the future. And that could be very dangerous. It suffices that just a small organization or somebody with crazy beliefs, a conspiracy theory, terrorists, a military organization decides to use this without the right safety mechanisms, and it could be catastrophic for humanity.

NERMEEN SHAIKH: So, Professor Yoshua Bengio, it would be accurate then to say that the reason artificial intelligence, and concerns about artificial intelligence, have become the center of public discussion in a way that they've not previously been is that the advances that have occurred in the field have surprised even those who are participating in it, including the lead researchers. So, if you could elaborate on the question of superintelligence, and especially the concerns that have been raised about unaligned superintelligence, and also the speed at which we are likely to get to unaligned superintelligence?

YOSHUA BENGIO: Yeah. I mean, the reason it was surprising is that in the current systems, from a scientific perspective, the methods that are used are not very different from the things we knew just a few years ago. It's the scale at which they have been built, the amount of data, the amount of engineering, that has made this really surprising progress possible. And so we could have similar progress in the future because of the scale of things.

Now, the problem, first of all, you know, there's an important question: Why are we concerned about superintelligence? So, first of all, the question is: Is it even possible to build machines that will be smarter than us? And the consensus in the scientific community, for example, from the neuroscience perspective, is that our brain is a very complicated machine, so there's no reason to think that, in principle, we couldn't build machines that would be at least as smart as us. Now, then there's the question of how long it's going to take. But we've discussed that. In addition, as Geoff Hinton was saying in the piece that was heard, computers have advantages that brains don't have. For example, they can talk to each other at very, very high speed and exchange information. For us, we are limited by the very few bits of information per second that language allows us to convey. And that actually gives them a huge advantage to learn a lot faster. So, for example, these systems today already can read the whole internet very, very quickly, whereas a human would require 10,000 years of their life reading all the time to achieve the same thing. So, they can have access to information and sharing of information in ways that humans don't. So it's very likely that as we make progress towards understanding the principles behind human intelligence, we will be able to build machines that are actually smarter than us.

So, why is it dangerous? Because if they're smarter than us, they might act in ways that do not agree with what we intend, what we want them to do. And it could be for several reasons, but this question of alignment is that it's actually very difficult to instruct a machine to behave in a way that agrees with our values, our needs and so on. We can say it in language, but it might be understood in a different way, and that can lead to catastrophes, as has been argued many times.

But this is something that already happens; I mean, this alignment problem already happens. So, for example, you can think of corporations not being quite aligned with what society wants. Society would like corporations to provide useful goods and services, but we can't, like, dictate that to corporations directly. Instead, we've given them a framework where they maximize profit under the constraints of laws, and that may work reasonably well but also has side effects. For example, corporations can find loopholes in those laws, or, even worse, they could influence the laws themselves.

And this sort of thing can happen with AI systems that we're trying to control. They might find ways to satisfy the letter of our instructions, but not the intention, the spirit of the law. And that's very scary. We don't fully understand how these scenarios can unfold, but there's enough danger and enough uncertainty that I think a lot more attention should be given to these questions.

NERMEEN SHAIKH: If you could explain whether you think it will be difficult to regulate this industry, artificial intelligence, despite all of the advances that have already occurred? How difficult will regulation be?

YOSHUA BENGIO: Even if something seems difficult, like dealing with climate change, and even if we feel that it's a hard task to do the job and to convince enough people and society to change in the right ways, we have a moral duty to try our best.

And the first thing we have to do about AI risks is get on with regulation, set up governance frameworks, both in individual countries and internationally. And when we do that, it's going to be useful for all the AI risks, because we've been talking a lot about the extinction risk, but there are other risks that are shorter-term, risks to destabilize democracy. If democracy is destabilized, this is bad in itself, but it actually is also going to hurt our ability to deal with the existential risk.

And then there are other risks that are actually going on with AI: discrimination, bias, privacy and so on. So we need to beef up that legislative and regulatory body. And what we need there is a regulatory framework that's going to be very adaptive, because there's a lot of unknown. It's not like we know precisely how bad things can happen. We need to do a lot more in terms of monitoring, validating and controlling access, so that not just any bad actor can easily get their hands on dangerous technologies. And we need the body that will regulate, or the bodies across the world, to be able to change their rules as new nefarious users show up or as technology advances. And that's a challenge, but I think we need to go in that direction.

AMY GOODMAN: I want to bring Max Tegmark into the conversation. Max Tegmark is an MIT professor focused on artificial intelligence; his recent Time magazine article is titled "The 'Don't Look Up' Thinking That Could Doom Us With AI."

If you could explain that point, Professor Tegmark?

MAX TEGMARK: Yes.

AMY GOODMAN: And also, why now? You know, many people have just heard the term ChatGPT for the first time in the last months. The general public has become aware of this. And how do you think it is most effective to regulate AI technology?

MAX TEGMARK: Yeah. Thank you for the great question.

I wrote this piece comparing what's happening now in AI with the movie "Don't Look Up," because I really [inaudible] we're all living this film. We're, as a species, confronting the most dramatic thing that has ever happened to us, where we may be losing control over our future, and almost no one is talking about it. So I'm so grateful to you and others for actually starting to have that conversation now. And that's, of course, why we had these open letters that you just referred to here, to really help mainstream this conversation that we have to have, that people previously used to make fun of you when you even brought up the idea that we could actually lose control of this and go extinct, for example.

NERMEEN SHAIKH: Professor Tegmark, you've drawn analogies, in fact, when it comes to regulation, with the regulations that were put in place on biotech and physics. So, could you explain how that might apply to artificial intelligence?

MAX TEGMARK: Yeah. To appreciate what a huge deal this is, when the top scientists in AI are warning about extinction, it's good to compare with the other two times in history that it's happened, that leading scientists warned about the very thing they were making. It happened once in the 1940s, when physicists started warning about nuclear Armageddon, and it happened again in the early 1970s with biologists saying, "Hey, maybe we shouldn't start making clones of humans and edit the DNA of our babies."

And the biologists have been the big success story here, I think, that should inspire us AI researchers today, because it was deemed so risky that we would lose control over our species back in the '70s that we actually decided as a world society to not do human cloning and to not edit the DNA of our offspring. And here we are with a really flourishing biotech industry that's doing so much good in the world.

And so, the lesson here for AI is that we should become more like biology. We should recognize that, in biology, no company has the right to just launch a new medicine and start selling it in supermarkets without first convincing experts from the government that this is safe. That's why we have the Food and Drug Administration in the U.S., for example. And with particularly high-risk uses of AI, we should aspire to something very similar, where the onus is really on the companies to prove that something extremely powerful is safe, before it gets deployed.

AMY GOODMAN: Last fall, the White House Office of Science and Technology Policy published a Blueprint for an AI Bill of Rights and called it "A Vision for Protecting Our Civil Rights in the Algorithmic Age." This comes amidst growing awareness about racial biases embedded in artificial intelligence and how that impacts the use of facial recognition programs by law enforcement and more. I want to bring into this conversation, with professors Tegmark and Bengio, Tawana Petty, director of policy and advocacy at the Algorithmic Justice League, longtime digital and data rights activist.

Tawana Petty, welcome to Democracy Now! You are not only warning people about the future; you're talking about the uses of AI right now and how they can be racially discriminatory. Can you explain?

TAWANA PETTY: Yes. Thank you for having me, Amy. Absolutely.

I must say that the contradictions have been heightened with the godfather of AI and others speaking out and authoring these particular letters that are talking about these futuristic potential harms. However, many women have been warning about the existing harms of artificial intelligence for many years prior to now: Timnit Gebru, Dr. Joy Buolamwini, Safiya Noble, Ruha Benjamin and so many others, and Dr. Alondra Nelson, behind what you just mentioned, the Blueprint for an AI Bill of Rights, which is asking for five core principles: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives, consideration and fallback.

And so, at the Algorithmic Justice League, we have been responding to existing harms of algorithmic discrimination that date back many years prior to this most robust, narrative-reshaping conversation that has been happening over the last several months with artificial general intelligence. So, we're already seeing harms with algorithmic discrimination in medicine. We're seeing the pervasive surveillance that is happening with law enforcement using face detection systems to target community members during protests, squashing not only our civil liberties and rights to organize and protest, but also the misidentifications that are happening with regard to false arrests; we've seen two very prominent cases that started off in Detroit.

And so, there are many examples of existing harms where it would have been really great to have these voices of mostly white men who are in the tech industry, who did not pay attention to the voices of all those women who were lifting up these issues many years ago. And they're talking about these futuristic possible risks, when we have so many risks that are happening today.

NERMEEN SHAIKH: So, Professor Max Tegmark, if you could respond to what Tawana Petty said, and the fact that others have also said that the risks have been vastly overstated in that letter, and, more importantly, given what Tawana has said, that it distracts from already-existing effects of artificial intelligence that are widely in use already?

MAX TEGMARK: I think this is a really important question here. There are people who say that one of these kinds of risks distracts from the other. I strongly support everything we heard here from Tawana. I think these are all very important problems, examples of how we're giving too much control already to machines. But I strongly disagree that we should have to choose between worrying about one kind of risk or the other. That's like saying we should stop working on cancer prevention because it distracts from stroke prevention.

These are all incredibly important risks. I have spoken up a lot on social justice risks and threats, as well. And, you know, it just plays into the hands of the tech lobbyists if it looks like there's infighting between people who are trying to rein in Big Tech for one reason and people who are trying to rein in Big Tech for other reasons. Let's all work together and realize that, just like society can work on both cancer prevention and stroke prevention, we have the resources for this. We should be able to deal with all the crucial social justice issues and also make sure that we don't go extinct.

Extinction is not something in the very distant future, as we heard from Yoshua Bengio. We might be losing total control of our society relatively soon. It can happen in the next few years. It could happen in a decade. And once we're all extinct, you know, all these other issues cease to even matter. Let's work together, tackle all the issues, so that we can actually have a good future for everybody.

AMY GOODMAN: So, Tawana Petty, and then I want to bring back in Yoshua Bengio. Tawana Petty, what needs to happen at the national level, you know, U.S. regulation? And then I want to compare what's happening here with what's happening in Canadian regulation and the EU, the European Union, which seems like it's about to put in place the first comprehensive set of regulations, Tawana.

TAWANA PETTY: Right, absolutely. So, the blueprint was a good model to start with, and we're seeing some states adopt it and try to roll out their versions of an AI Bill of Rights. The president issued an executive order to strengthen racial equity and support underserved communities across the federal government, which is addressing specifically algorithmic discrimination. You have the National Institute of Standards and Technology that issued an AI risk management framework, which breaks down the various types of biases that we find within algorithmic systems, like computational, systemic, statistical and human cognitive.

And there are so many other legislative opportunities that are happening on the federal level. You see the FTC, the Federal Trade Commission, speaking up on algorithmic discrimination. You have the Equal Employment Opportunity Commission, which has issued statements. You have the Consumer Financial Protection Bureau, which has been adamant about the impact that algorithmic systems have on us when data brokers are amassing these mass amounts of data that have been extracted from community members.

So, I agree that there needs to be some collaboration and cooperation, but we've seen situations like Dr. Timnit Gebru being terminated from Google for warning us before ChatGPT, a large language model, was launched upon millions of people. And so, cooperation has not been lacking on the side of the folks who work in ethics. To the contrary, these companies have terminated their ethics departments and the people who have been warning about existing harms.

AMY GOODMAN: And, Professor Bengio, if you can talk about the level of regulation and what you think needs to happen, and who is putting forward models that you think could be effective?

YOSHUA BENGIO: So, first of all, I'd like to make a correction here. I have been involved in really working towards dealing with the negative social impact of AI for many years. In 2016, I worked on the Montreal Declaration for the Responsible Development of AI, which is very much centered on ethics and social injustice. And since then, I've created an organization, the AI for Humanity department, in the research center that I head, which is completely focused on human rights. So, I think these accusations are just false.

And as Max was saying, we don't need to choose between fighting cancer and fighting heart disease. We need to do all of those things. But better than that, what is needed in the short term, at least, is building up these regulations, which is going to help mitigate all those risks. So I think we should really work together rather than having these accusations.

NERMEEN SHAIKH: Professor Bengio, I'd like to ask you about precisely some of the work that you have done with respect to human rights and artificial intelligence. Earlier this month, a conference on artificial intelligence was held in Kigali, Rwanda, and you were among those who were pushing for the conference to take place in Africa.

YOSHUA BENGIO: That's right.

NERMEEN SHAIKH: Could you explain what happened at that conference (2,000 people, I believe, attended) and what African researchers and scientists had to say, you know, about what the goods are, the public good that could come from artificial intelligence, and why they felt, in fact, that one of the questions that was raised is: Why wasn't there more discussion about the public good, rather than just the immediate risks or future risks?

YOSHUA BENGIO: Yes. In addition to the ethics questions, I've been working a lot on the applications of AI in the area of what's called AI for social good. So, that includes things like medical applications, environmental applications, social justice applications. And in those areas, it is particularly important that we bring to the fore the voices of the people who could benefit the most, and also suffer the most, from the development of AI. And in particular, the voices of Africans have not been very present. As we know, the development of this technology has been mostly in rich countries in the West.

And so, as a member of the board of the ICLR conference, which is one of the main conferences in the field, I've been pushing for many years for us to have the event take place in Africa. And so, this year was the first; Amy, it was supposed to be before the pandemic, but, well, it was pushed. And what we saw is an amazing presence of African researchers and students at levels that we couldn't see before.

And the reason, I mean, there are many reasons, but mostly it's a question of accessibility. Currently, in many Western countries, the visas for African researchers or researchers from developing countries are very difficult to get. I was fighting, for example, the Canadian government a few years ago, when we had the NeurIPS conference in Canada, and there were hundreds of African researchers who were denied a visa, and we had to go one by one in order to try to make it possible for them to come.

So, I think that it's important that the decisions we're going to take collectively about AI, which involve everyone on Earth, be taken in the most inclusive possible ways. And for that reason, we need not just to think about what's going on in the U.S. or Canada, but across the world. We need not just to think about the risks of AI that we've been discussing today, but also about how we actually invest more in areas of application where companies are not going, maybe because it's not profitable, but that are really important to address, for example, the U.N. Sustainable Development Goals, and to help reduce misery and deal, for example, with medical issues that are not present in the West, like infectious diseases that are mostly in poorer countries.

AMY GOODMAN: And can you talk, Professor Bengio, about AI and not only nuclear war but, for example, the issue Jody Williams, the Nobel laureate, has been trying to bring attention to for years, killer robots, that can kill with their bare hands? The whole issue of AI when it comes to war and who fights

YOSHUA BENGIO: Yeah.

AMY GOODMAN: these wars?

YOSHUA BENGIO: Yeah. This is also something I've been actively involved in for many years, campaigns to raise awareness about the danger of killer robots, also known, more precisely, as lethal autonomous weapons. And when we did this, you know, five or 10 years ago, it was still something that sounded like science fiction. But, actually, there have been reports that drones have been equipped with AI capabilities, especially computer vision capabilities, face recognition, that have been used in the field in Syria, and maybe this is happening in Ukraine. So, it's already something that we know how to build. Like, we know the science behind building these killer drones, not killer robots; we don't know yet how to build robots that work really well.

But if you take drones, which we know how to fly in a fairly autonomous way, and if these drones have weapons on them, and if these drones have cameras, then AI could be used to target the drone at specific people and kill specific targets in an illegal way. That's incredibly dangerous. It could destabilize the sort of military balance that we know today. I don't think that people are paying enough attention to that.

And in terms of the existential risk, the real issue here is that if the superintelligent AI also has control of dangerous weapons, then it's just going to be very difficult for us to reduce, you know, the catastrophic risks. We don't want to put guns in the hands of people who are, you know, unstable, or in the hands of children, who could act in ways that could be dangerous. And that's the same problem here.

NERMEEN SHAIKH: Professor Tegmark, if you could respond on this question of possible military uses of artificial intelligence, and the fact, for instance, that a Nikkei study, from the Japanese publication, earlier this year concluded that China is now producing more research papers on artificial intelligence than the U.S. is. You've said, of course, that this is not akin to an arms race, but rather to a suicide race. So, if you could talk about the regulations that are already in place from the Chinese government on the applications of artificial intelligence, compared to the EU and the U.S.?

MAX TEGMARK: That's a great question. The recent change now, this week, when the idea of extinction from AI goes mainstream, I think, will actually help the geopolitical rivalry between East and West get more harmonious, because, until now, most policymakers have just viewed AI as something that gives you great power, so everybody wanted it first. And there was this idea that whoever gets artificial general intelligence that can outsmart humans somehow wins. But now that it's going mainstream, the idea that, actually, it could easily end up with everybody just losing, and the big winners are the machines that are left over after we're all extinct, it suddenly gives the incentives to the Chinese government and the American government and European governments that are aligned, because the Chinese government does not want to lose control over its society any more than any Western government does.

And for this reason, we can actually see that China has already put tougher restrictions on their own tech companies than we in America have on American companies. So we don't have to persuade the Chinese, in other words, to take precautions, because it's not in their interest to go extinct. You know, it doesn't matter if you're American or Canadian [inaudible], once you're extinct.

AMY GOODMAN: I know, Professor

MAX TEGMARK: And I should add also, just so it doesn't sound like hyperbole, this idea of extinction, that idea that everybody on Earth could die: it's important to remember that roughly half the species on this planet that were here, you know, a thousand, a few thousand years ago, have been driven extinct already by humans, right? So, extinction happens.

And it's also important to remember why we drove all these other species extinct. It wasn't necessarily because we hated the West African black rhinoceros or certain species that lived in coral reefs. You know, when we went ahead and just chopped down the rainforests or ruined the coral reefs by climate change, that was kind of a side effect. We just wanted resources. We had other goals that just didn't align with the goals of those other species. Because we were more intelligent than them, they were powerless to stop us.

This is exactly what Yoshua Bengio was warning about also for humanity here. If we lose control of our planet to more intelligent entities and their goals are just not aligned with ours, we will be powerless to prevent massive changes that they might make to our biosphere here on Earth. And that's the way in which we might get wiped out, the same way that the other half of the species did. And let's not do that.

There's so much goodness, so much wonderful stuff that AI can do for all of us, if we work together to harness and steer this in a good direction: curing all those diseases that have stumped us, lifting people out of poverty, stabilizing the climate, and helping life on Earth flourish for a very, very, very long time to come. I hope that by raising awareness of the risks, we're going to get to work together to build that great future with AI.

AMY GOODMAN: And finally, Tawana Petty, moving from the global to the local, we're here in New York, and New York City Mayor Eric Adams has announced the New York Police Department is acquiring some new semi-autonomous robotic dogs in this period. You have looked particularly at their use, and their discriminatory use, in communities of color. Can you respond?

TAWANA PETTY: Yes, and I'll also say that Ferndale, Michigan, where I live, has also acquired robot dogs. And so, these are situations that are currently happening on the ground, with an organization, law enforcement, that is still suffering from systemic racial bias, with overpoliced and hypersurveilled marginalized communities. So we're looking at these robots now being given the opportunity to police and surveil already hypersurveilled communities.

And, Amy, I would just like an opportunity to address really briefly the previous comments. My commentary is not to attack any of the existing efforts or previous efforts or years' worth of work that these two gentlemen have been involved in. I greatly respect efforts to address racial inequity and ethics in artificial intelligence. And I agree that we need to have some collaborative efforts in order to address these existing things that we're experiencing. People are already dying from health discrimination with algorithms. People are already being misidentified by police using facial recognition. Government services are utilizing corporations like ID.me to use facial recognition to access benefits. And so, we have a lot of opportunities to collaborate currently to prevent the existing threats that we're currently facing.

AMY GOODMAN: Well, Tawana Petty, I want to thank you for being with us, director of policy and advocacy at the Algorithmic Justice League, speaking to us from Detroit; Yoshua Bengio, founder and scientific director of Mila, the Quebec AI Institute, considered one of the godfathers of AI, speaking to us from Montreal; and Max Tegmark, MIT professor. We'll link to your Time magazine piece, "The 'Don't Look Up' Thinking That Could Doom Us With AI." We thank you all for being with us.

Coming up, we look at student debt as the House approves a bipartisan deal to suspend the debt ceiling. Back in 20 seconds.

Read more from the original source:

Artificial Intelligence Godfathers Call for Regulation as Rights ... - Democracy Now!

Google’s AI-powered search experience is way too slow – The Verge

The worst thing about Google's new AI-powered search experience is how long you have to wait.

Can you think of the last time you waited for a Google Search result? For me, searches are generally instant. You type a thing in the search box, Google almost immediately spits out an answer to that thing, and then you can click some links to learn more about what you searched for or type something else into the box. It's a virtuous, useful cycle that has turned Google Search into the most visited website in the world.

Google's Search Generative Experience, on the other hand, has loading animations.

Let me back up a little. In May, Google introduced an experimental feature called Search Generative Experience (SGE) that uses Google's AI systems to summarize search results for you. The idea is that you won't have to click through a list of links or type something else in the search box; instead, Google will just tell you what you're looking for. In theory, that means your search queries can be more complex and conversational (a pitch we've heard before!), but Google will still be able to answer your questions.

If you've opted in to SGE, which is only available to people who sign up for Google's waitlist on its Search Labs, AI summaries will appear right under the search box. I've been using SGE for a few days, and I've found the responses themselves have been generally fine, if cluttered. For example, when I searched "where can I watch Ted Lasso?" the AI-generated response that appeared was a few sentences long and factually accurate. It's on Apple TV Plus. Apple TV Plus costs $6.99 per month. Great.


But the answers are often augmented with a bunch of extra stuff. On desktop, Google displays source information as cards on the right, even though you can't easily tell which pieces of information come from which sources (another button can help you with that). On mobile (well, only the Google app for now), the cards appear below the summarized text. Below the query response, you can click a series of potential follow-up prompts, and under all of that is a standard Google search result, which can be littered with additional info boxes.

That extra stuff in an SGE result isn't quite as helpful as it should be, either. When it showed off SGE at I/O, Google also showed how the tool could auto-generate a buying guide on the fly, so I thought "where can I buy Tears of the Kingdom?" would be a softball question. But the result was a mess, littered with giant sponsored cards above the result, a confusing list of suggested retail stores that didn't actually take me to listings for the game, a Google Map pinpointing those retail stores, and, off to the right, three link cards where I could find my way to buying the game. A search for a used iPhone 13 Mini in red didn't go much better. I should have just scrolled down.

An increasingly cluttered search screen isn't exactly new territory for Google. What bothers me most about SGE is that its summaries take a few seconds to show up. As Google is generating an answer to your query, an empty colored box will appear, with loading bars fading in and out. When the search result finally loads, the colored box expands and Google's summary pops in, pushing the list of links down the page. I really don't like waiting for this; if I weren't testing specifically for this article, for many of my searches, I'd be immediately scrolling away from most generative AI responses so I could click on a link.

Confusingly, SGE broke down for me at weird times, even with some of the top-searched terms. The words YouTube, Amazon, Wordle, Twitter, and Roblox, for example, all returned an error message: "An AI-powered overview is not available for this search." Facebook, Gmail, Apple, and Netflix, on the other hand, all came back with perfectly fine SGE-formatted answers. But for the queries that were valid, the results took what felt like forever to show up.

When I was testing, the Gmail result showed up fastest, in about two seconds. Netflix's and Facebook's took about three and a half seconds, while Apple's took about five seconds. But for these single-word queries that failed, they all took more than five seconds to try and load before showing the error message, which was incredibly frustrating when I could have just scrolled down to click a link. The Tears of the Kingdom and iPhone 13 Mini queries both took more than six seconds to load, an internet eternity!

When I have to wait that long when I'm not specifically doing test queries, I just scroll down past the SGE results to get to something to read or click on. And when I have to tap my foot to wait for SGE answers that are often filled with cruft that I don't want to sift through, it's all just making the search experience worse for me.

Maybe I'm just stuck in my ways. I like to investigate sources for myself, and I'm generally distrustful of the things AI tools say. But as somebody who has wasted eons of his life looking at loading screens in streaming videos and video games, having to do so on Google Search is a deal-breaker for me. And when the results don't feel noticeably better than what I could get just by looking at what Google offered before, I don't think SGE is worth waiting for.

Read the original post:

Google's AI-powered search experience is way too slow - The Verge

Why Did the United States Invade Iraq? The Debate at 20 Years – Texas National Security Review

Continue reading here:
Why Did the United States Invade Iraq? The Debate at 20 Years - Texas National Security Review