Archive for the ‘Artificial Intelligence’ Category

Never Give Artificial Intelligence the Nuclear Codes – The Atlantic

No technology since the atomic bomb has inspired the apocalyptic imagination like artificial intelligence. Ever since ChatGPT began exhibiting glints of logical reasoning in November, the internet has been awash in doomsday scenarios. Many are self-consciously fanciful; they're meant to jar us into envisioning how badly things could go wrong if an emerging intelligence comes to understand the world, and its own goals, even a little differently from how its human creators do. One scenario, however, requires less imagination, because the first steps toward it are arguably already being taken: the gradual integration of AI into the most destructive technologies we possess today.


The world's major military powers have begun a race to wire AI into warfare. For the moment, that mostly means giving algorithms control over individual weapons or drone swarms. No one is inviting AI to formulate grand strategy, or join a meeting of the Joint Chiefs of Staff. But the same seductive logic that accelerated the nuclear arms race could, over a period of years, propel AI up the chain of command. How fast depends, in part, on how fast the technology advances, and it appears to be advancing quickly. How far depends on our foresight as humans, and on our ability to act with collective restraint.

Jacquelyn Schneider, the director of the Wargaming and Crisis Simulation Initiative at Stanford's Hoover Institution, recently told me about a game she devised in 2018. It models a fast-unfolding nuclear conflict and has been played 115 times by the kinds of people whose responses are of supreme interest: former heads of state, foreign ministers, senior NATO officers. Because nuclear brinkmanship has thankfully been historically rare, Schneider's game gives us one of the clearest glimpses into the decisions that people might make in situations with the highest imaginable human stakes.

It goes something like this: The U.S. president and his Cabinet have just been hustled into the basement of the West Wing to receive a dire briefing. A territorial conflict has turned hot, and the enemy is mulling a nuclear first strike against the United States. The atmosphere in the Situation Room is charged. The hawks advise immediate preparations for a retaliatory strike, but the Cabinet soon learns of a disturbing wrinkle. The enemy has developed a new cyberweapon, and fresh intelligence suggests that it can penetrate the communication system that connects the president to his nuclear forces. Any launch commands that he sends may not reach the officers responsible for carrying them out.

There are no good options in this scenario. Some players delegate launch authority to officers at missile sites, who must make their own judgments about whether a nuclear counterstrike is warranted, a scary proposition. But Schneider told me she was most unsettled by a different strategy, pursued with surprising regularity. In many games, she said, players who feared a total breakdown of command and control wanted to automate their nuclear launch capability completely. They advocated the empowerment of algorithms to determine when a nuclear counterstrike was appropriate. AI alone would decide whether to enter into a nuclear exchange.

Schneider's game is, by design, short and stressful. Players' automation directives were not typically spelled out with an engineer's precision (how exactly would this be done? Could any automated system even be put in place before the culmination of the crisis?), but the impulse is telling nonetheless. "There is a wishful thinking about this technology," Schneider said, "and my concern is that there will be this desire to use AI to decrease uncertainty by [leaders] who don't understand the uncertainty of the algorithms themselves."

AI offers an illusion of cool exactitude, especially in comparison to error-prone, potentially unstable humans. But today's most advanced AIs are black boxes; we don't entirely understand how they work. In complex, high-stakes adversarial situations, an AI's notions about what constitutes winning may be impenetrable, if not altogether alien. At the deepest, most important level, an AI may not understand what Ronald Reagan and Mikhail Gorbachev meant when they said, "A nuclear war cannot be won."

There is precedent, of course, for the automation of Armageddon. After the United States and the Soviet Union emerged as victors of the Second World War, they looked set to take up arms in a third, a fate they avoided only by building an infrastructure of mutual assured destruction. This system rests on an elegant and terrifying symmetry, but it goes wobbly each time either side makes a new technological advance. In the latter decades of the Cold War, Soviet leaders worried that their ability to counter an American nuclear strike on Moscow could be compromised, so they developed a dead hand program.

It was so simple, it barely qualified as algorithmic: Once activated during a nuclear crisis, if a command-and-control center outside Moscow stopped receiving communications from the Kremlin, a special machine would inquire into the atmospheric conditions above the capital. If it detected telltale blinding flashes and surges in radioactivity, all the remaining Soviet missiles would be launched at the United States. Russia is cagey about this system, but in 2011, the commander of the country's Strategic Missile Forces said it still exists and is on combat duty. In 2018, a former leader of the missile forces said it has even been improved.
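The logic described above is simple enough to sketch in a few lines of code. The following is a hypothetical illustration only: the function name, inputs, and outputs are invented for this sketch and do not reflect the design of the real Soviet or Russian system.

```python
# Hypothetical sketch of the "dead hand" decision logic described above.
# Every input and threshold here is invented for illustration.

def dead_hand_decision(activated: bool,
                       kremlin_link_alive: bool,
                       flash_detected: bool,
                       radiation_surge: bool) -> str:
    """Return the action the automated backstop would take."""
    if not activated:
        return "standby"        # only armed during a declared crisis
    if kremlin_link_alive:
        return "standby"        # leadership can still issue orders itself
    if flash_detected and radiation_surge:
        return "launch_all"     # the terrifying final branch
    return "standby"

# Crisis declared, Moscow silent, but no blast detected: hold.
print(dead_hand_decision(True, False, False, False))   # standby
# Crisis declared, Moscow silent, blast signatures detected: launch.
print(dead_hand_decision(True, False, True, True))     # launch_all
```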

In 2019, Curtis McGiffin, an associate dean at the Air Force Institute of Technology, and Adam Lowther, then the director of research and education at the Louisiana Tech Research Institute, published an article arguing that America should develop its own nuclear dead hand. New technologies have shrunk the period of time between the moment an incoming attack is detected and the last moment that a president can order a retaliatory salvo. If this decision window shrinks any further, America's counterstrike ability could be compromised. Their solution: Backstop America's nuclear deterrent with an AI that can make launch decisions at the speed of computation.


McGiffin and Lowther are right about the decision window. During the early Cold War, bomber planes like the one used over Hiroshima were the preferred mode of first strike. These planes took a long time to fly between the Soviet Union and the United States, and because they were piloted by human beings, they could be recalled. Americans built an arc of radar stations across the Canadian High Arctic, Greenland, and Iceland so that the president would have an hour or more of warning before the first mushroom cloud bloomed over an American city. That's enough time to communicate with the Kremlin, enough time to try to shoot the bombers down, and, failing that, enough time to order a full-scale response.

The intercontinental ballistic missile (ICBM), first deployed by the Soviet Union in 1958, shortened that window, and within a decade, hundreds of them were slotted into the bedrock of North America and Eurasia. Any one of them can fly across the Northern Hemisphere in less than 30 minutes. To preserve as many of those minutes as possible, both superpowers sent up fleets of satellites that could spot the unique infrared signature of a missile launch in order to grok its precise parabolic path and target.

After nuclear-armed submarines were refined in the '70s, hundreds more missiles topped with warheads began to roam the world's oceans, nearer to their targets, cutting the decision window in half, to 15 minutes or perhaps less. (Imagine one bobbing up along the Delaware coast, just 180 miles from the White House.) Even if the major nuclear powers never successfully develop new nuclear-missile technology, 15 minutes or less is frighteningly little time for a considered human response. But they are working to develop new missile technology, including hypersonic missiles, which Russia is already using in Ukraine to strike quickly and evade missile defenses. Both Russia and China want hypersonic missiles to eventually carry nuclear warheads. These technologies could potentially cut the window in half again.

These few remaining minutes would go quickly, especially if the Pentagon couldn't immediately conclude that a missile was headed for the White House. The president may need to be roused from sleep; launch codes could be fumbled. A decapitation strike could be completed with no retaliatory salvo yet ordered. Somewhere outside D.C., command and control would scramble to find the next civilian leader down the chain, as a more comprehensive volley of missiles rained down upon America's missile silos, its military bases, and its major nodes of infrastructure.

A first strike of this sort would still be mad to attempt, because some American nuclear forces would most likely survive the first wave, especially submarines. But as we have learned again in recent years, reckless people sometimes lead nuclear powers. Even if the narrowing of the decision window makes decapitation attacks only marginally more tempting, countries may wish to backstop their deterrent with a dead hand.

The United States is not yet one of those countries. After McGiffin and Lowther's article was published, Lieutenant General John Shanahan, the director of the Pentagon's Joint Artificial Intelligence Center, was asked about automation and nuclear weapons. Shanahan said that although he could think of no stronger proponent for AI in the military than himself, nuclear command and control is "the one area I pause."

The Pentagon has otherwise been working fast to automate America's war machine. As of 2021, according to a report that year, it had at least 685 ongoing AI projects, and since then it has continually sought increased AI funding. Not all of the projects are known, but a partial vision of America's automated forces is coming into view. The tanks that lead U.S. ground forces in the future will scan for threats on their own so that operators can simply touch highlighted spots on a screen to wipe out potential attackers. In the F-16s that streak overhead, pilots will be joined in the cockpit by algorithms that handle complex dogfighting maneuvers. Pilots will be free to focus on firing weapons and coordinating with swarms of autonomous drones.

In January, the Pentagon updated its previously murky policy to clarify that it will allow the development of AI weapons that can make kill shots on their own. This capability alone raises significant moral questions, but even these AIs will be operating, essentially, as troops. The role of AI in battlefield command and the strategic functioning of the U.S. military is largely limited to intelligence algorithms, which simultaneously distill data streams gathered from hundreds of sensors: underwater microphones, ground radar stations, spy satellites. AI won't be asked to control troop movements or launch coordinated attacks in the very near future. The pace and complexity of warfare may increase, however, in part because of AI weapons. If America's generals find themselves overmatched by Chinese AIs that can comprehend dynamic, million-variable strategic situations for weeks on end, without so much as a nap (or if the Pentagon fears that could happen), AIs might be placed in higher decision-making roles.


The precise makeup of America's nuclear command and control is classified, but AI's awesome processing powers are already being put to good use in the country's early-alert systems. Even here, automation presents serious risks. In 1983, a Soviet early-alert system mistook glittering clouds above the Midwest for launched missiles. Catastrophe was averted only because Lieutenant Colonel Stanislav Petrov, a man for whom statues should be raised, felt in his gut that it was a false alarm. Today's computer-vision algorithms are more sophisticated, but their workings are often mysterious. In 2018, AI researchers demonstrated that tiny perturbations in images of animals could fool neural networks into misclassifying a panda as a gibbon. If AIs encounter novel atmospheric phenomena that weren't included in their training data, they may hallucinate incoming attacks.
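The fragility behind the panda-to-gibbon result can be sketched with a standard fast-gradient-sign perturbation. The toy network, random input, and epsilon value below are assumptions made for illustration; they are not the models or data from the cited research.

```python
# Minimal fast-gradient-sign (FGSM-style) perturbation against a toy
# image classifier. The network and image are random stand-ins; the
# point is only that a small, structured nudge to the pixels can change
# the predicted class.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(                              # toy classifier
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),                                # two classes, e.g. panda vs. gibbon
)
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)
original_class = model(image).argmax(dim=1)

# Gradient of the loss with respect to the *input*, not the weights.
loss = nn.functional.cross_entropy(model(image), original_class)
loss.backward()

epsilon = 0.05                                      # perturbation budget (assumed)
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("original prediction: ", original_class.item())
print("perturbed prediction:", model(adversarial).argmax(dim=1).item())
# Against trained networks, carefully tuned versions of this attack
# produce the panda-to-gibbon misclassifications described above.
```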

But put hallucinations aside for a moment. As large language models continue to improve, they may eventually be asked to generate lucid text narratives of fast-unfolding crises in real time, up to and including nuclear crises. Once these narratives move beyond simple statements about the number and location of approaching missiles, they will become more like the statements of advisers, engaged in interpretation and persuasion. AIs may prove excellent advisers: dispassionate, hyperinformed, always reliable. We should hope so, because even if they are never asked to recommend responses, their stylistic shadings would undoubtedly influence a president.

Given wide enough leeway over conventional warfare, an AI with no nuclear-weapons authority could nonetheless pursue a gambit that inadvertently escalates a conflict so far and so fast that a panicked nuclear launch follows. Or it could purposely engineer battlefield situations that lead to a launch, if it thinks the use of nuclear weapons would accomplish its assigned goals. An AI commander will be creative and unpredictable: A simple one designed by OpenAI beat human players at a modified version of Dota 2, a battle simulation game, with strategies that they'd never considered. (Notably, it proved willing to sacrifice its own fighters.)

These more far-flung scenarios are not imminent. AI is viewed with suspicion today, and if its expanding use leads to a stock-market crash or some other crisis, these possibilities will recede, at least for a time. But suppose that, after some early hiccups, AI instead performs well for a decade or several decades. With that track record, it could perhaps be allowed to operate nuclear command and control in a moment of crisis, as envisioned by Schneider's war-game participants. At some point, a president might preload command-and-control algorithms on his first day in office, perhaps even giving an AI license to improvise, based on its own impressions of an unfolding attack.

Much would depend on how an AI understands its goals in the context of a nuclear standoff. Researchers who have trained AI to play various games have repeatedly encountered a version of this problem: An AI's sense of what constitutes victory can be elusive. In some games, AIs have performed in a predictable manner until some small change in their environment caused them to suddenly shift their strategy. For instance, an AI was taught to play a game where players look for keys to unlock treasure chests and secure a reward. It did just that until the engineers tweaked the game environment, so that there were more keys than chests, after which it started hoarding all the keys, even though many were useless, and only sometimes trying to unlock the chests. Any innovations in nuclear weapons, or defenses, could lead an AI to a similarly dramatic pivot.
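The keys-and-chests pivot is easy to reproduce in miniature. Below is a minimal sketch with a made-up environment and a hard-coded policy standing in for the trained agent; none of it comes from the original experiment.

```python
# Toy illustration of the keys-and-chests pivot described above. A policy
# that is sensible when keys are scarce ("grab every key first, then open
# chests") is applied unchanged to a world where keys outnumber chests,
# and most of its effort goes into hoarding keys that can never be used.
# The environment and numbers are invented for illustration.

def learned_policy(keys_left: int, chests_left: int, keys_held: int) -> str:
    # Heuristic internalized during training, where every key was useful.
    if keys_left > 0:
        return "grab_key"
    if chests_left > 0 and keys_held > 0:
        return "open_chest"
    return "wait"

def run(keys: int, chests: int, steps: int = 30):
    keys_held = opened = 0
    for _ in range(steps):
        action = learned_policy(keys, chests, keys_held)
        if action == "grab_key":
            keys -= 1; keys_held += 1
        elif action == "open_chest":
            chests -= 1; keys_held -= 1; opened += 1
    return {"keys_hoarded": keys_held, "chests_opened": opened}

print(run(keys=3, chests=5))    # training-like regime: every key gets used
print(run(keys=20, chests=5))   # shifted regime: the agent hoards useless keys
```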

Any country that inserts AI into its command and control will motivate others to follow suit, if only to maintain a credible deterrent. Michael Klare, a peace-and-world-security-studies professor at Hampshire College, has warned that if multiple countries automate launch decisions, there could be a "flash war" analogous to a Wall Street flash crash. Imagine that an American AI misinterprets acoustic surveillance of submarines in the South China Sea as movements presaging a nuclear attack. Its counterstrike preparations would be noticed by China's own AI, which would actually begin to ready its launch platforms, setting off a series of escalations that would culminate in a major nuclear exchange.

In the early '90s, during a moment of relative peace, George H. W. Bush and Mikhail Gorbachev realized that competitive weapons development would lead to endlessly proliferating nuclear warheads. To their great credit, they refused to submit to this arms-race dynamic. They instead signed the Strategic Arms Reduction Treaty, the first in an extraordinary sequence of agreements that shrank the two countries' arsenals to less than a quarter of their previous size.

History has since resumed. Some of those treaties expired. Others were diluted as relations between the U.S. and Russia cooled. The two countries are now closer to outright war than they have been in generations. On February 21 of this year, less than 24 hours after President Joe Biden strolled the streets of Kyiv, Russian President Vladimir Putin said that his country would suspend its participation in New START, the last arsenal-limiting treaty that remains in effect. Meanwhile, China now likely has enough missiles to destroy every major American city, and its generals have reportedly grown fonder of their arsenal as they have seen the leverage that nuclear weapons have afforded Russia during the Ukraine war. Mutual assured destruction is now a three-body problem, and every party to it is pursuing technologies that could destabilize its logic.


The next moment of relative peace could be a long way away, but if it comes again, we should draw inspiration from Bush and Gorbachev. Their disarmament treaties were ingenious because they represented a recovery of human agency, as would a global agreement to forever keep AI out of nuclear command and control. Some of the scenarios set forth here may sound quite distant, but that's more reason to think about how we can avoid them, before AI reels off an impressive run of battlefield successes and its use becomes too tempting.

A treaty can always be broken, and compliance with this one would be particularly difficult to verify, because AI development doesn't require conspicuous missile silos or uranium-enrichment facilities. But a treaty can help establish a strong taboo, and in this realm a strongly held taboo may be the best we can hope for. We cannot encrust the Earth's surface with automated nuclear arsenals that put us one glitch away from apocalypse. If errors are to deliver us into nuclear war, let them be our errors. To cede the gravest of all decisions to the dynamics of technology would be the ultimate abdication of human choice.

This article appears in the June 2023 print edition.

Here is the original post:
Never Give Artificial Intelligence the Nuclear Codes - The Atlantic

ASCRS 2023: Artificial intelligence application to ophthalmology – Ophthalmology Times

Alvin Liu, MD, sat down with Sheryl Stevenson, Group Editorial Director, Ophthalmology Times, to discuss his presentation on deep learning and 3D OCT at the ASCRS annual meeting in San Diego.

Editor's note: This transcript has been edited for clarity.

We're joined by Dr. Alvin Liu, who's going to be presenting at this year's ASCRS. Welcome to you. Tell us a little bit more about your presentation regarding deep learning and 3D OCT.

Sheryl, thank you so much for having me speak today. I'm happy to share results. So let me introduce myself a little bit more. My name is Alvin Liu. I'm a retina specialist at the Wilmer Eye Institute at Johns Hopkins University.

My research focuses on the application of artificial intelligence to ophthalmology. Specifically, I'm also the director of the Wilmer Precision Ophthalmology Center of Excellence, and the work that I will be presenting at ASCRS this year is directly related to our center of excellence.

The overall premise is that we know macular degeneration is a leading cause of central vision loss in the elderly in the US and around the world. Most patients with AMD lose vision because of the wet form of the disease. For wet AMD, we know that earlier, timely treatment, with better presenting visual acuity, predicts better final visual acuity. So it is imperative for us to figure out which patients are at high risk of imminent conversion to wet AMD.

Currently, there are ways for us to provide an average estimate of conversion or progression to advanced AMD using the AREDS criteria. However, the AREDS criteria can only provide an average risk estimate over a five-year period. The model we have developed can be used as a tool that provides information in a more reasonable, more meaningful timeframe: six months. We started out by asking ourselves: can we use deep learning, the cutting-edge artificial intelligence technique for medical image analysis, to analyze OCT images to predict imminent conversion from dry to wet AMD within six months?

To do that, we collected a dataset of over 2,500 patients with AMD and over 30,000 OCT images. We trained a model that is able to produce robust predictions of when an eye is at high risk of converting to wet AMD within six months, using an OCT image alone. In addition, we ran different experiments to see what happens if we also feed this model additional information in the form of readily obtainable clinical variables, such as the patient's age, sex, visual acuity, or fellow-eye status. We were able to demonstrate that, in the prediction of imminent conversion to wet AMD in the first eye of patients (meaning patients who had never converted to wet AMD in either eye), this additional tabular clinical information was also helpful.
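For readers curious how an OCT image model can be combined with a handful of tabular clinical variables, here is a rough sketch in PyTorch. The backbone, layer sizes, and fusion-by-concatenation design are assumptions made for illustration; the interview does not describe the actual model's internals.

```python
# Hedged sketch of an image-plus-clinical-variables model for predicting
# conversion to wet AMD within six months. The architecture and the four
# clinical inputs (age, sex, visual acuity, fellow-eye status) are
# illustrative assumptions, not the published Wilmer model.
import torch
import torch.nn as nn

class OCTConversionModel(nn.Module):
    def __init__(self, n_clinical: int = 4):
        super().__init__()
        self.image_encoder = nn.Sequential(            # stand-in CNN backbone
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # -> 32 image features
        )
        self.clinical_encoder = nn.Sequential(
            nn.Linear(n_clinical, 16), nn.ReLU(),       # -> 16 tabular features
        )
        self.head = nn.Linear(32 + 16, 1)               # fused features -> risk logit

    def forward(self, oct_image, clinical):
        fused = torch.cat([self.image_encoder(oct_image),
                           self.clinical_encoder(clinical)], dim=1)
        return torch.sigmoid(self.head(fused))          # P(conversion within 6 months)

# Dummy example: one grayscale OCT B-scan plus made-up clinical values.
model = OCTConversionModel()
scan = torch.rand(1, 1, 224, 224)
clinical = torch.tensor([[78.0, 1.0, 0.3, 0.0]])        # age, sex, acuity, fellow-eye
print(model(scan, clinical))                            # untrained, so output is arbitrary
```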

Continued here:
ASCRS 2023: Artificial intelligence application to ophthalmology - Ophthalmology Times

UK and US intervene amid AI industrys rapid advances – The Guardian

Artificial intelligence (AI)

Competition and Markets Authority sends pre-warning to sector, while White House announces measures to address risks

The UK and US have intervened in the race to develop ever more powerful artificial intelligence technology, as the British competition watchdog launched a review of the sector and the White House advised tech firms of their fundamental responsibility to develop safe products.

Regulators are under mounting pressure to intervene, as the emergence of AI-powered language generators such as ChatGPT raises concerns about the potential spread of misinformation, a rise in fraud and the impact on the jobs market, with Elon Musk among nearly 30,000 signatories to a letter published last month urging a pause in significant projects.

The UK Competition and Markets Authority (CMA) said on Thursday it would look at the underlying systems or foundation models behind AI tools. The initial review, described by one legal expert as a pre-warning to the sector, will publish its findings in September.

On the same day, the US government announced measures to address the risks in AI development, as Kamala Harris, the vice-president, met chief executives at the forefront of the industry's rapid advances. In a statement, the White House said firms developing the technology had a "fundamental responsibility to make sure their products are safe before they are deployed or made public".

The meeting capped a week during which a succession of scientists and business leaders issued warnings about the speed at which the technology could disrupt established industries. On Monday, Geoffrey Hinton, the "godfather of AI", quit Google in order to speak more freely about the technology's dangers, while the UK government's outgoing scientific adviser, Sir Patrick Vallance, urged ministers to get ahead of the profound social and economic changes that could be triggered by AI, saying the impact on jobs could be as big as that of the Industrial Revolution.

Sarah Cardell said AI had the potential to transform the way businesses competed, but that consumers must be protected.

The CMA chief executive said: "AI has burst into the public consciousness over the past few months but has been on our radar for some time. It's crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers while people remain protected from issues like false or misleading information."

ChatGPT and Google's rival Bard service are prone to delivering false information in response to users' prompts, while concerns have been raised about AI-generated voice scams. The anti-misinformation outfit NewsGuard said this week that chatbots pretending to be journalists were running almost 50 AI-generated content farms. Last month, a song featuring fake AI-generated vocals purporting to be Drake and the Weeknd was pulled from streaming services.

The CMA review will look at how the markets for foundation models could evolve, what opportunities and risks there are for consumers and competition, and formulate guiding principles to support competition and protect consumers.

The leading players in AI are Microsoft, ChatGPT developer OpenAI (in which Microsoft is an investor), and Google parent Alphabet, which owns a world-leading AI business in UK-based DeepMind, while leading AI startups include Anthropic and Stability AI, the British company behind Stable Diffusion.

Alex Haffner, competition partner at the UK law firm Fladgate, said: "Given the direction of regulatory travel at the moment and the fact the CMA is deciding to dedicate resource to this area, its announcement must be seen as some form of pre-warning about aggressive development of AI programmes without due scrutiny being applied."

In the US, Harris met the chief executives of OpenAI, Alphabet and Microsoft at the White House, and outlined measures to address the risks of unchecked AI development. In a statement following the meeting, Harris said she told the executives that the private sector has "an ethical, moral, and legal responsibility to ensure the safety and security of their products".

The administration said it would invest $140m (£111m) in seven new national AI research institutes, to pursue artificial intelligence advances that are "ethical, trustworthy, responsible, and serve the public good". AI development is dominated by the private sector, with the tech industry producing 32 significant machine-learning models last year, compared with three produced by academia.

Leading AI developers have also agreed to their systems being publicly evaluated at this year's Defcon 31 cybersecurity conference. Companies that have agreed to participate include OpenAI, Google, Microsoft and Stability AI.

"This independent exercise will provide critical information to researchers and the public about the impacts of these models," said the White House.

Robert Weissman, the president of the consumer rights non-profit Public Citizen, praised the White House's announcement as a useful step but said more aggressive action is needed. Weissman said this should include a moratorium on the deployment of new generative AI technologies, the term for tools such as ChatGPT and Stable Diffusion.

"At this point, Big Tech companies need to be saved from themselves. The companies and their top AI developers are well aware of the risks posed by generative AI. But they are in a competitive arms race and each believes themselves unable to slow down," he said.

The EU was also told on Thursday that it must protect grassroots AI research or risk handing control of the technology's development to US firms.

In an open letter coordinated by the German research group Laion (or Large-scale AI Open Network), the European parliament was told that one-size-fits-all rules risked eliminating open research and development.

"Rules that require a researcher or developer to monitor or control downstream use could make it impossible to release open-source AI in Europe," which would "entrench large firms" and hamper efforts to improve transparency, reduce competition, limit academic freedom, and drive investment in AI overseas, the letter said.

"Europe cannot afford to lose AI sovereignty. Eliminating open-source R&D will leave the European scientific community and economy critically dependent on a handful of foreign and proprietary firms for essential AI infrastructure."

The largest AI efforts, by companies such as OpenAI and Google, are heavily controlled by their creators. It is impossible to download the model behind ChatGPT, for instance, and the paid-for access that OpenAI provides to customers comes with a number of legal and technical restrictions on how it can be used. By contrast, open-source efforts involve creating a model and then releasing it for anyone to use, improve or adapt as they see fit.

"We are working on open-source AI because we think that sort of AI will be more safe, more accessible and more democratic," said Christoph Schuhmann, the organisational lead at Laion.


See the original post here:
UK and US intervene amid AI industrys rapid advances - The Guardian

Did Stephen Hawking Warn Artificial Intelligence Could Spell the … – Snopes.com

Image via Sion Touhig/Getty Images

On May 1, 2023, the New York Post ran a story saying that British theoretical physicist Stephen Hawking had warned that the development of artificial intelligence (AI) could mean "the end of the human race."

Hawking, who died in 2018, had indeed said so in an interview with the BBC in 2014.

"The development of full artificial intelligence could spell the end of the human race," Hawking said during the interview. "Once humans develop artificial intelligence, it would take off on its own and re-design itself at an ever-increasing rate."

Another story, from CNBC in 2017, relayed a similar warning about AI from the physicist. It came from Hawking's speech at the Web Summit technology conference in Lisbon, Portugal, according to CNBC. Hawking reportedly said:

Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.

Such warnings became more common in 2023. In March, tech leaders, scientists, and entrepreneurs warned about the dangers posed by AI creations, like ChatGPT, to humanity.

"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs," they wrote in an open letter published by the Future of Life Institute, a nonprofit. The letter garnered over 27,500 signatures as of this writing in early May 2023. Among the signatories were CEO of SpaceX, Tesla, and Twitter Elon Musk, Apple co-founder Steve Wozniak, and Pinterest co-founder Evan Sharp.

In addition, Snopes and other fact-checking organizations noted a dramatic uptick in misinformation conveyed on social media via AI-generated content in 2022 and 2023.

Then, on May 2, Geoffrey Hinton, a longtime researcher at Google, quit the technology behemoth to sound the alarm about AI products. Hinton, known as the "Godfather of AI," told MIT Technology Review that chatbots like GPT-4, built by the AI lab OpenAI, are on track to be a lot smarter than he thought they'd be.

Given that Hawking was indeed documented as warning about the potential for AI to "spell the end of the human race," we rate this quote as correctly attributed to him.

"Geoffrey Hinton Tells Us Why He's Now Scared of the Tech He Helped Build." MIT Technology Review, https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/. Accessed 3 May 2023.

"'Godfather of AI' Leaves Google, Warns of Tech's Dangers." AP NEWS, 2 May 2023, https://apnews.com/article/ai-godfather-google-geoffery-hinton-fa98c6a6fddab1d7c27560f6fcbad0ad.

"Pause Giant AI Experiments: An Open Letter." Future of Life Institute, https://futureoflife.org/open-letter/pause-giant-ai-experiments/. Accessed 3 May 2023.

Stephen Hawking Says AI Could Be "worst Event" in Civilization. 6 Nov. 2017, https://web.archive.org/web/20171106191334/https://www.cnbc.com/2017/11/06/stephen-hawking-ai-could-be-worst-event-in-civilization.html.

Stephen Hawking Warned AI Could Mean the "End of the Human Race." 3 May 2023, https://web.archive.org/web/20230503162420/https://nypost.com/2023/05/01/stephen-hawking-warned-ai-could-mean-the-end-of-the-human-race/.

"Stephen Hawking Warns Artificial Intelligence Could End Mankind." BBC News, 2 Dec. 2014. http://www.bbc.com, https://www.bbc.com/news/technology-30290540.

Damakant Jayshi is a fact-checker for Snopes, based in Atlanta.

Read this article:
Did Stephen Hawking Warn Artificial Intelligence Could Spell the ... - Snopes.com

Artificial Intelligence and Jobs: Who's at Risk – Barron's

Since the release of ChatGPT, companies have scrambled to understand how generative artificial intelligence will affect jobs. This past week, IBM CEO Arvind Krishna said the company will pause hiring for roles that could be replaced by AI, affecting as much as 30% of back-office jobs over five years. And Chegg, which provides homework help and online tutoring, saw its stock lose half of its value after warning of slower growth as students turned to ChatGPT.

A recent study by a team of professors from Princeton University, the University of Pennsylvania, and New York University analyzed how generative AI relates to 52 human abilities. The researchers then calculated AI exposure for occupations. (Exposure doesn't necessarily mean job loss.) Among high-exposure jobs, a few are obvious: telemarketers, HR specialists, loan officers, and law clerks. More surprising: Eight of the top 10 are humanities professors.

In a survey from customer-service software firm Tidio, 64% of respondents thought chatbots, robots, or AI can replace teachers, though many believe that empathy and listening skills may be tough to replicate. A survey from the Walton Family Foundation found that within two months of ChatGPT's introduction, 51% of teachers tapped it for lesson planning and creative ideas. Some 40% said they used it at least once a week, compared with 22% of students.

AI isn't just knocking on the door; it's already inside. Language-learning app Duolingo has been using AI since 2020. Even Chegg unveiled an AI learning service called CheggMate using OpenAI's GPT-4. Still, Morgan Stanley analyst Josh Baer wrote that it's highly unlikely that CheggMate can insulate the company from AI.

Write to Evie Liu at evie.liu@barrons.com


Devon Energy, KKR, McKesson, PayPal Holdings, and Tyson Foods release earnings.

Airbnb, Air Products & Chemicals, Apollo Global Management, Duke Energy, Electronic Arts, Occidental Petroleum, and TransDigm Group report quarterly results.

The National Federation of Independent Business releases its Small Business Optimism Index for April. Consensus estimate is for a 90 reading, roughly even with the March figure. The index has had 15 consecutive readings below the 49-year average of 98 as inflation and a tight labor market remain top of mind for small-business owners.

Brookfield Asset Management, Roblox, Toyota Motor, Trade Desk, and Walt Disney release earnings.

The Bureau of Labor Statistics releases the consumer price index for April. Economists forecast a 5% year-over-year increase, matching the March data. The core CPI, which excludes volatile food and energy prices, is expected to rise 5.4%, two-tenths of a percentage point less than previously. Both indexes are well below their peaks from last year but also much higher than the Federal Reserve's 2% target.

Honda Motor, JD.com, PerkinElmer, and Tapestry hold conference calls to discuss quarterly results.


The Bank of England announces its monetary-policy decision. The central bank is widely expected to raise its bank rate by a quarter of a percentage point, to 4.5%. The United Kingdom's CPI rose 10.1% in March from the year prior, making it the only Western European country with a double-digit rate of inflation.


The Department of Labor reports initial jobless claims for the week ending on May 6. Claims averaged 239,250 in April, returning to historical averages after a prolonged period of being below trend, signaling a loosening of a very tight labor market.

The BLS releases the producer price index for April. The consensus call is for the PPI to increase 2.4% and the core PPI to rise 3.3%. This compares with gains of 2.7% and 3.4%, respectively, in March. The PPI and core PPI are at their lowest levels in about two years.

The University of Michigan releases its Consumer Sentiment Index for May. Economists forecast a dour 62.6 reading, about one point lower than in April. Consumers' year-ahead inflation expectations surprisingly jumped by a percentage point in April to 4.6%.

The rest is here:
Artificial Intelligence and Jobs: Whos at Risk - Barron's