New study: Countless AI experts don't know what to think on AI risk – Vox.com
In 2016, researchers at AI Impacts, a project that aims to improve understanding of advanced AI development, released a survey of machine learning researchers. Respondents were asked when they expected the development of AI systems comparable to humans along many dimensions, and whether they expected good or bad results from such an achievement.
The headline finding: The median respondent gave a 5 percent chance of human-level AI leading to outcomes that were "extremely bad, e.g. human extinction." Because that figure is a median, it means half of the researchers gave an estimate higher than 5 percent and half gave a lower one.
If true, that would be unprecedented. In what other field do moderate, middle-of-the-road researchers claim that the development of a more powerful technology, one they are directly working on, has a 5 percent chance of ending human life on Earth forever?
In 2016, before ChatGPT and AlphaFold, the result seemed much likelier to be a fluke than anything else. But in the eight years since then, as AI systems have gone from nearly useless to inconveniently good at writing college-level essays, and as companies have poured billions of dollars into efforts to build a true superintelligent AI system, what once seemed like a far-fetched possibility now seems to be on the horizon.
So when AI Impacts released their follow-up survey this week, the headline result, that between 37.8 percent and 51.4 percent of respondents gave at least a 10 percent chance of advanced AI leading to outcomes as bad as human extinction, didn't strike me as a fluke or a surveying error. It's probably an accurate reflection of where the field is at.
Their results challenge many of the prevailing narratives about AI extinction risk. The researchers surveyed don't subdivide neatly into doomsaying pessimists and insistent optimists. Many people who give high probabilities to bad outcomes, the survey found, also give high probabilities to good outcomes. And human extinction does seem to be a possibility that the majority of researchers take seriously: 57.8 percent of respondents said they thought extremely bad outcomes such as human extinction were at least 5 percent likely.
A visually striking figure from the paper shows how respondents think about what to expect if high-level machine intelligence is developed: Most assign meaningful probability to both extremely good outcomes and extremely bad ones.
As for what to do about it, the experts seem to disagree even more than they do about whether there's a problem in the first place.
The 2016 AI Impacts survey was immediately controversial. In 2016, barely anyone was talking about the risk of catastrophe from powerful AI. Could it really be that mainstream researchers rated it plausible? Had the researchers conducting the survey, who were themselves concerned about human extinction resulting from artificial intelligence, biased their results somehow?
The survey authors had systematically reached out to all researchers who published at the 2015 NIPS and ICML conferences (two of the premier venues for peer-reviewed research in machine learning) and managed to get responses from roughly a fifth of them. They asked a wide range of questions about progress in machine learning and got a wide range of answers: Really, aside from the eye-popping human extinction answers, the most notable result was how much ML experts disagreed with one another. (Which is hardly unusual in the sciences.)
But one could reasonably be skeptical. Maybe there were experts who simply hadn't thought very hard about their human extinction answer. And maybe the people who were most optimistic about AI hadn't bothered to answer the survey.
When AI Impacts reran the survey in 2022, again contacting thousands of researchers who published at top machine learning conferences, their results were about the same. The median probability of an "extremely bad (e.g., human extinction)" outcome was 5 percent.
That median obscures some fierce disagreement. In fact, 48 percent of respondents gave at least a 10 percent chance of an extremely bad outcome, while 25 percent gave a 0 percent chance. Responding to criticism of the 2016 survey, the team asked for more detail: How likely did respondents think it was that AI would lead to human extinction or similarly permanent and severe disempowerment of the human species? Depending on how they asked the question, this got results between 5 percent and 10 percent.
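A median is just the midpoint of the sorted answers, so it can sit at 5 percent even while a quarter of respondents answer 0 percent and nearly half answer 10 percent or more. A minimal Python sketch, using hypothetical numbers rather than the survey's raw data, shows how those three figures can coexist:

import statistics

# Hypothetical probability-of-extinction answers (in percent) from 100
# respondents; illustrative only, not the survey's actual responses.
answers = [0] * 25 + [2] * 13 + [5] * 14 + [10] * 28 + [25] * 12 + [50] * 8

print(statistics.median(answers))                    # 5.0  (the median answer)
print(sum(a >= 10 for a in answers) / len(answers))  # 0.48 (gave at least 10%)
print(sum(a == 0 for a in answers) / len(answers))   # 0.25 (gave exactly 0%)

The same arithmetic is why a single headline median can look moderate while hiding deep disagreement underneath it.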
In 2023, in order to reduce and measure the impact of framing effects (different answers based on how the question is phrased), many of the key questions on the survey were asked of different respondents with different framings. But again, the answers to the question about human extinction were broadly consistent in the 5-10 percent range no matter how the question was asked.
The fact that the 2022 and 2023 surveys found results so similar to the 2016 result makes it hard to believe that the 2016 result was a fluke. And while in 2016 critics could correctly complain that most ML researchers had not seriously considered the issue of existential risk, by 2023 the question of whether powerful AI systems will kill us all had gone mainstream. It's hard to imagine that many peer-reviewed machine learning researchers were answering a question they'd never considered before.
I think the most reasonable reading of this survey is that ML researchers, like the rest of us, are radically unsure about whether to expect the development of powerful AI systems to be an amazing thing for the world or a catastrophic one.
Nor do they agree on what to do about it. Responses varied enormously on questions about whether slowing down AI would make good outcomes for humanity more likely. And while a large majority of respondents wanted more resources and attention to go into AI safety research, many of the same respondents didn't think that working on AI alignment was unusually valuable compared to working on other open problems in machine learning.
In a situation with this much uncertainty, about the consequences of a technology like superintelligent AI, which doesn't yet exist, there's a natural tendency to want to look to experts for answers. That's reasonable. But in a case like AI, it's important to keep in mind that even the most well-regarded machine learning researchers disagree with one another and are radically uncertain about where all of us are headed.
A version of this story originally appeared in the Future Perfect newsletter.