MIT Professor Compares Ignoring AGI to Don’t Look Up – Futurism
MIT professor and AI researcher Max Tegmark is pretty stressed out about the potential impact of artificial general intelligence (AGI) on human society. In a new essay for Time, he rings the alarm bells, painting a pretty dire picture of a future determined by an AI that can outsmart us.
"Sadly, I now feel that we're living the movie 'Don't Look Up' for another existential threat: unaligned superintelligence," Tegmark wrote, comparing what he perceives to be a lackadaisical response to a growing AGI threat to director Adam McKay's popular climate change satire.
For those who haven't seen it, "Don't Look Up" is a fictional story about a team of astronomers who, after discovering that a species-destroying asteroid is hurtling towards Earth, set out to warn the rest of human society. But to their surprise and frustration, a massive chunk of humanity doesn't care.
The asteroid is one big metaphor for climate change. But Tegmark thinks that the story can apply to the risk of AGI as well.
"A recent survey showed that half of AI researchers give AI at least ten percent chance of causing human extinction," the researcher continued. "Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect that humanity would shift into high gear with a mission to steer AI in a safer direction than out-of-control superintelligence."
"Think again," he added, "instead, the most influential responses have been a combination of denial, mockery, and resignation so darkly comical that it's deserving of an Oscar."
In short, according to Tegmark, AGI is a very real threat, and human society isn't doing nearly enough to stop it or, at the very least, isn't ensuring that AGI will be properly aligned with human values and safety.
And just like in McKay's film, humanity has two choices: begin to make serious moves to counter the threat or, if things go the way of the film, watch our species perish.
Tegmark's claim is pretty provocative, especially considering that a lot of experts out there either don't agree that AGI will ever actually materialize, or argue that it'll take a very long time to get there, if ever. Tegmark does address this disconnect in his essay, although his argument arguably isn't the most convincing.
"I'm often told that AGI and superintelligence won't happen because it's impossible: human-level intelligence is something mysterious that can only exist in brains," Tegmark writes. "Such carbon chauvinism ignores a core insight from the AI revolution: that intelligence is all about information processing, and it doesn't matter whether the information is processed by carbon atoms in brains or by silicon atoms in computers."
Tegmark goes as far as to claim that superintelligence "isn't a long-term issue," but is even "more short-term than e.g. climate change and most people's retirement planning." To support his theory, the researcher pointed to a recent Microsoft study arguing that OpenAI's large language model GPT-4 is already showing "sparks" of AGI and a recent talk given by deep learning researcher Yoshua Bengio.
While the Microsoft study isn't peer-reviewed and arguably reads more like marketing material, Bengio's warning is much more compelling. His call to action is much more grounded in what we don't know about the machine learning programs that already exist, as opposed to making big claims about tech that does not yet exist.
To that end, the current crop of less sophisticated AIs already poses a threat, from misinformation-spreading synthetic content to the threat of AI-powered weaponry.
And the industry at large, as Tegmark further notes, hasn't exactly done an amazing job so far of ensuring slow and safe development: in his view, we shouldn't have taught AI how to code, connected it to the internet, or given it a public API.
Ultimately, if and when AGI might come to fruition is still unclear.
While there's certainly a financial incentive for the field to keep moving quickly, a lot of experts agree that we should slow down the development of more advanced AIs, regardless of whether AGI is around the corner or still light-years away.
And in the meantime, Tegmark argues that we should agree there's a very real threat in front of us before it's too late.
"Although humanity is racing toward a cliff, we're not there yet, and there's still time for us to slow down, change course and avoid falling off and instead enjoying the amazing benefits that safe, aligned AI has to offer," Tegmark writes. "This requires agreeing that the cliff actually exists and falling off of it benefits nobody."
"Just look up!" he added.
More on AI: Elon Musk Says He's Building a "Maximum Truth-Seeking AI"