Opinion | We Need a Manhattan Project for AI Safety – POLITICO
At the heart of the threat is what's called the alignment problem: the idea that a powerful computer brain might no longer be aligned with the best interests of human beings. Unlike fairness or job loss, alignment has no obvious policy solutions. It's a highly technical problem that some experts fear may never be solvable. But the government does have a role to play in confronting massive, uncertain problems like this. In fact, it may be the most important role it can play on AI: to fund a research project on the scale the problem deserves.
There's a successful precedent for this: The Manhattan Project was one of the most ambitious technological undertakings of the 20th century. At its peak, 129,000 people worked on the project at sites across the United States and Canada. They were trying to solve a problem critical to national security, and one nobody was sure could be solved: how to harness nuclear power to build a weapon.
Some eight decades later, the need has arisen for a government research project that matches the original Manhattan Project's scale and urgency. In some ways, the goal is exactly the opposite of the first Manhattan Project, which opened the door to previously unimaginable destruction. This time, the goal must be to prevent unimaginable destruction, as well as destruction that is merely difficult to anticipate.
Don't just take it from me. Expert opinion differs only over whether the risks from AI are unprecedentedly large or literally existential.
Even the scientists who laid the groundwork for today's AI models are sounding the alarm. Most recently, the "Godfather of AI" himself, Geoffrey Hinton, quit his post at Google to call attention to the risks AI poses to humanity.
That may sound like science fiction, but it's a reality that is rushing toward us faster than almost anyone anticipated. Today, progress in AI is measured in days and weeks, not months and years.
As recently as two years ago, the forecasting platform Metaculus put the likely arrival of "weak" artificial general intelligence (a unified system that can compete with the typical college-educated human on most tasks) sometime around the year 2040.
Now forecasters anticipate AGI will arrive in 2026. "Strong" AGIs with robotic capabilities that match or surpass most humans are forecast to emerge just five years later. With the ability to automate AI research itself, the next milestone would be a superintelligence with unfathomable power.
Don't count on the normal channels of government to save us from that.
Policymakers cannot afford a drawn-out interagency process or notice-and-comment period to prepare for what's coming. On the contrary, making the most of AI's tremendous upside while heading off catastrophe will require our government to stop taking a backseat and act with a nimbleness not seen in generations. Hence the need for a new Manhattan Project.
A "Manhattan Project for X" is one of those clichés of American politics that seldom merits the hype. AI is the rare exception. Ensuring AGI develops safely and for the betterment of humanity will require public investment in focused research, high levels of public and private coordination and a leader with the tenacity of General Leslie Groves, the project's infamous overseer, whose aggressive, top-down leadership style mirrored that of a modern tech CEO.
Ensuring AGI develops safely and for the betterment of humanity will require a leader with the tenacity of General Leslie Groves, Hammond writes. | AP Photo
I'm not the only person to suggest it: AI thinker Gary Marcus and the legendary computer scientist Judea Pearl recently endorsed the idea as well, at least informally. But what exactly would that look like in practice?
Fortunately, we already know quite a bit about the problem and can sketch out the tools we need to tackle it.
One issue is that large neural networks like GPT-4, the generative AIs causing the most concern right now, are mostly a black box, with reasoning processes we can't yet fully understand or control. But with the right setup, researchers can in principle run experiments that uncover particular circuits hidden within the billions of connections. This is known as mechanistic interpretability research, and it's the closest thing we have to neuroscience for artificial brains.
Unfortunately, the field is still young and far behind in its understanding of how current models do what they do. The ability to run experiments on large, unrestricted models is mostly reserved for researchers within the major AI companies. The dearth of opportunities in mechanistic interpretability and alignment research is a classic public-goods problem. Training large AI models costs millions of dollars in cloud computing services, especially if one iterates through different configurations. The private AI labs are thus hesitant to burn capital on training models with no commercial purpose. Government-funded data centers, in contrast, would be under no obligation to return value to shareholders and could provide free computing resources to thousands of potential researchers with ideas to contribute.
The government could also ensure research proceeds in relative safety and provide a central connection for experts to share their knowledge.
With all that in mind, a Manhattan Project for AI safety should have at least five core functions:
1. It would serve a coordination role, pulling together the leadership of the top AI companies (OpenAI and its chief competitors, Anthropic and Google DeepMind) to disclose their plans in confidence, develop shared safety protocols and forestall the present arms-race dynamic.
2. It would draw on their talent and expertise to accelerate the construction of government-owned data centers managed under the highest security, including an "air gap," a deliberate disconnection from outside networks, ensuring that future, more powerful AIs are unable to escape onto the open internet. Such facilities would likely be overseen by the Department of Energy's Artificial Intelligence and Technology Office, given its existing mission to accelerate the demonstration of trustworthy AI.
3. It would compel the participating companies to collaborate on safety and alignment research, and require models that pose safety risks to be trained and extensively tested in secure facilities.
4. It would provide public testbeds for academic researchers and other external scientists to study the innards of large models like GPT-4, greatly building on existing initiatives like the National AI Research Resource and helping to grow the nascent field of AI interpretability.
5. And it would provide a cloud platform for training advanced AI models for within-government needs, ensuring the privacy of sensitive government data and serving as a hedge against runaway corporate power.
The alternative to a massive public effort like this, kicking the can on the AI problem, won't cut it.
The only other serious proposal right now is a pause on new AI development, and even many tech skeptics see that as unrealistic. It may even be counterproductive. Our understanding of how powerful AI systems could go rogue is immature at best, but stands to improve greatly through continued testing, especially of larger models. Air-gapped data centers will thus be essential for experimenting with AI failure modes in a secured setting. This includes pushing models to their limits to explore potentially dangerous emergent behaviors, such as deceptiveness or power-seeking.
The Manhattan Project analogy is not perfect, but it helps to draw a contrast with those who argue that AI safety requires pausing research into more powerful models altogether. The project didn't seek to decelerate the construction of atomic weaponry, but to master it.
Even if AGIs end up being farther off than most experts expect, a Manhattan Project for AI safety is unlikely to go to waste. Indeed, many less-than-existential AI risks are already upon us, crying out for aggressive research into mitigation and adaptation strategies. So what are we waiting for?