AI, Moloch, and the race to the bottom – IAI
Moloch is an evil, child-sacrificing god from the Hebrew Bible; its name is now used to describe a pervasive dynamic between competing groups or individuals, in which a locally optimal strategy leads to negative effects on a wider scale. The addictive nature of social media, the mass of nuclear weapons on the planet, and the race towards dangerous AI all have Molochian dynamics to blame. Ken Mogi offers us hope of a way out.
With the rapid advancement of artificial intelligence systems, concerns are rising about the future welfare of humans. There is an urgent question of whether AI will make us well-off, equal, and empowered. As AI is deeply transformative, we need to watch carefully where we are heading, lest we drive head-on into a wall or off a cliff at full speed.
One of the best scenarios for human civilization would be a world in which most of the work is done by AI, with humans comfortably enjoying a permanent vacation under the blessing of a basic income generated by machines. One possible nightmare, on the other hand, would be the annihilation of the whole human species by malfunctioning AI, whether through widespread social unrest induced by AI-generated misinformation and gaslighting or through massacre by runaway killer robots.
The human brain works best when the dominant emotion is optimism. Creative people are typically hopeful. Wolfgang Amadeus Mozart famously composed an upbeat masterpiece shortly after his mother's death, while the two were staying in Paris. With AI, therefore, the default option might be optimism. However, we cannot afford to preach a simplistic mantra of optimism, especially when the hard facts go against such a naive assumption. Indeed, the effects of AI on human lives are a subject requiring careful analysis, not something to be judged outright as either black or white. The most likely outcome would lie somewhere in the fifty shades of grey of what AI could do to humans from here.
___
Moloch has come to signify a condition in which we humans are coerced to make futile efforts and compete with each other in such ways that we are eventually driven to our demise.
___
The idea that newly emerging technologies will make us more enlightened and better off is sometimes called the Californian Ideology. Companies such as Google, Facebook, Apple, and Microsoft are often perceived to be proponents of this worldview. Now that AI research companies such as DeepMind and OpenAI are jumping on the bandwagon, it is high time we took the possible effects of artificial intelligence on humans seriously.
One of the critical, and perhaps surprisingly true-to-life, concepts concerning the dark side of AI is Moloch. Historically the name of a deity demanding unreasonable sacrifice for often irritatingly trivial purposes, Moloch has come to signify a condition in which we humans are coerced to make futile efforts and compete with each other in such ways that we are eventually driven to our demise. In the near future, AI might induce us into a race to the bottom without our realizing the terrible situation we are in.
In the more technical context of AI research, Moloch is an umbrella term acknowledging the difficulty of aligning artificial intelligence systems in such a way as to promote human welfare. Max Tegmark, an MIT physicist who has been vocal in warning of the dangers of AI, often cites Moloch when discussing the negative effects AI could bring upon humanity. As AI researcher Eliezer Yudkowsky asserts, safely aligning a powerful AGI (artificial general intelligence) is difficult.
It is not hard to see why we might need to beware of Moloch as AI systems increasingly influence our everyday lives. Some argue that social media was our first serious encounter with AI, as algorithms came to dominate our experience on platforms such as Twitter, YouTube, Facebook, and TikTok. Depending on our past browsing records, the algorithms (which are forms of AI) determine what we view on our computer or smartphone. As users, we often find it difficult to break free from this algorithm-induced echo chamber.
Those competing in the attention economy try to optimize their posts to be favored by the algorithm. The result is often literally a race to the bottom in terms of the quality of content and user experience. We hear horror stories of teenagers resorting to ever more extreme and possibly self-harming forms of expression on social media. The tyranny of the algorithm is a tool used by Moloch in today's world. Even if there are occasional silver linings, such as genuinely great content emerging from competition on social media, the cloud of this dehumanizing attention-grabbing race is too dire to be ignored, especially for the young and immature.
___
The tyranny of the algorithm is a tool used by Moloch in today's world.
___
The ultimate form of Moloch would be the so-called existential risk. Elon Musk once famously tweeted that AI was "potentially more dangerous than nukes." The comparison with nuclear weapons might actually help us understand why and how AI could entangle us in a race to the bottom, where Moloch awaits to devour and destroy humanity.
Nuclear weapons are terrible. They bring death and destruction literally at the push of a button. Some argue, paradoxically, that nuclear weapons have helped humanity maintain peace since the Second World War. Indeed, this interpretation happens to be the standard credo in international politics today. Mutually Assured Destruction is the game-theoretic analysis of how the presence of nukes might help keep the peace. If you attack me, I will attack you back, and both of us will be destroyed. So do not attack. This is the simple logic of peace by nukes. It could, however, be a self-introduced Trojan Horse that eventually brings about the end of the human race. Indeed, the acronym MAD is fitting for this particular instance of game theory. We are literally mad to assume that the presence of nukes will assure the sustainability of peace. Things could go terribly wrong, especially when artificial intelligence is introduced into the attack and defense processes.
In game theory, people's behaviors are assessed by an evaluation function, a hypothetical scoring scheme describing how good a particular situation is as the result of the choices one makes. A Nash equilibrium describes a state in which no player can do better, in terms of the evaluation function, by changing strategy from the status quo, provided that the other players do not alter theirs. Originally proposed by the American mathematician John Nash, a Nash equilibrium does not necessarily mean that the present state is globally optimal. It could actually be a miserable trap. The human species would be better off if nuclear weapons were abolished, but it is difficult to achieve universal nuclear disarmament simultaneously. From a game-theoretic point of view, it does not make sense for a country like the U.K. to abandon its nuclear arsenal while other nations keep weapons of mass destruction.
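To make the trap concrete, here is a minimal sketch in Python with hypothetical payoff numbers (nothing here comes from the article or from any real strategic analysis): whatever the other side does, arming is the better reply, so mutual armament is the Nash equilibrium even though mutual disarmament would leave both sides better off.

```python
# A minimal sketch of the arms-race trap, with made-up payoff numbers.
# Each country chooses "disarm" or "arm"; payoffs are (country A, country B).
# Mutual disarmament is best for both, yet "arm" is each side's best reply,
# so (arm, arm) is the Nash equilibrium -- the miserable trap described above.

payoffs = {
    ("disarm", "disarm"): (3, 3),   # global optimum: both better off
    ("disarm", "arm"):    (0, 4),   # the disarmed side is at the other's mercy
    ("arm",    "disarm"): (4, 0),
    ("arm",    "arm"):    (1, 1),   # Nash equilibrium: costly, tense, stable
}

def best_reply(opponent_choice, player_index):
    """Return the strategy that maximizes this player's payoff
    against a fixed choice by the opponent."""
    def my_payoff(choice):
        profile = (choice, opponent_choice) if player_index == 0 else (opponent_choice, choice)
        return payoffs[profile][player_index]
    return max(["disarm", "arm"], key=my_payoff)

# Whatever the other side does, arming is the best reply for each player...
for other in ["disarm", "arm"]:
    print(f"If the other side plays {other!r}: best reply is {best_reply(other, 0)!r}")
# ...so both sides arm and end up with (1, 1) instead of the attainable (3, 3).
```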
Moloch caused by AI is like MAD in the nuclear arms race, in that a particular evaluation function unreasonably dominates. In the attention economy craze on social media, everyone would be better off if people just stopped optimizing for the algorithms. However, if you quit, someone else would simply occupy your niche and take away the revenue. So you keep doing it, remaining a hopeful monster waiting to someday become the next MrBeast. Thus, Moloch reigns through people's spontaneous submission to the Nash equilibrium, dictated by an evaluation function.
So how do we escape from the dystopia of Moloch? Is the jailbreak even possible?
Goodhart's law is a piece of wisdom we may adapt to escape the pitfall of Moloch. The adage, often stated as "when a measure becomes a target, it ceases to be a good measure", is attributed to Charles Goodhart, a British economist. Originally a sophisticated argument about how to conduct monetary policy, Goodhart's law resonates with a wide range of aspects of our daily lives. Simply put, following an evaluation function can sometimes be bad.
For example, it would be great to have a lot of money as a result of satisfying and rewarding life habits, but it would be a terrible mistake to try to make as much money as possible no matter what. Excellent academic performance as a fruit of curiosity-driven investigation is great; aiming at high grades at school for their own sake could stifle a child. It is one of life's ultimate blessings to fall in love with someone special; it would be stupid to count how many lovers you have had. That is why the Catalogue Aria sung by Don Giovanni's servant Leporello is at best a superficial caricature of what human life is all about, although, coming from the genius of Mozart, it is musically profoundly beautiful.
AI in general learns by optimizing some assigned evaluation function towards a goal. As a consequence, AI is most useful when the set goal makes sense. Moloch happens when the goal is ill-posed or too rigid.
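To illustrate how an ill-posed goal invites Moloch, here is a toy sketch, again in Python, with an entirely hypothetical "sensationalism" knob and made-up numbers: an optimizer that sees only a proxy evaluation function keeps pushing it up, even as the quantity we actually care about collapses.

```python
# A toy illustration of Goodhart's law (hypothetical numbers, not from the article):
# optimizing a proxy evaluation function eventually hurts what we truly value.
import numpy as np

# "sensationalism" is the knob the optimizer controls, from 0 (none) to 1 (maximum).
sensationalism = np.linspace(0.0, 1.0, 101)

# Proxy metric the algorithm sees: engagement keeps rising with sensationalism.
engagement = 1.0 + 4.0 * sensationalism

# What we actually care about: it peaks at moderate sensationalism,
# then collapses as content races to the bottom.
true_value = engagement - 6.0 * sensationalism**2

best_for_proxy = sensationalism[np.argmax(engagement)]   # 1.00: maximum sensationalism
best_for_value = sensationalism[np.argmax(true_value)]   # ~0.33: a moderate level

print(f"Optimizing the proxy picks sensationalism = {best_for_proxy:.2f}")
print(f"Optimizing what we value picks            = {best_for_value:.2f}")
print(f"True value at the proxy's optimum: {true_value[np.argmax(engagement)]:.2f}")
print(f"True value at its own optimum:     {true_value.max():.2f}")
```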
___
The terrible truth about Moloch is that it is mediocre, never original, reflecting its origin in statistically optimized evaluation functions.
___
Economist John Maynard Keynes once said that the economy is driven by animal spirits. The wonderful insight, then, is that as animals we can always opt out of the optimization game. To escape the pitfall of Moloch, we need to become black belts in applied Goodhart's law. When a measure becomes a target, it ceases to be a good measure. We can always update the evaluation function, or use a portfolio of different value systems simultaneously. A dystopia like the one depicted in George Orwell's Nineteen Eighty-Four is the result of taking a particular evaluation function too seriously. When a measure becomes a target, a dystopia follows, and Moloch reigns. All work and no play makes Jack a dull boy. Trying to satisfy the dictates of the status quo only leads to uninteresting results. We don't have to pull the plug on AI. We can just ignore it. AI does not have this insight, but we humans do. At least some of us.
Being aware of Goodhart's law, we would be well advised to keep an educated distance from the suffocating workings of the evaluation functions in AI. The human brain allocates resources through the attentional system in the prefrontal cortex. If your attention is too narrowly focused on a particular evaluation function, your life becomes rigid and narrow, encouraging Moloch. You should make more flexible and balanced use of attention, directing it to the things that really matter to you.
When watching YouTube or TikTok, rather than viewing the videos and clips suggested by the algorithm and falling victim to the attention economy, you may opt to do an inner search. What are the things that come to mind when you look back on your childhood, for example? Are there things from recent experiences in your life that tickle your interest? If there are, search for them on social media. You cannot entirely beat the algorithms, as the search results are still shaped by them, but you will have initiated a new path of investigation from your own inner insights. Practicing mindfulness and making flexible use of attention on your own interests and wants would be the best medicine against the symptoms of Moloch, because it makes your life's effective evaluation functions broader and more flexible. By making clever use of your attention, you can improve your own life and turn the attention economy for the better, even if only by a small step.
Flexible and balanced attention control would lead to more unique creativity, which will be highly valuable in an era marked by tsunamis of AI-generated content. It is great to use ChatGPT, as long as you remember it is only a tool. Students might get by mastering prompt engineering to write academic essays. However, sentences generated by AI tend to be bland, even if good enough to earn grades. Alternatively, you can write prose entirely on your own, as I have been doing with this discussion of Moloch. What you write becomes interesting only when you sometimes surprise the reader with twists and turns away from the norm, a quality currently lacking in generative AI.
The terrible truth about Moloch is that it is mediocre, never original, reflecting its origin in statistically optimized evaluation functions. Despite the advent of AI, the problem remains human, all too human. Algorithms do not have direct access to the inner workings of our brains. Attention is the only outlet of the brain's computations. To pull this off, we need to focus on the best in us, paying attention to nice things. If we learn to appreciate the truly beautiful, and to distinguish genuine desires from superficial ones induced by social media, the spectre of Moloch will recede to our peripheral vision.
The wisdom is to keep being human, by making flexible, broad, and focused use of the brain's attentional network. In choosing our focuses of attention, we are exercising our free will, in defiance of Moloch. Indeed, the new era of artificial intelligence could yet prove to be a new renaissance, with a full-blown blossoming of human potential, if only we knew what to attend to. As the 2017 paper by Google researchers that initiated the transformer revolution, eventually leading to ChatGPT, was famously titled: attention is all you need.