Moloch is an evil, child-sacrificing god from the Hebrew Bible; its name is now used to describe a pervasive dynamic between competing groups or individuals. Moloch describes what happens when a locally optimal strategy leads to negative effects on a wider scale. The addictive nature of social media, the mass of nuclear weapons on the planet, and the race towards dangerous AI all have Molochian dynamics to blame. Ken Mogi offers us hope of a way out.
    With the rapid advancement of artificial intelligence systems, concerns are rising about the future welfare of humans. There is an urgent question of whether AI will make us well-off, equal, and empowered. As AI is deeply transformative, we need to watch carefully where we are heading, lest we drive head-on into a wall or off a cliff at full speed.
    One of the best scenarios for human civilization would be a world in which most of the work is done by AI, with humans comfortably enjoying a permanent vacation under the blessing of a basic income generated by machines. One possible nightmare, on the other hand, would be the annihilation of the whole human species by malfunctioning AI, whether through widespread social unrest induced by AI-generated misinformation and gaslighting or through massacre by runaway killer robots.
    The human brain works best when the dominant emotion is optimism. Creative people are typically hopeful. Wolfgang Amadeus Mozart famously composed an upbeat masterpiece shortly after his mother's death while they were staying in Paris. With AI, therefore, the default option might be optimism. However, we cannot afford to preach a simplistic mantra of optimism, especially when the hard facts go against such a naive assumption. Indeed, the effects of AI on human lives are a subject requiring careful analysis, not something to be judged outright as either black or white. The most likely outcome would lie somewhere in the fifty shades of grey of what AI could do to humans from here.
    The idea that newly emerging technologies will make us more enlightened and better off is sometimes called the Californian Ideology. Companies such as Google, Facebook, Apple, and Microsoft are often perceived to be proponents of this worldview. Now that AI research companies such as DeepMind and OpenAI are joining the bandwagon, it is high time we estimated the possible effects of artificial intelligence on humans rather seriously.
    One of the critical, and perhaps surprisingly true-to-life, concepts concerning the dark side of AI is Moloch. Historically the name of a deity demanding unreasonable sacrifice for often irritatingly trivial purposes, Moloch has come to signify a condition in which we humans are coerced to make futile efforts and compete with each other in such ways that we are eventually driven to our demise. In the near future, we might be induced by AI into a race to the bottom without even realizing our terrible situation.
    In the more technical context of AI research, Moloch is an umbrella term acknowledging the difficulty of aligning artificial intelligence systems in such a way as to promote human welfare. Max Tegmark, an MIT physicist who has been vocal in warning of the dangers of AI, often cites Moloch when discussing the negative effects AI could bring upon humanity. As AI researcher Eliezer Yudkowsky asserts, safely aligning a powerful AGI (artificial general intelligence) is difficult.
    It is not hard to see why we might have to beware of Moloch as AI systems increasingly influence our everyday lives. Some argue that social media was our first serious encounter with AI, as algorithms came to dominate our experience on platforms such as Twitter, YouTube, Facebook, and TikTok. Depending on our past browsing records, the algorithms (which are forms of AI) determine what we view on our computer or smartphone. As a user, it is often difficult to break free from this algorithm-induced echo chamber.
    Those competing in the attention economy try to optimize their posts to be favored by the algorithm. The result is often literally a race to the bottom, in terms of quality of content and user experience. We hear horror stories of teenagers resorting to ever more extreme and possibly self-harming forms of expression on social media. The tyranny of the algorithm is a toolbox used by Moloch in today's world. Even if there are occasional silver linings, such as genuinely great content emerging from competition on social media, the cloud of the dehumanizing attention-grabbing race is too dire to be ignored, especially for the young and immature.
    The ultimate form of Moloch would be the so-called existential risk. Elon Musk once famously tweeted that AI was "potentially more dangerous than nukes." The comparison with nuclear weapons might actually help us understand why and how AI could entangle us in a race to the bottom, where Moloch would await to devour and destroy humanity.
    Nuclear weapons are terrible. They bring death and destruction literally at the push of a button. Some argue, paradoxically, that nuclear weapons have helped humans maintain peace since the Second World War. Indeed, this interpretation happens to be the standard credo in international politics today. Mutually Assured Destruction is the game-theoretic analysis of how the presence of nukes might help keep the peace. If you attack me, I will attack you back, and both of us will be destroyed. So do not attack. This is the simple logic of peace by nukes. It could be, however, a self-introduced Trojan Horse which would eventually bring about the end of the human race. Indeed, the acronym MAD is fitting for this particular instance of game theory. We are literally mad to assume that the presence of nukes would assure the sustainability of peace. Things could go terribly wrong, especially when artificial intelligence is introduced into the attack and defense processes.
    In game theory, people's behaviors are assessed by an evaluation function, a hypothetical scoring scheme describing how good a particular situation is as the result of the choices one makes. A Nash equilibrium describes a state in which each player would be worse off, in terms of the evaluation function, by changing strategy from the status quo, provided that the other players do not alter theirs. Originally proposed by the American mathematician John Nash, a Nash equilibrium does not necessarily mean that the present state is globally optimal. It could actually be a miserable trap. The human species would be better off if nuclear weapons were abolished, but it is difficult to achieve universal nuclear disarmament simultaneously. From a game-theoretic point of view, it does not make sense for a country like the U.K. to abandon its nuclear arsenal while other nations keep weapons of mass destruction.
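The arms-race trap described above can be sketched as a tiny two-player game in Python. The payoff numbers are illustrative assumptions, not from the text; they form a prisoner's-dilemma structure in which "both keep nukes" is the only Nash equilibrium, even though "both disarm" pays everyone more.

```python
# Hypothetical two-player "arms race" payoff matrix.
# Strategies: 0 = disarm, 1 = keep nukes. Payoffs (player1, player2)
# are illustrative numbers chosen to give a prisoner's dilemma.
PAYOFF = {
    (0, 0): (3, 3),  # both disarm: best collective outcome
    (0, 1): (0, 4),  # I disarm, you keep nukes: I am exposed
    (1, 0): (4, 0),
    (1, 1): (1, 1),  # both keep nukes: the MAD status quo
}

def is_nash(s1, s2):
    """A profile is a Nash equilibrium if neither player can gain
    by deviating unilaterally while the other stands pat."""
    for d in (0, 1):
        if PAYOFF[(d, s2)][0] > PAYOFF[(s1, s2)][0]:
            return False  # player 1 would rather switch to d
        if PAYOFF[(s1, d)][1] > PAYOFF[(s1, s2)][1]:
            return False  # player 2 would rather switch to d
    return True

equilibria = [profile for profile in PAYOFF if is_nash(*profile)]
print(equilibria)  # [(1, 1)] -- only "both keep nukes" is stable
```

Brute-force deviation checking like this finds that the mutually disarmed state (3, 3) is not stable, because each side gains by rearming unilaterally, while the miserable (1, 1) trap is: exactly the point about Nash equilibria not being global optima.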
    Moloch caused by AI is like MAD in the nuclear arms race, in that a particular evaluation function unreasonably dominates. In the attention economy craze on social media, everyone would be better off if people just stopped optimizing for the algorithms. However, if you quit, someone else would simply occupy your niche, taking away the revenue. Therefore you keep doing it, remaining a hopeful monster who might someday become a MrBeast. Thus Moloch reigns through people's spontaneous submission to the Nash equilibrium, dictated by an evaluation function.
    So how do we escape from the dystopia of Moloch? Is the jailbreak even possible?
    Goodhart's law is a piece of wisdom we may adapt to escape the pitfall of Moloch. The adage, often stated as "when a measure becomes a target, it ceases to be a good measure," is due to Charles Goodhart, a British economist. Originally a sophisticated argument about how to handle monetary policy, Goodhart's law resonates with a wide range of aspects of our daily lives. Simply put, following an evaluation function can sometimes be bad.
    For example, it would be great to have a lot of money as a result of satisfying and rewarding life habits, but it would be a terrible mistake to try to make as much money as possible no matter what. Excellent academic performance as the fruit of curiosity-driven investigation is great: aiming at high grades at school for their own sake could stifle a child. It is one of life's ultimate blessings to fall in love with someone special: it would be stupid to count how many lovers you have had. That is why the Catalogue Aria sung by Don Giovanni's servant Leporello is at best a superficial caricature of what human life is all about, although musically profoundly beautiful, coming from the genius of Mozart.
    AI in general learns by optimizing some assigned evaluation function towards a goal. As a consequence, AI is most useful when the set goal makes sense. Moloch happens when the goal is ill-posed or too rigid.
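A toy simulation, with entirely made-up numbers, can illustrate how Goodhart's law bites such evaluation functions: when a noisy proxy metric becomes the target of hard selection, the items that win are precisely those whose proxy score most overstates their true value.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical model: each "post" has a true value, but the ranking
# algorithm only sees a proxy (e.g. clicks) = true value + heavy noise.
posts = []
for _ in range(1000):
    true_value = random.gauss(0, 1)
    proxy = true_value + random.gauss(0, 2)  # noise dominates signal
    posts.append((proxy, true_value))

# The algorithm promotes the top 1% of posts by proxy score...
top_by_proxy = sorted(posts, reverse=True)[:10]
avg_proxy = sum(p for p, _ in top_by_proxy) / len(top_by_proxy)
avg_true = sum(v for _, v in top_by_proxy) / len(top_by_proxy)

# ...and ends up with posts whose proxy scores far exceed their true
# value: selecting hard on the measure mostly selects the noise.
print(f"avg proxy of promoted posts: {avg_proxy:.2f}")
print(f"avg true value of promoted posts: {avg_true:.2f}")
```

The gap between the two averages is the Goodhart effect: the proxy was a decent measure over the whole population, but once it became the target of extreme optimization, it stopped measuring what we cared about.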
    Economist John Maynard Keynes once said the economy is driven by animal spirits. The wonderful insight, then, is that as animals, we can always opt out of the optimization game. In order to escape the pitfall of Moloch, we need to become black belts in applied Goodhart's law. When a measure becomes a target, it ceases to be a good measure. We can always update the evaluation function, or use a portfolio of different value systems simultaneously. A dystopia like the one depicted in George Orwell's Nineteen Eighty-Four is the result of taking a particular evaluation function too seriously. When a measure becomes a target, there would be a dystopia, and Moloch would reign. All work and no play makes Jack a dull boy. Trying to satisfy the dictates of the status quo only leads to uninteresting results. We don't have to pull the plug on AI. We can just ignore it. AI does not have this insight, but we humans do. At least some of us.
    Being aware of Goodhart's law, we would be well advised to keep an educated distance from the suffocating workings of the evaluation functions in AI. The human brain allocates resources through the attentional system in the prefrontal cortex. If your attention is too focused on a particular evaluation function, your life becomes rigid and narrow, encouraging Moloch. You should make more flexible and balanced use of attention, directing it to the things that really matter to you.
    When watching YouTube or TikTok, rather than viewing videos and clips suggested by the algorithm and falling victim to the attention economy, you may opt to do an inner search. What are the things that come to mind when you look back on your childhood, for example? Are there things that tickle your interest from recent experiences in your life? If there are, search for them on social media. You cannot entirely beat the algorithms, as the search results are shaped by them, but you will have initiated a new path of investigation from your inner insights. Practicing mindfulness and making flexible use of attention to your own interests and wants would be the best medicine against the symptoms of Moloch, because it makes your life's effective evaluation functions broader and more flexible. By making clever use of your attention, you can improve your own life, and make the attention economy turn for the better, even if by a small step.
    Flexible and balanced attention control would lead to more unique creativity, which will be highly valuable in an era marked by tsunamis of AI-generated content. It is great to use ChatGPT, as long as you remember it is only a tool. Students might get along well by mastering prompt engineering to write academic essays. However, sentences generated by AI tend to be bland, even if good enough to earn grades. Alternatively, you can write prose entirely on your own, as I have been doing with this discussion of Moloch. What you write becomes interesting only when you sometimes surprise the reader with twists and turns away from the norm, a quality currently lacking in generative AI.
    The terrible truth about Moloch is that it is mediocre, never original, reflecting its origin in statistically optimized evaluation functions. Despite the advent of AI, the problem remains human, all too human. Algorithms do not have direct access to the inner workings of our brains. Attention is the only outlet of the brain's computations. In order to pull this through, we need to be focused on the best in us, paying attention to nice things. If we learn to appreciate the truly beautiful, and to distinguish genuine desires from superficial ones induced by social media, the spectre of Moloch will recede to our peripheral vision.
    The wisdom is to keep being human, by making flexible, broad, and focused uses of the brain's attentional network. In choosing our focuses of attention, we are exercising our free will, in defiance of Moloch. Indeed, the new era of artificial intelligence could yet prove to be a new renaissance, with a full-blown blossoming of human potential, if only we knew what to attend to. As the 2017 paper by Google researchers that initiated the transformer revolution eventually leading to ChatGPT was famously titled, attention is all you need.
Originally published as "AI, Moloch, and the race to the bottom" at IAI.