TESCREAL, pronounced "tess-cree-all," is a strange word that you may have seen pop up over the past few months. The renowned computer scientist Dr. Timnit Gebru frequently mentions the TESCREAL ideologies on social media, and for a while the Twitter profile of billionaire venture capitalist Marc Andreessen read: "cyberpunk activist; embracer of variance; TESCREAList." The Financial Times, Business Insider and VentureBeat have all used or investigated the word. And The Washington Spectator published an article by Dave Troy titled "Understanding TESCREAL: The Weird Ideologies Behind Silicon Valley's Rightward Turn."
My guess is that the acronym will gain more attention as the topic of artificial intelligence becomes more widely discussed, along with questions about the strange beliefs of its most powerful Silicon Valley proponents and critics. But what the heck does TESCREAL mean, and why does it matter?
I have thought a lot about these questions, as I coined the term in an as-yet unpublished academic paper, co-written with Gebru, tracing the influence of a small constellation of interrelated and overlapping ideologies within the contemporary field of AI. Those ideologies, we believe, are a central reason why companies like OpenAI, funded primarily by Microsoft, and its competitor, Google DeepMind, are trying to create artificial general intelligence in the first place.
The problem that Gebru and I encountered when writing our paper is that discussing the constellation of ideologies behind the current race to create AGI, and the dire warnings of human extinction that have emerged alongside it, can get messy real fast. The story of why AGI is the ultimate goal (with some seeing ChatGPT and GPT-4 as big steps in this direction) requires talking about a lot of long, polysyllabic words: transhumanism, Extropianism, singularitarianism, cosmism, Rationalism, Effective Altruism and longtermism. I have written about the last two of these in previous articles for Truthdig, which probed how they have become massively influential within Silicon Valley. But you don't have to look very hard to see their impact, which is pretty much everywhere. TESCREAL is one solution to the problem of talking about this cluster of ideologies without a cluttering repetition of almost-impossible-to-pronounce words. John Lennon captured the problem when he sang, "This-ism, that-ism, is-m, is-m, is-m."
To minimize the "is-m, is-m, is-m," I proposed the acronym TESCREAL, which combines the first letter of the ideologies listed above, in roughly the same order they appeared over the past three and a half decades. Gebru and I thus began to reference the "TESCREAL bundle" of ideologies to streamline our discussion, which gave rise to the terms "TESCREALism" (a reference to the bundle as a whole) and "TESCREAList" (someone who endorses most or all of this bundle). So, we traded a messy list of words for a single clunky term; not a perfect fix, but given the options, a solution we were happy with.
Little that's going on right now with AI makes sense outside the TESCREAL framework. The overlapping and interconnected ideologies that the TESCREAL acronym captures are integral to understanding why billions of dollars are being poured into the creation of increasingly powerful AI systems, and why organizations like the Future of Life Institute are frantically calling for all AI labs to "immediately pause for at least six months the training of AI systems more powerful than GPT-4." They also explain the recent emergence of "AI doomerism," led by the TESCREAList Eliezer Yudkowsky, who in a recent TIME op-ed endorsed the use of military strikes against data centers to delay the creation of AGI, even at the risk of triggering an all-out thermonuclear war.
At the heart of TESCREALism is a techno-utopian vision of the future. It anticipates a time when advanced technologies enable humanity to accomplish things like producing radical abundance, reengineering ourselves, becoming immortal, colonizing the universe and creating a sprawling post-human civilization among the stars full of trillions and trillions of people. The most straightforward way to realize this utopia is by building superintelligent AGI.
But then, as the AGI finish line got closer, some began to worry that the whole plan might backfire: AGI could actually turn on its creators, destroying humanity and, along with it, this utopian future. Rather than ushering in a paradise among the stars, an AGI built under anything remotely like the current circumstances would "kill literally everyone on Earth," to quote Yudkowsky. Others in the TESCREAL neighborhood, like Andreessen, disagree, arguing that the probability of doom is very low. In their view, the most likely outcome of advanced AI is that it will drastically increase economic productivity, give us the opportunity to profoundly augment human intelligence and take on new challenges that have been impossible to tackle without AI, from curing all diseases to achieving interstellar travel. "Developing AI is thus a moral obligation that we have to ourselves, to our children and to our future," writes Andreessen.
Consequently, a range of positions has emerged within the TESCREAL community, from "AI doomers" to "AI accelerationists," a term that Andreessen included next to "TESCREAList" in his Twitter profile. In between are various moderate positions that see the dangers as real but not insuperable, exemplified by the Future of Life Institute, which merely calls for a six-month pause on AGI research.
While it might appear that "doomers" and "accelerationists" have little in common, the backdrop to the entire debate is the TESCREAL worldview. It is the key to understanding these different schools of thought and the race to AGI that's catapulted them into the public consciousness. To be clear, Microsoft and Google are of course driven by the profit motive. They expect the AI systems being developed by OpenAI and DeepMind to significantly boost shareholder value. But the profit motive is only part of the picture. The other part, which is no less integral, is the TESCREAL bundle. This is why it's so important to understand what this bundle is, who embraces it and how it's driving the push to create AGI.
To see how the TESCREAL ideologies fit together, it's useful to examine each ideology separately.
The "T" stands for transhumanism. This is the backbone of the TESCREAL bundle. Indeed, the next three letters of the acronym (Extropianism, singularitarianism and cosmism) are just variations of transhumanism, but we'll get to them in a moment. The core vision of transhumanism is to technologically reengineer the human species to create a superior new race of "posthumans." These posthumans would be superior by virtue of possessing one or more "super-human" abilities: immortality, extremely high IQs, total control over their emotions, exceptional rationality and perhaps new sensory modalities like echolocation, used by bats to navigate the world. Some transhumanists have imagined enhancing our moral capacities by slipping morality-boosting chemicals into the public water supply, like we do with fluoride.
Essentially, a bunch of 20th-century atheists concluded that their lives lacked the meaning, purpose and hope provided by traditional religion. In response to this realization, they invented a new, secular religion, in which "heaven" is something we create ourselves, in this world. This new religion offered the promise of eternal life, just like Christianity, and has its own version of resurrection: those who don't become immortal can have their bodies cryopreserved by a company named Alcor, based in Arizona, so they can be revived when the technological know-how becomes available. Leading TESCREAList Nick Bostrom is an Alcor customer. Along the same lines, the CEO of OpenAI, Sam Altman, was one of 25 people who signed up with Nectome, a company that preserves people's brains so they can someday be uploaded to a computer, a process that, incidentally, requires euthanizing the customer.
As for God, if he doesn't exist, then why not just create him? This is what AGI is supposed to be: an all-knowing, all-powerful entity capable of solving all our problems and creating utopia. Indeed, the phrase "God-like AI" has become a popular way of referring to AGI over the past few months. Conversely, if the AGI we build turns on us, it will be a demon of our own creation. This is why Elon Musk, who co-founded OpenAI with Altman and others, warned that with artificial intelligence we are "summoning the demon."
Understanding transhumanism is important not just because of its role in TESCREALism, but because of its ubiquity in Silicon Valley. Tech titans are pouring huge sums of money into realizing the transhumanist project and see AGI as playing an integral part in catalyzing this process. Take Elon Musk's company Neuralink. Its mission is to merge your brain with AI and, in doing so, to jump-start the next stage of human evolution. This is transhumanism. Or consider that Altman, in addition to signing up with Nectome, secretly donated $180 million to a longevity start-up called Retro Biosciences, which aims to prolong human life by discovering how to rejuvenate our bodies. This, too, is transhumanism.
Moving on to the next three letters in the TESCREAL acronym: Extropianism, singularitarianism and cosmism. The first was the original name of the organized transhumanist movement in the late 1980s and early 1990s. It was on the Extropian mailing list that Bostrom sent his now-infamous racist email claiming that "Blacks are more stupid than whites." (After I discovered this email, he apologized for using the N-word but didn't walk back his claim about race and intelligence.) Singularitarianism is just the idea that the Singularity (the moment when the pace of technological development exceeds our comprehension, perhaps driven by an "intelligence explosion" of self-improving AI) will play an integral role in bringing about the techno-utopian future mentioned above, plus a state of radical, post-scarcity abundance. In one popular version, the Singularity enables our posthuman digital descendants to colonize and "wake up" the universe. "The 'dumb' matter and mechanisms of the universe will be transformed into exquisitely sublime forms of intelligence," writes TESCREAList Ray Kurzweil, a research scientist at Google who was personally hired by Larry Page, the company's co-founder and an adherent of a version of TESCREALism called "digital utopianism."
If transhumanism is "eugenics on steroids," cosmism is transhumanism on steroids. In "The Cosmist Manifesto," Ben Goertzel, the former Extropian who christened the now-common term "artificial general intelligence," writes that humans will "merge with technology," resulting in a new phase of the evolution of our species. Eventually, we will develop sentient AI and mind uploading technology that will "permit an indefinite lifespan to those who choose to leave biology behind." Many of these uploaded minds will choose to live in virtual worlds. The ultimate aim is to develop "spacetime engineering and scientific 'future magic' much beyond our current understanding and imagination," where such things will permit achieving, by scientific means, "most of the promises of religions, and many amazing things that no human religion ever dreamed."
This brings us to Rationalism and Effective Altruism. The first grew out of a website called LessWrong, which was founded in 2009 by Yudkowsky, Bostrom's colleague in the early Extropian movement. Because realizing the utopian visions above will require a lot of really smart people doing really smart things, we must optimize our "smartness." This is what Rationalism is all about: finding ways to enhance our rationality, which, somewhat humorously, has led some Rationalists to endorse patently ridiculous ideas. For example, Yudkowsky once claimed, based on supposedly rational arguments, that it would be better to let one person be horribly tortured for 50 years "without hope or rest" than to allow some very large number of people to experience the nearly imperceptible discomfort of having an eyelash in their eye. Just crunch the numbers and you'll see that this is true; sometimes you just need to "shut up and multiply," as Yudkowsky argues.
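To see how the numbers "crunch," here is a back-of-the-envelope version of that argument, with illustrative figures of my own rather than Yudkowsky's:

      badness of the torture: 50 years × 10^10 misery units per year = 5 × 10^11 units
      badness of one eyelash: 10^-6 units per person
      break-even point: 5 × 10^11 ÷ 10^-6 = 5 × 10^17 people

Once more than about 5 × 10^17 people suffer the eyelash, simple addition declares the torture the lesser evil; and since the number Yudkowsky actually had in mind was incomprehensibly larger, the conclusion follows automatically once you accept that suffering simply sums.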
While Rationalism was spawned by transhumanists, you could see EA as what happens when members of the Rationalist community pivot to questions of ethics. Whereas Rationalism aims to optimize our rationality, EA focuses on optimizing our morality, often using the same tools and methods, such as expected value theory. For example, should you go work for an environmental nonprofit, or get a job on Wall Street working for a proprietary trading firm like Jane Street Capital? EAs argue that if you crunch the numbers, you can do more overall good if you work for an "evil" organization like Jane Street and donate the extra income. In fact, this is exactly what a young EA named Sam Bankman-Fried did after a conversation with one of the co-founders of EA, William MacAskill. A few years later, Bankman-Fried came to believe he might be better positioned to "get filthy rich, for charity's sake," as one journalist put it, if he started his own cryptocurrency company, which he did, resulting in Alameda Research and FTX. Bankman-Fried now faces up to 155 years in prison for allegedly committing one of the biggest financial frauds in history.
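The earning-to-give arithmetic behind this advice is easy to reconstruct (the salary figures below are illustrative, not MacAskill's):

      Wall Street pay: $500,000 per year; personal spending: $100,000; donated: $400,000
      typical nonprofit salary: $100,000
      nonprofit workers funded by the donation: $400,000 ÷ $100,000 = 4

By EA's expected value accounting, the donating trader thus does roughly four times the good of someone who takes the nonprofit job directly, whatever one thinks of the employer.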
Like Rationalists, EAs are obsessed with intelligence, IQ and a particular interpretation of rationality. One former EA reported in Vox that EA leaders tested a ranking system of community members in which those with IQs less than 120 would get points subtracted from their scorecard. EAs would also get points added if they focused on longtermist issues like "AGI safety," whereas they'd lose points if they worked to reduce global poverty or mitigate climate change. This brings us to the final letter in the acronym:
Longtermism, which emerged out of the EA movement and is probably EA's most significant contribution to the TESCREAL bundle. If transhumanism is the backbone of TESCREALism, longtermism is the galaxy brain sitting atop it. What happened is that, in the early 2010s, a bunch of EAs realized that humanity can theoretically exist on Earth for another 1 billion years, and if we spread into space, we could persist for at least 10^40 years (that's a 1 followed by 40 zeros). More mind-blowing was the possibility of these future people living in vast computer simulations running on planet-sized computers spread throughout the accessible cosmos, an idea that Bostrom developed in 2003. The more people who exist in this Matrix-like future, the more happiness there could be; and the more happiness, the better the universe will become.
Hence, if your aim is to positively influence the greatest number of people possible, and if most people who could exist will live in the far future, then it's only rational to focus on them rather than on current-day people. According to Bostrom, the future could contain at least 10^58 digital people in virtual-reality worlds (a 1 followed by a mind-boggling 58 zeros). Compare that figure to the 1.3 billion people in multidimensional poverty today; the latter absolutely pales next to it. This is why longtermists concluded that improving these future people's lives, indeed, making sure that they exist in the first place, should be our top global priority. Furthermore, since creating a "safe" AGI would greatly increase the probability of these people existing, longtermists pioneered the field of "AI safety," which aims to ensure that whatever AGI we build ends up being a God rather than a demon.
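The arithmetic driving this conclusion is stark (my own division, spelled out):

      10^58 possible future digital people ÷ 1.3 × 10^9 people in poverty today ≈ 8 × 10^48

On a simple additive calculus, every person in poverty today is numerically outweighed by nearly 10^49 merely possible future people, which is why helping the former can look like a rounding error to a committed longtermist.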
Like transhumanism, Rationalism and EA, longtermism boasts a large following in Silicon Valley and among the tech elite. Last year, Elon Musk retweeted a paper by Bostrom, one of the founding documents of longtermism, with the line: "Likely the most important paper ever written." After MacAskill published a book on longtermism last summer, Musk described it as "a close match for my philosophy." Longtermism is the backdrop to Musk's claims that we have a duty to maintain the light of consciousness, to make sure it continues into the future, and that "what matters ... is maximizing cumulative civilizational net happiness over time." And although Altman has questioned the branding of longtermism, it's what he gestures at in saying that building a safe AGI is important because "galaxies are indeed at risk." As alluded to earlier, the founding of companies like OpenAI and DeepMind was partly the work of longtermists. An early investment in DeepMind, for example, was made by Jaan Tallinn, a prominent TESCREAList who also co-founded the Centre for the Study of Existential Risk at Cambridge and the Future of Life Institute, the latter largely funded by the crypto millionaire Vitalik Buterin, another TESCREAList. Five years after DeepMind was formed, Musk and Altman joined forces with other Silicon Valley elites, such as Peter Thiel, to start OpenAI.
Longtermism is also a major reason for the doomer freak-out over AGI being built in the near future, before we can figure out how to make it safe. According to the longtermist framework, the biggest tragedy of an AGI apocalypse wouldn't be the 8 billion deaths of people now living. This would be bad, for sure, but much worse would be the nonbirth of trillions and trillions of future people who would have otherwise existed. We should thus do everything we can to ensure that these future people exist, even at the cost of neglecting or harming current-day people, or so this line of reasoning straightforwardly implies. This is why Yudkowsky recently contended that risking an all-out thermonuclear war on Earth is worth it to avert an AGI apocalypse. The argument is that, while an AGI apocalypse would kill everyone on Earth, a thermonuclear war almost certainly wouldn't. At least with thermonuclear war, then, there'd still be a chance of eventually colonizing space and creating utopia, after civilization rebuilds. When asked how many people are allowed to die to prevent AGI, Yudkowsky replied:
      There should be enough survivors on Earth in close contact to form a viable reproductive population, with room to spare, and they should have a sustainable food supply. So long as that's true, there's still a chance of reaching the stars someday.
This is the dark side of TESCREALism. It's one reason I've argued that this bundle, especially the galaxy brain at the top, longtermism, could be profoundly dangerous. If the ends can sometimes justify the means, and if the end is a utopian paradise full of literally astronomical amounts of "value," then what exactly is off the table for protecting this future? The other dark side of TESCREALism is its "accelerationist" camp, which wants humanity to rush headlong into creating increasingly powerful technologies with little or no regulation. Doing so is bound to leave a trail of destruction in its wake.
What links these two extremes, along with the moderate positions in the middle, is a fundamentalist belief that advanced technologies are our ticket to a world in which, as Altman writes in an OpenAI blog post, humanity "flourishes to a degree that is probably impossible for any of us to fully visualize yet." TESCREALism is the worldview based on this grand vision, which grew from the overlapping movements and ideologies discussed above.
It may be somewhat obvious at this point why the TESCREAL ideologies comprise a "bundle." You can think of them as forming a single entity extended across time, from the late 1980s up to the present, with each new ideology arising from and being shaped by previous ones. Put differently, the emergence of these ideologies looks a lot like suburban sprawl, resulting in a cluster of municipalities without any clear borders between them: a conurbation of movements that share much the same ideological real estate. In many cases, the individuals who influenced the development of one also shaped many others, and the communities that coalesced around each letter in the acronym have always overlapped considerably. Let's define a TESCREAList as anyone who's linked to more than one of these ideologies. Examples include Bostrom, Yudkowsky, MacAskill, Musk, Altman and Bankman-Fried.
(The only ideology that's mostly defunct is Extropianism, which merged into subsequent ideologies while passing along its commitment to values like "perpetual progress," "self-transformation," "rational thinking" and "intelligent technology." The role of Extropianism in the formation of TESCREALism, and its continuing legacy, are why I include it in the acronym.)
There are many other features of TESCREALism that justify thinking of it as a single bundle. For example, it has direct links to eugenics, and eugenic tendencies have rippled through just about every ideology that comprises it. This should be unsurprising given that transhumanism, the backbone of TESCREALism, is itself a form of eugenics called "liberal eugenics." Early transhumanists included some of the leading eugenicists of the 20th century, most notably Julian Huxley, president of the British Eugenics Society from 1959 to 1962. I wrote about this at length in a previous Truthdig article, so I won't go into details here, but suffice it to say that the stench of eugenics is all over the TESCREAL community. Several leading TESCREALists, for instance, have explicitly worried about "less intelligent" people outbreeding their "more intelligent" peers. If "unintelligent" people have too many children, then the average intelligence level of humanity will decrease, thus jeopardizing the whole TESCREAL project. Bostrom lists this as a type of "existential risk," which essentially denotes any event that would prevent us from creating a posthuman utopia among the heavens full of astronomical numbers of "happy" digital people. In Bostrom's words:
      it is possible that advanced civilized society is dependent on there being a sufficiently large fraction of intellectually talented individuals. Currently it seems that there is a negative correlation in some places between intellectual achievement and fertility. If such selection were to operate over a long period of time, we might evolve into a less brainy but more fertile species, homo philoprogenitus ("lover of many offspring").
This leads to another characteristic of the TESCREAL community: many members see themselves as quite literally saving the world. Sometimes this is made explicit, as when MacAskill, a co-founder of EA, leading longtermist and advocate of transhumanism, writes that to save the world, don't get a job at a charity; go work on Wall Street. Luke Muehlhauser, a TESCREAList who used to work with Yudkowsky on AI safety issues, similarly declares:
      The world cannot be saved by caped crusaders with great strength and the power of flight. No, the world must be saved by mathematicians, computer scientists, and philosophers.
    By which he means, of course, TESCREALists.  
One of the central aims of the TESCREAL community is to mitigate "existential risk." By definition, this means increasing the probability of a utopian world of astronomical value someday existing in the future. Hence, to say "I'm working to mitigate existential risks" is another way of saying, "I'm trying to save the world": the world to come, utopia. As one scholar puts it, the stakes are so high that those involved in this effort will have earned their keep even if they reduce the probability of a catastrophe by a tiny fraction. Bostrom argues that if there's a mere 1% chance of 10^52 "digital lifetimes" existing in the future, then "the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives." In other words, if you mitigate existential risk by this minuscule amount, then you've done the moral equivalent of saving billions and billions of existing human lives.
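Unpacking that multiplication (my reconstruction of the arithmetic, not Bostrom's own working):

      expected future lives: 1% × 10^52 = 10^50
      risk reduction: one billionth × one billionth × one percentage point = 10^-9 × 10^-9 × 10^-2 = 10^-20
      expected lives saved: 10^-20 × 10^50 = 10^30

That product is ten billion times larger than even the "hundred billion times as much as a billion human lives" (10^20) of Bostrom's comparison, so the quoted claim is, if anything, an understatement by his own lights.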
This grandiose sense of self-importance is evident in the names of leading TESCREAL organizations, such as the Future of Humanity Institute at Oxford, founded by Bostrom in 2005, and the Future of Life Institute, which aims to "help ensure that the future of life exist[s] and [is] as awesome as possible" and originally included Bostrom on its scientific advisory board.
The belief that TESCREALists trying to mitigate existential risks are doing something uniquely important, saving the world, is also apparent in an attitude that many express toward nonexistential threats to humanity. Social justice provides a good example. After Bostrom's racist email came to light earlier this year, he released a sloppily written apology and griped on his personal website about a swarm of bloodthirsty mosquitoes distracting him from what's important, a clear reference to the social justice activists who were upset with his behavior. When one believes they are literally saving the world, everything else looks trivial by comparison.
This is yet another reason I believe the TESCREAL bundle poses a serious threat to people in the present, especially those who aren't as privileged and well-off as many leading TESCREALists. As Bostrom once quipped, catastrophes that don't threaten our posthuman future among the heavens are nothing more than "mere ripples on the surface of the great sea of life." Why? Because they won't significantly affect the total amount of human suffering or happiness or determine the long-term fate of our species. Giant massacres for man are but "small missteps" for mankind.
Perhaps the most important feature of the TESCREAL bundle as a whole is its enormous influence. Together, these ideologies have given rise to a normative worldview, essentially a religion for atheists, built around a deeply impoverished utopianism crafted almost entirely by affluent white men at elite universities and in Silicon Valley, who now want to impose this vision on the rest of humanity. And they're succeeding. This is why TESCREALism needs to be named, called out and criticized. Although not every TESCREAList holds the most radical views found in the community, the most radical views are often championed by the most influential figures. Bostrom, Musk, Yudkowsky and MacAskill have all made claims that would give most people shivers. MacAskill, for example, argues in his 2022 book "What We Owe the Future" that, in order to keep the engines of economic growth roaring, we should consider replacing human workers with digital workers. "We might develop artificial general intelligence (AGI)," he writes, "that could replace human workers, including researchers. This would allow us to increase the number of people working on R&D as easily as we currently scale up production of the latest iPhone."
Does this sound like utopia or a dystopia? Later, he declares that our obliteration of the natural world might be a good thing, since there's a lot of wild-animal suffering in nature, and hence the fewer wild animals there are, the less wild-animal suffering. On this view, nature itself might not have a place in the TESCREAList future.
If TESCREALism were not an ascendant ideology within some of the most powerful sectors of society, we might chuckle at all of this, or just roll our eyes. But the frightening fact is that the TESCREAL bundle is already shaping our world, and the world of our children, in profound ways. Right now, the media, the public, policymakers and our political leaders know little about these ideologies. As someone who participated in the TESCREAL movement over the past decade, but who now views it as a destructive and dangerous force in the world, I feel a moral obligation to educate people about what's going on. Although the term TESCREAL is strange and clunky, it holds the keys to making sense of the "accelerationist" push to develop AGI, as well as the "doomer" backlash against recent advancements, driven by fears that AGI, if created soon, might annihilate humanity rather than usher in a utopian paradise.
If we are to have any hope of counteracting this behemoth, it is critical that we understand what TESCREAL means and how these ideologies have infiltrated the highest echelons of power. To date, the TESCREAL movement has been subject to precious little critical inquiry, let alone resistance. It's time for that to change.