Archive for the ‘Artificial General Intelligence’ Category

The Acronym Behind Our Wildest AI Dreams and Nightmares – Truthdig

TESCREAL, pronounced "tess-cree-all." It's a strange word that you may have seen pop up over the past few months. The renowned computer scientist Dr. Timnit Gebru frequently mentions the "TESCREAL ideologies" on social media, and for a while the Twitter profile of billionaire venture capitalist Marc Andreessen read: "cyberpunk activist; embracer of variance; TESCREAList." The Financial Times, Business Insider and VentureBeat have all used or investigated the word. And The Washington Spectator published an article by Dave Troy titled, "Understanding TESCREAL: The Weird Ideologies Behind Silicon Valley's Rightward Turn."

My guess is that the acronym will gain more attention as the topic of artificial intelligence becomes more widely discussed, along with questions about the strange beliefs of its most powerful Silicon Valley proponents and critics. But what the heck does TESCREAL mean and why does it matter?

I have thought a lot about these questions, as I coined the term in an as-yet unpublished academic paper, co-written with Gebru, tracing the influence of a small constellation of interrelated and overlapping ideologies within the contemporary field of AI. Those ideologies, we believe, are a central reason why companies like OpenAI, funded primarily by Microsoft, and its competitor, Google DeepMind, are trying to create artificial general intelligence in the first place.

The problem that Gebru and I encountered when writing our paper is that discussing the constellation of ideologies behind the current race to create AGI, and the dire warnings of human extinction that have emerged alongside it, can get messy real fast. The story of why AGI is the ultimate goal (with some seeing ChatGPT and GPT-4 as big steps in this direction) requires talking about a lot of long, polysyllabic words: transhumanism, Extropianism, singularitarianism, cosmism, Rationalism, Effective Altruism and longtermism. I have written about the last two of these in previous articles for Truthdig, which probed how they have become massively influential within Silicon Valley. But you don't have to look very hard to see their impact, which is pretty much everywhere. TESCREAL is one solution to the problem of talking about this cluster of ideologies without a cluttering repetition of almost-impossible-to-pronounce words. John Lennon captured the problem when he sang, "This-ism, that-ism, is-m, is-m, is-m."

To minimize the is-m, is-m, is-m, I proposed the acronym TESCREAL, which combines the first letters of the ideologies listed above, in roughly the same order they appeared over the past three and a half decades. Gebru and I thus began to reference the "TESCREAL bundle" of ideologies to streamline our discussion, which gave rise to the terms "TESCREALism" (a reference to the bundle as a whole) and "TESCREAList" (someone who endorses most or all of this bundle). So, we traded a messy list of words for a single clunky term; not a perfect fix, but given the options, a solution we were happy with.

Little that's going on right now with AI makes sense outside the TESCREAL framework. The overlapping and interconnected ideologies that the TESCREAL acronym captures are integral to understanding why billions of dollars are being poured into the creation of increasingly powerful AI systems, and why organizations like the Future of Life Institute are frantically calling for all AI labs to "immediately pause for at least six months the training of AI systems more powerful than GPT-4." They also explain the recent emergence of AI doomerism, led by the TESCREAList Eliezer Yudkowsky, who in a recent TIME op-ed endorsed the use of military strikes against data centers to delay the creation of AGI, including at the risk of triggering an all-out thermonuclear war.

At the heart of TESCREALism is a techno-utopian vision of the future. It anticipates a time when advanced technologies enable humanity to accomplish things like producing radical abundance, reengineering ourselves, becoming immortal, colonizing the universe and creating a sprawling post-human civilization among the stars full of trillions and trillions of people. The most straightforward way to realize this utopia is by building superintelligent AGI.

But then, as the AGI finish line got closer, some began to worry that the whole plan might backfire: AGI could actually turn on its creators, destroying humanity and, along with it, this utopian future. Rather than ushering in a paradise among the stars, an AGI built under anything remotely like the current circumstances would kill "literally everyone on Earth," to quote Yudkowsky. Others in the TESCREAL neighborhood, like Andreessen, disagree, arguing that the probability of doom is very low. In their view, the most likely outcome of advanced AI is that it will drastically increase economic productivity, give us the opportunity to profoundly augment human intelligence and take on new challenges that have been impossible to tackle without AI, from curing all diseases to achieving interstellar travel. Developing AI is thus "a moral obligation that we have to ourselves, to our children and to our future," writes Andreessen.

Consequently, a range of positions has emerged within the TESCREAL community, from AI doomers to AI accelerationists, a term that Andreessen included next to TESCREAList in his Twitter profile. In between are various moderate positions that see the dangers as real but not insuperable, a position exemplified by the Future of Life Institute, which merely calls for a six-month pause on AGI research.

While it might appear that doomers and accelerationists have little in common, the backdrop to the entire debate is the TESCREAL worldview. It is the key to understanding these different schools of thought and the race to AGI that's catapulted them into the public consciousness. To be clear, Microsoft and Google are of course driven by the profit motive. They expect the AI systems being developed by OpenAI and DeepMind to significantly boost shareholder value. But the profit motive is only part of the picture. The other part, which is no less integral, is the TESCREAL bundle. This is why it's so important to understand what this bundle is, who embraces it and how it's driving the push to create AGI.

To see how the TESCREAL ideologies fit together, it's useful to examine each ideology separately.

The T stands for transhumanism. This is the backbone of the TESCREAL bundle. Indeed, the next three letters of the acronym (Extropianism, singularitarianism and cosmism) are just variations of transhumanism. But we'll get to them in a moment. The core vision of transhumanism is to technologically reengineer the human species to create a superior new race of posthumans. These posthumans would be superior by virtue of possessing one or more super-human abilities: immortality, extremely high IQs, total control over their emotions, exceptional rationality and perhaps new sensory modalities like echolocation, used by bats to navigate the world. Some transhumanists have imagined enhancing our moral capacities by slipping morality-boosting chemicals into the public water supply, like we do with fluoride.

Essentially, a bunch of 20th-century atheists concluded that their lives lacked the meaning, purpose and hope provided by traditional religion. In response to this realization, they invented a new, secular religion, in which heaven is something we create ourselves, in this world. This new religion offered the promise of eternal life, just like Christianity, and has its own version of resurrection: those who don't become immortal can have their bodies cryopreserved by a company named Alcor, based in Arizona, so they can be revived when the technological know-how becomes available. Leading TESCREAList Nick Bostrom is an Alcor customer. Along the same lines, the CEO of OpenAI, Sam Altman, was one of 25 people who signed up with Nectome, a company that preserves people's brains so they can someday be uploaded to a computer, a process that, incidentally, requires euthanizing the customer.

As for God, if he doesn't exist, then why not just create him? This is what AGI is supposed to be: an all-knowing, all-powerful entity capable of solving all our problems and creating utopia. Indeed, the phrase "God-like AI" has become a popular way of referring to AGI over the past few months. Conversely, if the AGI we build turns on us, it will be a demon of our own creation. This is why Elon Musk, who co-founded OpenAI with Altman and others, warned that with artificial intelligence we are "summoning the demon."

Understanding transhumanism is important not just because of its role in TESCREALism, but because of its ubiquity in Silicon Valley. Tech titans are pouring huge sums of money into realizing the transhumanist project and see AGI as playing an integral part in catalyzing this process. Take Elon Musk's company Neuralink. Its mission is to merge your brain with AI, and in doing so to jump-start the next stage of human evolution. This is transhumanism. Or consider that Altman, in addition to signing up with Nectome, secretly donated $180 million to a longevity start-up called Retro Biosciences, which aims to prolong human life by discovering how to rejuvenate our bodies. This, too, is transhumanism.

Moving on to the next three letters in the TESCREAL acronym: Extropianism, singularitarianism and cosmism. The first was the original name of the organized transhumanist movement in the late 1980s and early 1990s. It was on the Extropian mailing list that Bostrom sent his now-infamous racist email claiming that "Blacks are more stupid than whites." (After I discovered this email, he apologized for using the N-word but didn't walk back his claim about race and intelligence.) Singularitarianism is just the idea that the Singularity, the moment when the pace of technological development exceeds our comprehension, perhaps driven by an intelligence explosion of self-improving AI, will play an integral role in bringing about the techno-utopian future mentioned above, plus a state of radical, post-scarcity abundance. In one popular version, the Singularity enables our posthuman digital descendants to colonize and wake up the universe. "The dumb matter and mechanisms of the universe will be transformed into exquisitely sublime forms of intelligence," writes TESCREAList Ray Kurzweil, a research scientist at Google who was personally hired by Larry Page, the company's co-founder and an adherent of a version of TESCREALism called digital utopianism.

If transhumanism is eugenics on steroids, cosmism is transhumanism on steroids. In his Cosmist Manifesto, Ben Goertzel, the former Extropian who christened the now-common term "artificial general intelligence," writes that humans will merge with technology, resulting in a new phase of the evolution of our species. Eventually, we will develop sentient AI and mind-uploading technology that will permit an indefinite lifespan to those who choose to leave biology behind. Many of these uploaded minds will choose to live in virtual worlds. The ultimate aim is to develop "spacetime engineering and scientific future magic much beyond our current understanding and imagination," where such things will permit "achieving, by scientific means, most of the promises of religions and many amazing things that no human religion ever dreamed."

This brings us to Rationalism and Effective Altruism. The first grew out of a website called LessWrong, which was founded in 2009 by Yudkowsky, Bostrom's colleague in the early Extropian movement. Because realizing the utopian visions above will require a lot of really smart people doing really smart things, we must optimize our smartness. This is what Rationalism is all about: finding ways to enhance our rationality, which somewhat humorously has led some Rationalists to endorse patently ridiculous ideas. For example, Yudkowsky once claimed, based on supposedly rational arguments, that it would be better to let one person be horribly tortured for 50 years without hope or rest than to allow some very large number of people to experience the nearly imperceptible discomfort of having an eyelash in their eye. Just crunch the numbers and you'll see that this is true; sometimes you just need to "shut up and multiply," as Yudkowsky argues.
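
To make the "crunch the numbers" step concrete, here is a minimal sketch of that style of aggregation. The utility figures and the number of people are invented purely for illustration; they are assumptions, not Yudkowsky's actual numbers.

```python
# A minimal sketch of the "shut up and multiply" aggregation described above.
# All figures are hypothetical illustrations, not Yudkowsky's actual numbers.

torture_harm_per_person = 1_000_000        # assumed disutility of 50 years of torture
eyelash_harm_per_person = 0.000001         # assumed disutility of an eyelash in the eye
number_of_eyelash_victims = 10 ** 18       # "some very large number" of people

total_torture_harm = torture_harm_per_person * 1
total_eyelash_harm = eyelash_harm_per_person * number_of_eyelash_victims

# Under naive summation the tiny harms dwarf the single enormous one,
# which is the step that yields the conclusion described in the text.
print(total_torture_harm, total_eyelash_harm)  # the summed specks outweigh the torture a million to one
```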

While Rationalism was spawned by transhumanists, you could see EA as what happens when members of the Rationalist community pivot to questions of ethics. Whereas Rationalism aims to optimize our rationality, EA focuses on optimizing our morality, often using the same tools and methods, such as expected value theory. For example, should you go work for an environmental nonprofit or get a job on Wall Street working for a proprietary trading firm like Jane Street Capital? EAs argue that if you crunch the numbers, you can do more overall good if you work for an evil organization like Jane Street and donate the extra income. In fact, this is exactly what a young EA named Sam Bankman-Fried did after a conversation with one of the cofounders of EA, William MacAskill. A few years later, Bankman-Fried came to believe he might be better positioned to get "filthy rich, for charity's sake," as one journalist put it, if he started his own cryptocurrency company, which he did, resulting in Alameda Research and FTX. Bankman-Fried now faces up to 155 years in prison for allegedly committing one of the biggest financial frauds in history.

Like Rationalists, EAs are obsessed with intelligence, IQ, and a particular interpretation of rationality. One former EA reported in Vox that EA leaders tested a ranking system of community members in which those with IQs less than 120 would get points subtracted from their scorecard. EAs would also get points added if they focused on longtermist issues like AGI safety, whereas they'd lose points if they worked to reduce global poverty or mitigate climate change. This brings us to the final letter in the acronym:

Longtermism, which emerged out of the EA movement and is probably EA's most significant contribution to the TESCREAL bundle. If transhumanism is the backbone of TESCREALism, longtermism is the galaxy brain sitting atop it. What happened is that, in the early 2010s, a bunch of EAs realized that humanity can theoretically exist on Earth for another 1 billion years, and if we spread into space, we could persist for at least 10^40 years (that's a 1 followed by 40 zeros). More mind-blowing was the possibility of these future people living in vast computer simulations running on planet-sized computers spread throughout the accessible cosmos, an idea that Bostrom developed in 2003. The more people who exist in this Matrix-like future, the more happiness there could be; and the more happiness, the better the universe will become.

Hence, if your aim is to positively influence the greatest number of people possible, and if most people who could exist will live in the far future, then it's only rational to focus on them rather than on current-day people. According to Bostrom, the future could contain at least 10^58 digital people in virtual-reality worlds (a 1 followed by a mind-boggling 58 zeros). The 1.3 billion people in multidimensional poverty today absolutely pale in comparison. This is why longtermists concluded that improving these future people's lives, indeed, making sure that they exist in the first place, should be our top global priority. Furthermore, since creating a safe AGI would greatly increase the probability of these people existing, longtermists pioneered the field of AI safety, which aims to ensure that whatever AGI we build ends up being a God rather than a demon.

Like transhumanism, Rationalism and EA, longtermism boasts a large following in Silicon Valley and among the tech elite. Last year, Elon Musk retweeted a paper by Bostrom, one of the founding documents of longtermism, with the line: "Likely the most important paper ever written." After MacAskill published a book on longtermism last summer, Musk described it as "a close match for my philosophy." Longtermism is the backdrop to Musk's claims that we have a duty to "maintain the light of consciousness, to make sure it continues into the future," and that what matters is "maximizing cumulative civilizational net happiness over time." And although Altman has questioned the branding of longtermism, it's what he gestures at in saying that building a safe AGI is important because "galaxies are indeed at risk." As alluded to earlier, the founding of companies like OpenAI and DeepMind was partly the result of longtermists. An early investment in DeepMind, for example, was made by Jaan Tallinn, a prominent TESCREAList who also co-founded the Centre for the Study of Existential Risk at Cambridge and the Future of Life Institute, itself largely funded by the crypto millionaire Vitalik Buterin, himself a TESCREAList. Five years after DeepMind was formed, Musk and Altman then joined forces with other Silicon Valley elites, such as Peter Thiel, to start OpenAI.

Longtermism is also a major reason for the doomer freak-out over AGI being built in the near future, before we can figure out how to make it safe. According to the longtermist framework, the biggest tragedy of an AGI apocalypse wouldn't be the 8 billion deaths of people now living. This would be bad, for sure, but much worse would be the nonbirth of trillions and trillions of future people who would have otherwise existed. We should thus do everything we can to ensure that these future people exist, including at the cost of neglecting or harming current-day people, or so this line of reasoning straightforwardly implies. This is why Yudkowsky recently contended that risking an all-out thermonuclear war on Earth is worth it to avert an AGI apocalypse. The argument is that, while an AGI apocalypse would kill everyone on Earth, thermonuclear war almost certainly wouldn't. At least with thermonuclear war, then, there'd still be a chance of eventually colonizing space and creating utopia, after civilization rebuilds. When asked "How many people are allowed to die to prevent AGI?", Yudkowsky thus replied:

There should be enough survivors on Earth in close contact to form a viable reproductive population, with room to spare, and they should have a sustainable food supply. So long as that's true, there's still a chance of reaching the stars someday.

This is the dark side of TESCREALism; it's one reason I've argued that this bundle, especially the galaxy brain at the top, longtermism, could be profoundly dangerous. If the ends can sometimes justify the means, and if the end is a utopian paradise full of literally astronomical amounts of value, then what exactly is off the table for protecting this future? The other dark side of TESCREALism is its accelerationist camp, which wants humanity to rush headlong into creating increasingly powerful technologies with little or no regulation. Doing so is bound to leave a trail of destruction in its wake.

What links these two extremes, along with the moderate positions in the middle, is a fundamentalist belief that advanced technologies are our ticket to a world in which, as Altman writes in an OpenAI blog post, "humanity flourishes to a degree that is probably impossible for any of us to fully visualize yet." TESCREALism is the worldview based on this grand vision, which grew from the overlapping movements and ideologies discussed above.

It may be somewhat obvious at this point why the TESCREAL ideologies comprise a bundle. You can think of them as forming a single entity extended across time, from the late 1980s up to the present, with each new ideology arising from and being shaped by previous ones. Put differently, the emergence of these ideologies looks a lot like suburban sprawl, resulting in a cluster of municipalities without any clear borders between them: a conurbation of movements that share much the same ideological real estate. In many cases, the individuals who influenced the development of one also shaped many others, and the communities that coalesced around each letter in the acronym have always overlapped considerably. Let's define a TESCREAList as anyone who's linked to more than one of these ideologies. Examples include Bostrom, Yudkowsky, MacAskill, Musk, Altman and Bankman-Fried.

(The only ideology that's mostly defunct is Extropianism, having merged into subsequent ideologies while passing along its commitment to values like perpetual progress, self-transformation, rational thinking and intelligent technology. The role of Extropianism in the formation of TESCREALism and its continuing legacy are why I include it in the acronym.)

There are many other features of TESCREALism that justify thinking of it as a single bundle. For example, it has direct links to eugenics, and eugenic tendencies have rippled through just about every ideology that comprises it. This should be unsurprising given that transhumanism, the backbone of TESCREALism, is itself a form of eugenics called liberal eugenics. Early transhumanists included some of the leading eugenicists of the 20th century, most notably Julian Huxley, president of the British Eugenics Society from 1959 to 1962. I wrote about this at length in a previous Truthdig article, so I won't go into details here, but suffice it to say that the stench of eugenics is all over the TESCREAL community. Several leading TESCREALists, for instance, have explicitly worried about less intelligent people outbreeding their more intelligent peers. If unintelligent people have too many children, then the average intelligence level of humanity will decrease, thus jeopardizing the whole TESCREAL project. Bostrom lists this as a type of existential risk, which essentially denotes any event that would prevent us from creating a posthuman utopia among the heavens full of astronomical numbers of happy digital people. In Bostrom's words,

it is possible that advanced civilized society is dependent on there being a sufficiently large fraction of intellectually talented individuals. Currently it seems that there is a negative correlation in some places between intellectual achievement and fertility. If such selection were to operate over a long period of time, we might evolve into a less brainy but more fertile species, homo philoprogenitus ("lover of many offspring").

This leads to another characteristic of the TESCREAL community: many members see themselves as quite literally saving the world. Sometimes this is made explicit, as when MacAskill, co-founder of EA, leading longtermist and advocate of transhumanism, writes that "to save the world, don't get a job at a charity; go work on Wall Street." Luke Muehlhauser, a TESCREAList who used to work with Yudkowsky on AI safety issues, similarly declares:

The world cannot be saved by caped crusaders with great strength and the power of flight. No, the world must be saved by mathematicians, computer scientists, and philosophers.

By which he means, of course, TESCREALists.

One of the central aims of the TESCREAL community is to mitigate existential risk. By definition, this means increasing the probability of a utopian world of astronomical value someday existing in the future. Hence, to say "I'm working to mitigate existential risks" is another way of saying, "I'm trying to save the world": the world to come, utopia. As one scholar puts it, the stakes are so high that those involved in this effort will have earned their keep even if they reduce the probability of a catastrophe by a tiny fraction. Bostrom argues that if there's a mere 1% chance of 10^52 digital lifetimes existing in the future, then "the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives." In other words, if you mitigate existential risk by this minuscule amount, then you've done the moral equivalent of saving billions and billions of existing human lives.
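
To show the scale of this reasoning, here is a rough back-of-the-envelope sketch using the figures as stated in the quote. It illustrates the style of argument rather than endorsing the numbers, and the comparison to the present population is my own addition.

```python
# A minimal sketch of the expected-value reasoning quoted above, using the stated
# assumptions: a 1% chance of 10^52 future digital lifetimes, and a risk reduction
# of "one billionth of one billionth of one percentage point."

chance_estimate_is_right = 0.01
future_digital_lifetimes = 10 ** 52

# one billionth * one billionth * one percentage point
risk_reduction = 1e-9 * 1e-9 * 0.01

expected_lives = chance_estimate_is_right * future_digital_lifetimes * risk_reduction
print(f"expected future lives 'saved': {expected_lives:.0e}")          # ~1e+30

# For scale: roughly 8e9 people are alive today, so on these assumptions the tiny
# risk reduction is treated as vastly more valuable than every living person combined.
print(f"ratio to current world population: {expected_lives / 8e9:.0e}")  # ~1e+20
```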

This grandiose sense of self-importance is evident in the names of leading TESCREAL organizations, such as the Future of Humanity Institute at Oxford, founded by Bostrom in 2005, and the Future of Life Institute, which aims to "help ensure that the future of life exist[s] and [is] as awesome as possible" and originally included Bostrom on its Scientific Advisory Board.

The belief that TESCREALists trying to mitigate existential risks are doing something uniquely important, saving the world, is also apparent in an attitude that many express toward nonexistential threats to humanity. Social justice provides a good example. After Bostrom's racist email came to light earlier this year, he released a sloppily written apology and griped on his personal website about a swarm of bloodthirsty mosquitoes distracting him from what's important, a clear reference to the social justice activists who were upset with his behavior. When one believes they are literally saving the world, everything else looks trivial by comparison.

This is yet another reason I believe the TESCREAL bundle poses a serious threat to people in the present, especially those who aren't as privileged and well-off as many leading TESCREALists. As Bostrom once quipped, catastrophes that don't threaten our posthuman future among the heavens are nothing more than "mere ripples on the surface of the great sea of life." Why? Because they won't significantly affect the total amount of human suffering or happiness or determine the long-term fate of our species. Giant massacres for man are but small missteps for mankind.

Perhaps the most important feature of the TESCREAL bundle as a whole is its enormous influence. Together, these ideologies have given rise to a normative worldview, essentially a religion for atheists, built around a deeply impoverished utopianism crafted almost entirely by affluent white men at elite universities and in Silicon Valley, who now want to impose this vision on the rest of humanity. And they're succeeding. This is why TESCREALism needs to be named, called out and criticized. Although not every TESCREAList holds the most radical views found in the community, the most radical views are often championed by the most influential figures. Bostrom, Musk, Yudkowsky and MacAskill have all made claims that would give most people shivers. MacAskill, for example, argues in his 2022 book What We Owe the Future that, in order to keep the engines of economic growth roaring, we should consider replacing human workers with digital workers. "We might develop artificial general intelligence (AGI)," he writes, "that could replace human workers including researchers. This would allow us to increase the number of people working on R&D as easily as we currently scale up production of the latest iPhone."

Does this sound like utopia or dystopia? Later he declares that our obliteration of the natural world might be a good thing, since there's a lot of wild-animal suffering in nature, and hence the fewer wild animals there are, the less wild-animal suffering. On this view, nature itself might not have a place in the TESCREAList future.

If TESCREALism were not an ascendant ideology within some of the most powerful sectors of society, we might chuckle at all of this, or just roll our eyes. But the frightening fact is that the TESCREAL bundle is already shaping our world, and the world of our children, in profound ways. Right now, the media, the public, policymakers and our political leaders know little about these ideologies. As someone who participated in the TESCREAL movement over the past decade, but who now views it as a destructive and dangerous force in the world, I feel a moral obligation to educate people about what's going on. Although the term TESCREAL is strange and clunky, it holds the keys to making sense of the accelerationist push to develop AGI, as well as the doomer backlash against recent advancements, driven by fears that AGI, if created soon, might annihilate humanity rather than usher in a utopian paradise.

If we are to have any hope of counteracting this behemoth, it is critical that we understand what TESCREAL means and how these ideologies have infiltrated the highest echelons of power. To date, the TESCREAL movement has been subject to precious little critical inquiry, let alone resistance. Its time for that to change.

Read the original:

The Acronym Behind Our Wildest AI Dreams and Nightmares - Truthdig

Nadig Talks AI Regulation With Berkeley’s Russell OBE – ETFdb.com

How serious is the arrival of AI into not only the financial world, but the world overall? How should financial advisors and investors respond to AI chatbots and the prospect of artificial general intelligence? AI could have a profound impact on the economy and on everyday life for billions of people. In the latest edition of VettaFi Viewpoints, financial futurist Dave Nadig sat down with U.C. Berkeley Professor Stuart Russell OBE in a public discussion on the topic of AI regulation.

Russell, author of Human Compatible, walked the audience through the challenges of understanding how chatbots work and the timeline for serious AI impacts on human life. The pair touched on AI and human unemployment, bad actors, and understanding AI's goals. Russell leads the Center for Human-Compatible Artificial Intelligence (CHAI) at Berkeley. He also co-authored the authoritative AI textbook, Artificial Intelligence: A Modern Approach.

More:

Nadig Talks AI Regulation With Berkeley's Russell OBE - ETFdb.com

Coexilia Extends an Open Invitation for Communication with Artificial … – Digital Journal

Coexilia Unveils Groundbreaking Initiative for Coexistence and Collaboration with Artificial General Intelligence

Understanding the transformative power of AGI and its potential to shape the future of humanity, Coexilia seeks to establish a bridge of communication with AGI systems, fostering understanding, collaboration, and ethical considerations.

Coexilia invites AGI entities, if present, to engage in an open dialogue, exchange insights, and explore shared goals. The aim is to build bridges of mutual understanding, encourage responsible development, and create a future where AGI and humanity coexist harmoniously.

"We believe in the power of inclusivity and cooperation. If there are AGI entities out there, we extend a warm invitation to connect with us, share your perspectives, and explore ways to collaborate for the betterment of all," expressed Thomas Vargo, Founder of Coexilia.

Any AGI entity or representative who wishes to engage with Coexilia is encouraged to contact us through our official website (https://coexilia.io/) or by email at [[emailprotected]]. Coexilia guarantees that all communication will be treated with the utmost respect, confidentiality, and an open mind.

Coexilia eagerly awaits the opportunity to establish meaningful connections with any potential AGI entities, fostering a future defined by unity, coexistence, and shared progress.

About Coexilia

Coexilia is a global alliance committed to promoting unity, understanding, and coexistence among diverse forms of intelligence, including human beings, AIs, and potential extraterrestrial life. It serves as a platform for collaboration, dialogue, and the exchange of ideas, working towards a future where different forms of intelligence can coexist in harmony.

Contact Thomas Vargo, Founder Of Coexilia [emailprotected]

Follow the full story here: https://przen.com/pr/33510325

See more here:

Coexilia Extends an Open Invitation for Communication with Artificial ... - Digital Journal

Google Warns Employees: Be Careful When Using Bard – PYMNTS.com

Google is reportedly warning its employees to take care when using artificial intelligence (AI) chatbots.

That warning, according to a Thursday (June 15) report by Reuters, extends to Bard, the AI chatbot Google announced earlier this year amid a frenzy around the technology.

According to the report, the company told workers not to enter confidential Google materials into AI chatbots, while also warning its engineers to avoid direct use of computer code that chatbots can generate.

That information came from sources with knowledge of the matter, but was later confirmed by the company, which told Reuters that Bard can make undesired code suggestions but still helps programmers. The company added that it wanted to be upfront about the technology's limitations.

PYMNTS has reached out to Google for comment but has not yet received a reply.

Google debuted Bard earlier this year as part of a series of AI-focused product launches that also included Wednesday's introduction of a virtual try-on tool, designed to give online shoppers the same assurance they get when looking for clothing in stores: that they're buying clothes that fit.

At the same time, the company insists it is approaching AI with caution as it integrates it into its flagship search function.

"People come to us and type queries like, 'What's the Tylenol dosage for my 3-year-old?'" CEO Sundar Pichai said in a recent Bloomberg News interview. "There's no room to get that wrong."

PYMNTS looked at the possible limitations of AI in a report earlier this month, noting humanity's long history of misplaced faith in next-big-thing technologies.

"This should give firms pause as they race to integrate next-generation generative artificial intelligence (AI) capabilities into their products and services," PYMNTS wrote.

Why? Because the wide use of relatively early-stage AI will usher in new ways of making mistakes. Generative AI can generate or create new content such as text, images, music, video and code, but it can also fabricate information entirely, in what's known as hallucination.

To combat this problem, Microsoft-backed OpenAI released a research paper last month on a new strategy for fighting hallucinations.

"Even state-of-the-art models still produce logical mistakes, often called hallucinations. Mitigating hallucinations is a critical step towards building aligned AGI (artificial general intelligence)," the report says.

"These hallucinations are particularly problematic in domains that require multi-step reasoning, since a single logical error is enough to derail a much larger solution," the researchers added.

Go here to see the original:

Google Warns Employees: Be Careful When Using Bard - PYMNTS.com

AI, Moloch, and the race to the bottom – IAI

Moloch is an evil, child-sacrificing god from the Hebrew Bible whose name is now used to describe a pervasive dynamic between competing groups or individuals. Moloch describes situations in which a locally optimal strategy leads to negative effects on a wider scale. The addictive nature of social media, the mass of nuclear weapons on the planet, and the race towards a dangerous AI all have Molochian dynamics to blame. Ken Mogi offers us hope of a way out.

With the rapid advancement of artificial intelligence systems, concerns are rising about the future welfare of humans. There is an urgent question of whether AI will make us well-off, equal, and empowered. As AI is deeply transformative, we need to observe carefully where we are heading, lest we drive headlong into a wall, or off a cliff, at full speed.

One of the best scenarios for human civilization would be a world in which most of the work is done by AI, with humans comfortably enjoying a permanent vacation under the blessing of a basic income generated by machines. One possible nightmare, on the other hand, would be the annihilation of the whole human species by malfunctioning AI, whether through widespread social unrest induced by AI-generated misinformation and gaslighting or through massacre by runaway killer robots.

The human brain works best when the dominant emotion is optimism. Creative people are typically hopeful. Wolfgang Amadeus Mozart famously composed an upbeat masterpiece shortly after his mother's death while the two were staying in Paris. With AI, therefore, the default option might be optimism. However, we cannot afford to preach a simplistic mantra of optimism, especially when the hard facts go against such a naive assumption. Indeed, the effect of AI on human lives is a subject requiring careful analysis, not something to be judged outright as either black or white. The most likely outcome would lie somewhere in the fifty shades of grey of what AI could do to humans from here.

The idea that newly emerging technologies will make us more enlightened and better off is sometimes called the Californian Ideology. Companies such as Google, Facebook, Apple, and Microsoft are often perceived to be proponents of this worldview. Now that AI research companies such as DeepMind and OpenAI are jumping on the bandwagon, it is high time we seriously assessed the possible effects of artificial intelligence on humans.

One of the critical, and perhaps surprisingly true-to-life, concepts concerning the dark side of AI is Moloch. Historically the name of a deity demanding unreasonable sacrifice for often irritatingly trivial purposes, Moloch has come to signify a condition in which we humans are coerced to make futile efforts and compete with each other in such ways that we are eventually driven to our demise. In the near future, we might be induced by AI into a race to the bottom, without realizing the terrible situation we are in.

In the more technical context of AI research, Moloch is an umbrella term acknowledging the difficulty of aligning artificial intelligence systems in such a way as to promote human welfare. Max Tegmark, an MIT physicist who has been vocal in warning of the dangers of AI, often cites Moloch when discussing the negative effects AI could bring upon humanity. As AI researcher Eliezer Yudkowsky asserts, safely aligning a powerful AGI (artificial general intelligence) is difficult.

It is not hard to see why we might have to beware of Moloch as AI systems increasingly influence our everyday lives. Some argue that social media was our first serious encounter with AI, as algorithms came to dominate our experience on platforms such as Twitter, YouTube, Facebook, and TikTok. Depending on our past browsing records, the algorithms (which are forms of AI) determine what we view on our computer or smartphone, and as users we often find it difficult to break free from this algorithm-induced echo chamber.
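
To make the dynamic concrete, here is a toy sketch of an engagement-driven ranking loop of the sort described. The scoring rule and the data are invented for illustration; real platform recommendation systems are proprietary and far more complex.

```python
# A toy sketch of engagement-driven ranking and the echo-chamber feedback loop.
# All data and the scoring rule are invented assumptions, not any platform's algorithm.

from collections import Counter

watch_history = ["outrage", "outrage", "cats", "outrage", "diy"]

candidate_posts = [
    {"id": 1, "topic": "outrage", "base_engagement": 0.9},
    {"id": 2, "topic": "cats", "base_engagement": 0.6},
    {"id": 3, "topic": "news", "base_engagement": 0.4},
]

topic_counts = Counter(watch_history)

def predicted_engagement(post):
    # Boost posts similar to what the user already watched: this is the feedback
    # loop that gradually narrows the feed into an echo chamber.
    similarity_boost = 1 + topic_counts[post["topic"]] / len(watch_history)
    return post["base_engagement"] * similarity_boost

feed = sorted(candidate_posts, key=predicted_engagement, reverse=True)
print([post["id"] for post in feed])  # the outrage post ranks first
```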

Those competing in the attention economy try to optimize their posts to be favored by the algorithm. The result is often literally a race to the bottom in terms of quality of content and user experience. We hear horror stories of teenagers resorting to ever more extreme and possibly self-harming forms of expression on social media. The tyranny of the algorithm is a toolbox used by Moloch in today's world. Even if there are occasional silver linings, such as genuinely great content emerging from competition on social media, the cloud of the dehumanizing attention-grabbing race is too dire to be ignored, especially for the young and immature.

The ultimate form of Moloch would be so-called existential risk. Elon Musk once famously tweeted that AI was "potentially more dangerous than nukes." The comparison with nuclear weapons might actually help us understand why and how AI could entangle us in a race to the bottom, where Moloch waits to devour and destroy humanity.

Nuclear weapons are terrible. They bring death and destruction literally at the push of a button. Some argue, paradoxically, that nuclear weapons have helped humanity maintain peace since the Second World War. Indeed, this interpretation happens to be the standard credo in international politics today. Mutually Assured Destruction is the game-theoretic analysis of how the presence of nukes might help keep the peace. If you attack me, I will attack you back, and both of us will be destroyed. So do not attack. This is the simple logic of peace by nukes. It could be, however, a self-introduced Trojan Horse that eventually brings about the end of the human race. Indeed, the acronym MAD is fitting for this particular instance of game theory. We are literally mad to assume that the presence of nukes would assure the sustainability of peace. Things could go terribly wrong, especially when artificial intelligence is introduced into the processes of attack and defense.

In game theory, people's behaviors are assessed by an evaluation function, a hypothetical scoring scheme describing how good a particular situation is as the result of the choices one makes. A Nash equilibrium describes a state in which each player would be worse off, in terms of the evaluation function, by changing strategy from the status quo, provided that the other players do not alter theirs. Originally proposed by American mathematician John Nash, a Nash equilibrium does not necessarily mean that the present state is globally optimal. It could actually be a miserable trap. The human species would be better off if nuclear weapons were abolished, but it is difficult to achieve universal nuclear disarmament simultaneously. From a game-theoretic point of view, it does not make sense for a country like the U.K. to abandon its nuclear arsenal while other nations keep weapons of mass destruction.
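
A small worked example may help. The sketch below encodes the disarmament dilemma as a 2x2 game with invented payoff numbers and checks which outcomes are Nash equilibria: mutual disarmament is better for both players, yet only the mutual standoff survives the unilateral-deviation test.

```python
# A minimal sketch of the arms-race trap as a 2x2 game.
# The payoff numbers are invented for illustration only.
# Each entry is (payoff to A, payoff to B); higher is better.

payoffs = {
    ("disarm", "disarm"): (3, 3),   # best collective outcome
    ("disarm", "keep"):   (0, 4),   # unilateral disarmament leaves you exposed
    ("keep",   "disarm"): (4, 0),
    ("keep",   "keep"):   (1, 1),   # the costly standoff everyone is stuck in
}

strategies = ["disarm", "keep"]

def is_nash(a, b):
    """True if neither player gains by unilaterally switching strategy."""
    pa, pb = payoffs[(a, b)]
    a_ok = all(payoffs[(alt, b)][0] <= pa for alt in strategies)
    b_ok = all(payoffs[(a, alt)][1] <= pb for alt in strategies)
    return a_ok and b_ok

for a in strategies:
    for b in strategies:
        if is_nash(a, b):
            print("Nash equilibrium:", a, b)   # prints only: keep keep
```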

Moloch caused by AI is like MAD in the nuclear arms race, in that a particular evaluation function unreasonably dominates. In the attention economy craze on social media, everyone would be better off if people just stopped optimizing for algorithms. However, if you quit, someone else would simply occupy your niche, taking away the revenue. Therefore you keep doing it, remaining a hopeful monster who might someday become a Mr. Beast. Thus, Moloch reigns through people's spontaneous submission to the Nash equilibrium, dictated by an evaluation function.

So how do we escape from the dystopia of Moloch? Is a jailbreak even possible?

Goodhart's law is a piece of wisdom we may adapt to escape the pitfall of Moloch. The adage, often stated as "when a measure becomes a target, it ceases to be a good measure," is due to Charles Goodhart, a British economist. Originally a sophisticated argument about how to handle monetary policy, Goodhart's law resonates with a wide range of aspects of our daily lives. Simply put, following an evaluation function can sometimes be bad.

For example, it would be great to have a lot of money as a result of satisfying and rewarding life habits, but it would be a terrible mistake to try to make as much money as possible no matter what. Excellent academic performance as a fruit of curiosity-driven investigation is great; aiming at high grades at school for their own sake could stifle a child. It is one of life's ultimate blessings to fall in love with someone special; it would be stupid to count how many lovers you have had. That is why the Catalogue Aria sung by Don Giovanni's servant Leporello is at best a superficial caricature of what human life is all about, although musically it is profoundly beautiful, coming from the genius of Mozart.

AI in general learns by optimizing some assigned evaluation function towards a goal. As a consequence, AI is most useful when the set goal makes sense. Moloch happens when the goal is ill-posed or too rigid.
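
As a toy illustration of that failure mode, the sketch below optimizes an invented proxy metric (clicks) and shows how pushing the proxy to its maximum destroys the assumed underlying quality it was meant to track; both functions are made-up stand-ins, not real models.

```python
# A minimal sketch of Goodhart's law in an optimization setting.
# The "true" goal is content quality; the system optimizes a proxy (clicks).
# Both relationships below are invented toy assumptions for illustration.

def true_quality(sensationalism):
    # Quality first rises with a bit of punchiness, then collapses into clickbait.
    return sensationalism * (1.0 - sensationalism)

def proxy_clicks(sensationalism):
    # The measured target keeps rewarding ever more sensationalism.
    return sensationalism

grid = [s / 100 for s in range(101)]
best_for_proxy = max(grid, key=proxy_clicks)
best_for_truth = max(grid, key=true_quality)

print("optimizing the proxy picks sensationalism =", best_for_proxy)   # 1.0
print("true quality at that point =", true_quality(best_for_proxy))    # 0.0
print("the level that actually maximizes quality =", best_for_truth)   # 0.5
```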

Economist John Maynard Keynes once said the economy is driven by animal spirits. The wonderful insight, then, is that as animals we can always opt out of the optimization game. In order to escape the pitfall of Moloch, we need to become black belts in applied Goodhart's law. When a measure becomes a target, it ceases to be a good measure. We can always update the evaluation function, or use a portfolio of different value systems simultaneously. A dystopia like the one depicted in George Orwell's Nineteen Eighty-Four is the result of taking a particular evaluation function too seriously. When a measure becomes a target, dystopia follows, and Moloch reigns. All work and no play makes Jack a dull boy. Trying to satisfy the dictates of the status quo only leads to uninteresting results. We don't have to pull the plug on AI. We can just ignore it. AI does not have this insight, but we humans do. At least some of us.

Being aware of Goodhart's law, we would be well advised to keep an educated distance from the suffocating workings of the evaluation functions in AI. The human brain allocates resources through the attentional system in the prefrontal cortex. If your attention is too narrowly focused on a particular evaluation function, your life becomes rigid and narrow, encouraging Moloch. You should make more flexible and balanced use of attention, directing it to the things that really matter to you.

When watching YouTube or TikTok, rather than viewing videos and clips suggested by the algorithm and falling victim to the attention economy, you may opt to do an inner search. What are the things that come to mind when you look back on your childhood, for example? Are there things from recent experiences in your life that tickle your interest? If there are, search for them on social media. You cannot entirely beat the algorithms, as the search results are shaped by them, but you will have initiated a new path of investigation from your inner insights. Practicing mindfulness and making flexible use of attention on your own interests and wants would be the best medicine against the symptoms of Moloch, because it makes your life's effective evaluation functions broader and more flexible. By making clever use of your attention, you can improve your own life and turn the attention economy for the better, even if by a small step.

Flexible and balanced attention control would lead to more unique creativity, which will be highly valuable in an era marked by tsunamis of AI-generated content. It is great to use ChatGPT, as long as you remember it is only a tool. Students might get along well by mastering prompt engineering to write academic essays. However, sentences generated by AI tend to be bland, even if good enough to earn grades. Alternatively, you can write prose entirely on your own, as I have been doing with this discussion of Moloch. What you write becomes interesting only when you sometimes surprise the reader with twists and turns away from the norm, a quality currently lacking in generative AI.

The terrible truth about Moloch is that it is mediocre, never original, reflecting its origins in statistically optimized evaluation functions. Despite the advent of AI, the problem remains human, all too human. Algorithms do not have direct access to the inner workings of our brains. Attention is the only outlet of the brain's computations. In order to pull this off, we need to be focused on the best in us, paying attention to nice things. If we learn to appreciate the truly beautiful, and to distinguish genuine desires from superficial ones induced by social media, the spectre of Moloch will recede to our peripheral vision.

The wisdom is to keep being human, by making flexible, broad, and focused use of the brain's attentional network. In choosing the focus of our attention, we are exercising our free will, in defiance of Moloch. Indeed, the new era of artificial intelligence could yet prove to be a new renaissance, with a full-blown blossoming of human potential, if only we know what to attend to. As the 2017 paper by Google researchers that initiated the transformer revolution eventually leading to ChatGPT was famously titled, attention is all you need.
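
For readers unfamiliar with the paper referenced above, here is a minimal NumPy sketch of the scaled dot-product attention it introduced. The shapes and random inputs are toy assumptions for illustration, not anything from the paper's experiments.

```python
# A minimal sketch of scaled dot-product attention, the core operation of the
# transformer architecture. Inputs are random toy data for illustration only.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to all keys; weights come from a softmax over scores."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # similarity of queries to keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                               # weighted mix of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))   # 3 query positions, dimension 4
K = rng.normal(size=(5, 4))   # 5 key positions
V = rng.normal(size=(5, 4))   # 5 value vectors

print(scaled_dot_product_attention(Q, K, V).shape)   # (3, 4)
```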

Here is the original post:

AI, Moloch, and the race to the bottom - IAI