Archive for the ‘Artificial General Intelligence’ Category

Why Hawaii Should Take The Lead On Regulating Artificial … – Honolulu Civil Beat

A new state office of AI Safety and Regulation could take a risk-based approach to regulating various AI products.

Not a day passes without a major news headline on the great strides being made on artificial intelligence and warnings from industry insiders, academics and activists about the potentially very serious risks from AI.

A 2023 survey of AI experts found that 36% fear that AI development may result in a nuclear-level catastrophe. Almost 28,000 people have signed on to an open letter written by the Future of Life Institute, including Steve Wozniak, Elon Musk, the CEOs of several AI companies and many other prominent technologists, asking for a six-month pause or a moratorium on new advanced AI development.

As a public policy lawyer and also a researcher in consciousness (I have a part-time position at UC Santa Barbara's META Lab), I share these strong concerns about the rapid development of AI, and I am a co-signer of the Future of Life open letter.

Why are we all so concerned? In short: AI development is going way too fast, and it's not being regulated.

The key issue is the profoundly rapid improvement in the new crop of advanced chatbots, or what are technically called large language models, such as ChatGPT, Bard, Claude 2 and many others coming down the pike.

The pace of improvement in these AIs is truly impressive. This rapid acceleration promises to soon result in artificial general intelligence, which is defined as AI that is as good as or better than humans at almost anything a human can do.

When AGI arrives, possibly in the near future but possibly in a decade or more, AI will be able to improve itself with no human intervention. It will do this in the same way that, for example, Google's AlphaZero AI learned in 2017 how to play chess better than even the very best human or other AI chess players in just nine hours from when it was first turned on. It achieved this feat by playing itself millions of times over.

In testing, GPT-4 performed better than 90% of human test takers on the Uniform Bar Exam, a standardized test used to certify lawyers for practice in many states. That figure was up from just 10% for the previous GPT-3.5 version, which was trained on a smaller data set. Similar improvements appeared in dozens of other standardized tests.

Most of these tests are tests of reasoning, not of regurgitated knowledge. Reasoning is perhaps the hallmark of general intelligence, so even today's AIs are showing significant signs of general intelligence.

This pace of change is why AI researcher Geoffrey Hinton, formerly with Google for a number of years, told the New York Times: "Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That's scary."

In a mid-May Senate hearing on the potential of AI, Sam Altman, the head of OpenAI, called regulation crucial. But Congress has done almost nothing on AI since then, and the White House recently issued a letter applauding a purely voluntary approach adopted by the major AI development companies like Google and OpenAI.

A voluntary approach on regulating AI safety is like asking oil companies to voluntarily ensure their products keep us safe from climate change.

With the AI explosion underway now, and with artificial general intelligence perhaps very close, we may have just one chance to get it right in terms of regulating AI to ensure it's safe.

I'm working with Hawaii state legislators to create a new Office of AI Safety and Regulation because the threat is so immediate that it requires significant and rapid action. Congress is working on AI safety issues, but it seems simply incapable of acting rapidly enough given the scale of this threat.

The new office would follow the precautionary principle in placing the burden on AI developers to demonstrate that their products are safe for Hawaii before they are allowed to be used in Hawaii. The current approach by regulators is to allow AI companies to simply release their products to the public, where they're being adopted at record speed, with literally no proof of safety.

We can't afford to wait for Congress to act.

The new Hawaii office of AI Safety and Regulation would then take a risk-based approach to regulating various AI products. This means that the office staff, with public input, would assess the potential dangers of each AI product type and would impose regulations based on the potential risk. So less risky products would be subject to lighter regulation and more risky AI products would face more burdensome regulation.

My hope is that this approach will help to keep Hawaii safe from the more extreme dangers posed by AI, which another recent open letter, signed by hundreds of AI industry leaders and academics, warned should be considered as dangerous as nuclear war or pandemics.

Hawaii can and should lead the way on a state-level approach to regulating these dangers. We can't afford to wait for Congress to act, and it is all but certain that anything Congress adopts will be far too little and too late.

Read the rest here:

Why Hawaii Should Take The Lead On Regulating Artificial ... - Honolulu Civil Beat

Artificial Intelligence (AI) Explained in Simple Terms – MUO – MakeUseOf

Artificial intelligence is all the rage nowadays, with its huge potential causing a stir in almost every industry. But fully understanding this complex technology can be tricky, especially if you're not well-versed in tech topics. So, let's break down artificial intelligence into its most simple terms. How does this technology work, and how is it being used today?

You may think of humanoid robots and super intelligent computers when the term "artificial intelligence" comes to mind. But today, that's not what this technology represents.

Artificial intelligence (AI) is a branch of computer science aiming to build machines capable of mimicking human intelligence. It involves creating algorithms that allow computers to learn from and make decisions or predictions based on data rather than following only explicitly programmed instructions.

Machine learning (ML), a subset of AI, involves systems that can "learn" from data. These algorithms improve their performance as they are exposed to more data. Deep learning, a further subset of machine learning, uses artificial neural networks to make decisions and predictions. It is designed to mimic how a human brain learns and makes decisions.
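As a toy illustration of what "learning from data" means, here is a minimal sketch in plain Python. The data and the straight-line "model" are invented for illustration; real machine-learning systems use far richer models, but the principle is the same: extract a pattern from examples, then use it to predict unseen cases.

```python
# Toy "machine learning": fit a straight line to example data points
# using ordinary least squares, then use the learned pattern to
# make a prediction the model never saw during training.

def fit_line(xs, ys):
    """Return the slope and intercept that minimize squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Training data": hours practiced vs. test score (made-up numbers)
hours = [1, 2, 3, 4, 5]
scores = [52, 61, 70, 79, 88]

slope, intercept = fit_line(hours, scores)

# The learned model can now predict a score for 6 hours of practice:
predicted = slope * 6 + intercept
print(round(predicted))  # prints 97: the pattern, extrapolated
```

The point is not the arithmetic but the shape of the process: the program was never told the rule "9 extra points per hour"; it recovered that pattern from the examples alone.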

Natural language processing (NLP) is another important aspect of AI, dealing with the interaction between computers and humans using natural language. The ability of machines to understand, generate, and respond to human language is crucial for many AI applications, like virtual assistants and AI chatbots (more on these in a moment).

Artificial intelligence can be classified into two main types: narrow AI, which is designed to perform a narrow task (such as facial recognition or internet searches), and artificial general intelligence (AGI), which is an AI system with generalized human cognitive abilities so that it can outperform humans at most economically valuable work. AGI is sometimes referred to as strong AI.

However, despite many advancements, AI still does not possess the full spectrum of human cognitive abilities, and we are still far from achieving true artificial general intelligence. The current AI technologies are task-specific and cannot understand context outside of their specific programming.

Artificial intelligence is like teaching computers to learn the way humans do. They do this by looking at lots of data or examples and then using that to make decisions or predictions.

Imagine you are learning to ride a bike. After falling a few times, you start to understand how to balance and pedal at the same time. That's how machine learning, a part of AI, works. It looks at a lot of data and then learns patterns from it. Another part of AI, called natural language processing, is similar to teaching computers to understand and speak human language.

But even with all this, computers still can't fully think or understand like humans, though this may change in the future.

AI has potential and applications that stretch far beyond the tech realm alone.

Even if you're not big into tech, you've probably heard the name "ChatGPT" a few times. ChatGPT (short for Chat Generative Pre-trained Transformer) is a generative AI chatbot. But this isn't like the chatbots you may have used in the past. ChatGPT uses artificial intelligence to process natural human language to fulfill your requests better.

ChatGPT's capabilities form a long list, including fact-checking, checking spelling and grammar, creating schedules, writing resumes, and even translating languages.

ChatGPT is far from the only generative AI chatbot, with alternatives including HuggingChat, Claude, and Google Bard. These services all differ in certain ways. Some are free, some are paid, some specialize in certain areas, while others are better with general tasks.

Data analysis is a key part of our world, whether in research, healthcare, business, or otherwise. Computers have been analyzing data for many years, but using artificial intelligence can take things to the next level.

AI systems can pick up on trends, patterns, and inconsistencies more effectively than a typical computer (or human, for that matter). For example, an AI system could more distinctly highlight less obvious user habits or preferences for social media platforms, allowing them to show more personalized advertisements.

When designing products, many elements must be considered. The cost of materials, how they're sourced, and how efficiently the product will perform are just a few factors that companies need to keep in mind, and this is where AI can help.

Because AI can learn and discover new things based on the information it is given, it can be used to identify more cost-effective and sustainable materials and production practices for businesses. For instance, an AI system could list more eco-friendly materials that could be used in a product's battery, given a comprehensive data set to work from.

AI-generated art took the world by storm in 2022, with products like DALL-E, NightCafe, and Midjourney hitting the heights of popularity. These nifty tools can take a text-based prompt and generate an art piece based on the request.

For example, if you typed "purple sunset on the moon" into DALL-E, chances are you'd get more than one result. Some art generators also let you pick a style for your generated image, such as vintage, hyperrealistic, or anime.

Some artists have pushed back against AI art generators, as they use pre-existing online art to create prompted pieces. This contributes to the theft of original art, an issue that already spans the web.

It's undoubtedly exciting to think about what AI could do for humanity and the planet in the future. AI is already being used to develop new medicines, highlight more sustainable business practices, and even make our day-to-day lives easier by performing mundane tasks like cooking or cleaning.

However, many think that the future of AI is dark and dystopian. It's no surprise that this is a common assumption, given how sci-fi books and films have created some scary stereotypes around AI and its possible consequences.

AI can indeed be abused or mishandled, but this is true for any technology. We've seen Wi-Fi, VPNs, email, and even flash drives exploited by cybercriminals to spread malware and push scams. But the worry is concentrated on artificial intelligence because of its capabilities.

In January 2023, an individual posted to a hacking forum claiming they had successfully created malware using ChatGPT. It wasn't highly complex or sophisticated malware, but the ability to create malicious code via an AI chatbot got people talking. If less advanced AI is being abused now, what will happen if super-intelligent computers are exploited in the future?

This is a valid question but is also tough to answer. At the moment, there are no AI systems that can think on the same level as a human. Many have predicted what such a machine would look like, but it's all hypothetical. While some think we'll create machines with human-level cognitive abilities in the next decade, others think it will take much longer.

While human-hunting robots may be the theme of many fictional pieces, this may never even come close to happening.

If AI is regulated correctly, its development and use could be controlled to prevent bad actors from getting their hands on highly advanced technology.

There are already many discussions underway in the US and around the world about AI regulation. Some see this as a barrier, while others consider it a necessary precaution.

Licenses, laws, and general rules of thumb can all play a role in keeping AI out of the wrong hands. However, this will need to be done without restricting the development of and access to AI technology too tightly, as this could quickly become counterproductive.

Regardless of whether AI advances far beyond what it is today, it has undoubtedly transformed how computers can function. With this remarkable technology, we can achieve some incredible feats, though no one knows what the future holds for humanity and artificial intelligence.

Read more from the original source:

Artificial Intelligence (AI) Explained in Simple Terms - MUO - MakeUseOf

The Pros and Cons of Artificial Intelligence (AI) – Fagen wasanni

When AI first emerged, there was a lot of enthusiasm about its potential to reduce labor-intensive work and increase efficiency. However, as with any technological advancement, AI has its positives and negatives. In recent years, prominent figures like Elon Musk and Sam Altman have expressed concerns about the potential threats AI poses.

Artificial intelligence, or AI, is the field of data science that enables machines to perform tasks that are typically done by humans using their intelligence. It involves developing computer systems or algorithms that analyze data, learn from it, and make decisions or predictions. AI techniques include machine learning, natural language processing, computer vision, and robotics.

There are two primary categories of AI: Narrow AI and AGI (Artificial General Intelligence). Narrow AI is designed to perform specific tasks, such as generating text or serving as voice assistants like Siri and Alexa. These systems excel at their designated tasks but lack general cognitive abilities. AGI, on the other hand, represents a more advanced version of AI that aims to imitate human learning and understanding across various tasks.

One of the main concerns about AI is its potential impact on unemployment. Some believe that AI has the capacity to replace existing jobs, while others argue that it will create new opportunities and enhance existing ones. Technological revolutions have historically led to job displacement, but they have also given rise to new and exciting career paths that require new skill sets. AI can assist and augment human capabilities, creating synergies that open up new possibilities.

While there are concerns about the potential risks of AI, research on AI continues because of its demonstrated potential to assist us in various areas, improving efficiency and solving complex problems. AI has already had a transformative impact in fields like healthcare, finance, transportation, and environmental conservation.

It is crucial to strike a balance between innovation and prudence when it comes to AI. The development of Artificial General Intelligence raises concerns about the potential consequences of machines surpassing human capabilities. However, with proper safeguards and a cautious approach, AI can serve as a transformative force for the betterment of humanity.

In conclusion, AI has its pros and cons. It has the potential to revolutionize industries and improve efficiency, but it also raises concerns about unemployment and the potential risks of advanced AI. It is important to embrace change, cultivate a growth mindset, and continuously learn new skills to thrive in an ever-changing job market.

Continue reading here:

The Pros and Cons of Artificial Intelligence (AI) - Fagen wasanni

Will "godlike AI" kill us all or unlock the secrets of the universe … – Salon

Since the release of ChatGPT last November, apocalyptic warnings that AGI, or artificial general intelligence, could destroy humanity have been all over the news. "AI poses 'risk of extinction,'" says "leaders from OpenAI, Google DeepMind, Anthropic, and other AI labs," the New York Times reported last May. The previous month, TIME magazine published an article by the leading "AI doomer," Eliezer Yudkowsky, who declared that "the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die." Similarly, an AI researcher named Connor Leahy told Christiane Amanpour in an interview for CNN that the prospect of AGI killing off the entire human population was "quite likely."

At the very same time, the prospect of "God-like AI" has also inspired a flurry of utopian proclamations. Tech billionaire Marc Andreessen claims that advanced AI will radically accelerate economic growth and job creation, leading to "heightened material prosperity across the planet." It will also enable us to "profoundly augment human intelligence," cure all diseases and build an interstellar civilization. The CEO of OpenAI, Sam Altman, echoes these promises, arguing that AGI will make space colonization possible, create "unlimited intelligence and energy" and ultimately produce "a world in which humanity flourishes to a degree that is probably impossible for any of us to fully visualize yet."

All of this might seem unprecedented. There are so many dire warnings of imminent extinction in the news right now, sometimes paired with equally wild predictions that a new era of radical abundance lies just around the corner. Surely something big is happening. Yet this isn't the first time that notable scientists and self-described "experts" have announced to the public that we're on the cusp of creating a magical new technology that will either annihilate humanity or usher in a utopian world of unfathomable wonders. We've been here before, and what happened? In every case, the outcome was much less sensational than people were led to believe. Often, the hype turned out to be a giant nothingburger.

To put the frenzied hype around AGI into historical perspective, let's revisit one such episode from the early 20th century. Understanding that history will demonstrate that what we're seeing now is nothing new.

It began with the discovery of radioactivity in 1896 by the French physicist Henri Becquerel. What is radioactivity? Let's start by imagining that you place a chunk of iron in direct sunlight for a few hours and then move it to a dark room. If you touch the iron right after moving it inside, it will feel pretty hot, right? But with each passing minute its temperature will drop, until it returns to room temperature.

This is simple enough: The iron absorbed energy from the sun and then re-radiated it in the form of thermal energy, which we experience as heat. Without the sunlight (an external source of energy), the temperature of the iron will equilibrate to the temperature of its environment.

Now let's imagine a different chunk of metal. We place it in a dark, cool room for several days, only to discover that it's actually radiating energy on its own. That's what Becquerel found: The metal called uranium will give off a slight glow even if it's kept in a dark room with no external source of energy. This glow can't be seen with the naked eye, but if you place the uranium next to a photographic plate, an image of it will appear even if the uranium has been stored in a pitch-black room for weeks at a time. How can it radiate energy without an external source?

Becquerel's observation didn't get much attention at first. That all changed after Marie Curie discovered that radium, another type of metal, also produced energy on its own but in much greater quantities. In fact, you can literally see radium glowing with the naked eye in a dark room. Curie coined the word "radioactivity" to denote this phenomenon, though she had no idea how or why it was happening. A metal that could produce its own internal energy at first seemed like a violation of the laws of physics.

An explanation finally came in 1901 from a pair of physicists, Frederick Soddy and Ernest Rutherford. Their discovery was mind-blowing: Some atoms in the radioactive metal spontaneously turned into atoms of a completely different kind of metal, and each time that happened, a small amount of energy was released. That's how these metals produce energy without an external source: Uranium atoms, one at a time, morph into atoms of a different metal, thorium, through a process called radioactive decay. Atoms of thorium, which is also radioactive, then decay into other types of atoms, including radium, until the entire clump becomes a "stable" (that is, non-radioactive) form of lead, the heavy metal formerly used in paint and gasoline. That ends the process of radioactive decay, which has produced energy from beginning to end.
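Although atoms decay one at a time and at random, the process as a whole is statistically regular: after one "half-life," half of the remaining unstable atoms have transmuted. A minimal sketch (the half-life used here is roughly that of radium-226, about 1,600 years; the function models simple decay, not the full uranium-to-lead chain):

```python
# Sketch of radioactive decay as a statistical regularity:
# after each half-life, half of the remaining unstable atoms
# have transmuted into the next element in the chain.

def remaining_fraction(elapsed_years, half_life_years):
    """Fraction of the original atoms still undecayed."""
    return 0.5 ** (elapsed_years / half_life_years)

HALF_LIFE = 1600  # years: roughly radium-226's half-life

for years in (0, 1600, 3200, 4800):
    frac = remaining_fraction(years, HALF_LIFE)
    print(f"after {years:>4} years: {frac:.3f} of the radium remains")
    # prints 1.000, then 0.500, 0.250, 0.125
```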

In previous centuries, alchemists had tried to convert one type of metal into another, usually lead into gold, with a notable lack of success. What Soddy and Rutherford realized was that nature itself is an alchemist, "transmuting" materials into other types of materials through the spontaneous process of radioactive decay. Indeed, when Soddy realized what was going on, he shouted to his colleague: "Rutherford, this is transmutation!" Rutherford then shot back: "For Mike's sake, Soddy, don't call it transmutation. They'll have our heads off as alchemists." Alchemy had long since lost any respectability among professional scientists, and Rutherford didn't want to jeopardize their careers.

An even more significant discovery happened a year later, in 1902, when Soddy and Rutherford found that the amount of energy produced by radioactive decay was enormous not in "absolute" terms but "relative" to the size of the atoms. As historian Spencer Weart writes, the duo's research "showed that radioactivity released vastly more energy, atom for atom, than any other process known."

Exactly how much energy does radioactive decay produce? The answer is given by Albert Einstein's famous equation E=mc2, first published in a 1905 paper that introduced his "theory of special relativity."

That equation says two important things about the peculiar nature of our universe: First, it states that mass and energy are equivalent. They are "different manifestations of the same thing," as Einstein explained in a 1948 interview. At the time, it was assumed that mass and energy were clearly different types of phenomena, but Einstein showed that this commonsense intuitive idea was wrong.

Second, the equation states that small amounts of mass are equal to enormous amounts of energy. To calculate the amount of energy contained in some quantity of mass, you first square the "c," which stands for the speed of light (a very large number), and then multiply the resulting number (the c2) by the amount of mass in question. The result is the amount of energy you get if that mass is converted into energy. In Einstein's words, the E=mc2 equation shows "that very small amounts of mass may be converted into very large amounts of energy."
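The arithmetic described above can be carried out directly. A worked example, converting one gram of mass (the TNT comparison is an illustrative benchmark, using the standard convention of about 4.184 billion joules per ton of TNT):

```python
# Worked example of E = m * c^2: the energy equivalent of one gram.

c = 299_792_458   # speed of light in meters per second (exact, by definition)
m = 0.001         # mass in kilograms (one gram)

energy_joules = m * c ** 2
print(f"{energy_joules:.3e} J")  # prints 8.988e+13 J

# For scale: one ton of TNT releases about 4.184e9 joules, so one gram
# of mass fully converted is on the order of 21,000 tons of TNT.
tnt_tons = energy_joules / 4.184e9
print(round(tnt_tons))
```

One gram is roughly the mass of a paper clip; its full energy equivalent exceeds the yield of the Hiroshima bomb, which is exactly the disproportion between "small amounts of mass" and "large amounts of energy" that Einstein described.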

This means that atoms contain a colossal storehouse of energy "atomic energy," as it was called at first, although "nuclear energy" is more common today. This atomic energy is what radioactive materials give off when they spontaneously decay: As the atoms of one type of metal transmute into atoms of another, they lose a little bit of mass, and this lost mass is converted into energy. That's how radioactive metals like uranium and radium produce their own internal energy, without any external source.

The implications of this extraordinary discovery were profound. If there were some way to extract, harness or liberate this great reservoir of atomic energy, then tiny amounts of mass could be used to power entire civilizations. Atomic energy could usher in a new era of endless abundance, a post-scarcity world in which the energy available to us would be virtually "inexhaustible." As Soddy declared in a popular book published in 1908,

A race which could transmute matter would have little need to earn its bread by the sweat of its brow. If we can judge from what our engineers accomplish with their comparatively restricted supplies of energy, such a race could transform a desert continent, thaw the frozen poles, and make the whole world one smiling Garden of Eden. Possibly they could explore the outer realms of space, emigrating to more favourable worlds as the superfluous to-day emigrate to more favourable continents.

Elsewhere he claimed that, by releasing the energy stored in atoms, "the future would bear as little relation to the past as the life of a dragonfly does to that of its aquatic prototype," and that "a pint bottle of uranium contained enough energy to drive an ocean liner from London to Sydney and back."

Journalists ate all this up, raving about the transformative potential of atomic energy on the pages of leading newspapers and magazines. "When Rutherford and Soddy pointed out that radioactive forces might be the long-sought source of the sun's own energy," Weart writes, "the press took up the idea with relish. Instead of sustaining future civilization with solar steam boilers, perhaps scientists would create solar energy itself in a bottle!" One of the most prominent scientific voices of his day, Gustave Le Bon, prophesied that "the scientist who finds the means of economically releasing the forces contained in matter will almost instantaneously change the face of the world," adding that "the poor will be equal to the rich and there will be no more social problems."

By the 1920s, most people including many schoolchildren were familiar with the idea that atomic energy would revolutionize society. Some even predicted that controlled transmutation might produce gold as an accidental by-product, which could make people rich while solving all our energy woes. Exemplifying hopes that a Golden Age lay just ahead, Waldemar Kaempffert wrote in a 1934 New York Times article that although we couldn't yet unlock the storehouse of energy in atoms, a method would soon be discovered, and once that happened, "probably one building no larger than a small-town postoffice of our time will contain all the apparatus required to obtain enough atomic energy for the entire United States."

This was the utopian side of the hype around radioactivity. Yet just as sensational were the apocalyptic cries that the very same phenomenon could destroy the world and perhaps even the entire universe. In 1903, two years after discovering transmutation, Soddy described our planetary home as "a storehouse stuffed with explosives, inconceivably more powerful than any we know of, and possibly only awaiting a suitable detonator to cause the earth to revert to chaos." Le Bon worried about a device that, with the push of a button, could "blow up the whole earth." Similarly, in a 1904 book, scientist and historian William Cecil Dampier wrote that

it is conceivable that some means may one day be found for inducing radio-active change in elements which are not normally subject to it. Professor Rutherford has playfully suggested to [me] the disquieting idea that, could a proper detonator be discovered, an explosive wave of atomic disintegration might be started through all matter which would transmute the whole mass of the globe into helium or similar gases.

This is the idea of a planetary chain reaction: a process of contagious radioactivity, whereby the decay of one type of atom triggers the decay of other atoms in its vicinity, until the entire earth has been reduced to a ghostly puff of gas. Human civilization would be obliterated.

Some even linked this possibility with novae observed in the sky: sudden bursts of light that dazzle the midnight firmament. What if these novae were actually the remnants of technological civilizations like ours, which had in fact discovered the dreaded "detonator" referenced by Rutherford? What if novae were, as one textbook put it, "brought about perhaps by the 'super-wisdom' [i.e., the technological capabilities] of the unlucky inhabitants themselves?"

This was not a fringe idea. Frédéric Joliot-Curie, the son-in-law of Marie Curie, even mentioned it in his Nobel Prize speech, delivered in 1935 after he and his wife, Irène, discovered a way to cause radioactive decay to occur in otherwise non-radioactive materials, a phenomenon known as artificial radioactivity. "If such transmutations do succeed in spreading in matter," Joliot-Curie declared to his Nobel audience,

the enormous liberation of usable energy can be imagined. But, unfortunately, if the contagion spreads to all the elements of our planet, the consequences of unloosing such a cataclysm can only be viewed with apprehension. Astronomers sometimes observe that a star of medium magnitude increases suddenly in size; a star invisible to the naked eye may become very brilliant and visible without any telescope (the appearance of a Nova). This sudden flaring up of the star is perhaps due to transmutations of an explosive character like those which our wandering imagination is perceiving now, a process that the investigators will no doubt attempt to realize while taking, we hope, the necessary precautions.

At the extreme, some even reported to the public that "eminent scientists" thought this chain reaction of radioactive decay might spread throughout the universe as a whole, destroying not just our planet but the entire cosmos. By the 1930s, Weart notes, "even schoolchildren had heard about the risk of a runaway atomic experiment."

These were the grandiose promises and existential fears associated with radioactivity. They were promulgated by leading scientists, amplified by the media and so widely discussed that even children became familiar with them. What lay ahead, people were told, was a utopian world of limitless energy in which all societal problems would be solved. Or, on the other hand, radioactivity could bring about a dystopian nightmare in which, as Rutherford liked to say, "some fool in a laboratory might blow up the universe unawares" by inadvertently triggering a planetary chain reaction through some artificial radioactivity process.

The parallels with the current hype around AGI are striking. Today, one finds prominent figures like Andreessen and Altman proclaiming that AGI could solve virtually all our problems, ushering in a utopian world of "heightened material prosperity across the planet," "unlimited intelligence and energy" and human flourishing "to a degree that is probably impossible for any of us to fully visualize yet."

At the same time, Altman notes that the worst-case outcome of AGI could be "lights-out for all of us," meaning total human extinction, caused not by a planetary chain reaction but by a different exponential process called "recursive self-improvement," which some believe could trigger an "intelligence explosion." These doomsday prophecies have been further amplified by AI researchers like Geoffrey Hinton and Yoshua Bengio, both of whom won the Turing Award, often called the "Nobel Prize of Computing."

Meanwhile, the media has lapped up all this hype, both utopian and apocalyptic, amplifying these warnings of existential doom while also declaring that AGI could revolutionize our world for the better.

Historians of science and technology have seen this all before. The details were different, but the hype wasn't. If the past is any guide to the future, the push to create AGI by building ever-larger "language models," the systems that power ChatGPT and other chatbots, will end up a giant nothingburger despite the grand proclamations all over the media.

Furthermore, there is another important parallel between radioactivity in the early 20th century and the current race to create AGI. This was pointed out to me by Beth Singler, an anthropologist who studies the links between AI and religion at the University of Zurich. She notes that just as the dangers of the everyday uses of radioactivity were ignored, the harmful everyday uses of AI are being ignored in public discourse in favor of the potential AI apocalypse.

Not long after Marie Curie wowed audiences at a major scientific conference in 1900 with vials of radium "so active that they glowed with a pearly light," a physician who studied radioactivity with Marie Curie, Sabin Arnold von Sochocky, realized that adding radium to paint caused the paint to glow in the dark. He co-founded a company that began to manufacture this paint, which was used to illuminate aircraft instruments, compasses and watches. It proved especially useful during World War I, when soldiers began to fasten their pocket watches to their wrists and needed a way to see the time in the dark trenches to synchronize their movements.

Radium emits hazardous gamma rays, however, and exposure to them very likely caused Sochocky's own death at age 45. Worse, as Singler points out, throughout the 1910s and 1920s many women who painted these watches in factories owned by Sochocky and others came down with radiation poisoning; some became extremely ill, and others died. Amelia Maggia, for example, died after suffering a series of horrendous health complications. Several months after she quit her dial-painting job, "her lower jawbone and the surrounding tissue had so deteriorated that her dentist lifted her entire mandible out of her mouth." She passed away shortly after that.

The victims of this industry were called the "radium girls," as most factory workers were young women. They were the unwitting collateral damage of a push by Sochocky and others to get rich off the hype surrounding radium. In reality, the radium industry both generated huge profits and caused great harm, leaving many workers with devastating illnesses and killing many others.

Similar points can be made about the race to create AGI. Lost in the cacophony of grand promises and apocalyptic warnings are myriad harms affecting artists, writers, workers in the Global South and marginalized communities.

For example, in building systems like ChatGPT, OpenAI hired a company that paid Kenyan workers as little as $1.32 per hour to sift through some of the darkest corners of the web. This included "examples of violence, hate speech, and sexual abuse," leaving many workers traumatized and without proper mental health care. OpenAI also used, without permission, attribution or compensation, an enormous amount of material generated by human writers and artists, which has resulted in lawsuits for intellectual property theft that are now going to court. Meanwhile, AI systems like ChatGPT are already taking people's jobs, and some worry about widespread unemployment as OpenAI and other companies develop more advanced AI programs.

While some of this has been reported by the media, it hasn't received nearly as much coverage as the dire warnings that AGI is right around the corner, and that once it arrives, it may kill everyone on Earth. Just as the rush to cash in on radium destroyed people's lives, so too is the race to build AGI leaving a trail of damage and destruction.

The lesson here is twofold: First, we should be skeptical of claims that AGI will either bring about a utopian paradise or annihilate humanity, as scientists and crackpots alike have made identical claims in the past. And second, we must not overlook the many profound harms that AGI hype tends to obscure. If I had to guess, I'd say that AGI is the new radium, that the bubble will burst soon enough, and that companies like OpenAI will have achieved little more than hurting innocent people in the process.

Read more

from Émile P. Torres on the AI revolution

View original post here:

Will "godlike AI" kill us all or unlock the secrets of the universe ... - Salon

What is Artificial Intelligence (AI)? – Fagen wasanni

Artificial Intelligence (AI)

Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. It involves the creation of intelligent machines that can perform tasks that typically require human intelligence. AI encompasses a wide range of technologies, including machine learning, natural language processing, computer vision, and robotics.

AI has the ability to learn from data, recognize patterns, and make logical decisions. It enables machines to understand and interpret complex information, solve problems, and perform tasks with precision and accuracy. AI systems can analyze vast amounts of data in real-time, making it possible to extract valuable insights and make informed decisions.

AI is used in various fields and industries, including healthcare, finance, manufacturing, transportation, and entertainment. It has the potential to revolutionize these industries by automating processes, improving efficiency, and enhancing decision-making capabilities.

There are two types of AI: narrow AI and general AI. Narrow AI is designed to perform specific tasks, such as speech recognition or image classification. General AI, on the other hand, possesses the ability to understand, learn, and apply knowledge across various domains, similar to human intelligence.

AI is driven by algorithms, which are sets of rules and instructions that guide the behavior of AI systems. These algorithms enable machines to learn from data, adapt to new information, and improve their performance over time.
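The claim that an algorithm can "learn from data" and "improve its performance over time" can be made concrete with a toy sketch. The following example (purely illustrative, not drawn from the article; all names are hypothetical) fits the slope of a line to data by gradient descent: the parameter starts out wrong and is nudged toward lower error on every pass over the data.

```python
# Minimal sketch of "learning from data": fit y = w * x by gradient descent.
# The algorithm repeatedly measures its error and adjusts w to reduce it.

def fit_slope(xs, ys, lr=0.01, steps=500):
    """Learn the slope w that minimizes mean squared error on (xs, ys)."""
    w = 0.0
    for _ in range(steps):
        # Gradient of mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # nudge w in the error-reducing direction
    return w

# Data generated by the rule y = 3x; the learned slope converges toward 3.
data_x = [1.0, 2.0, 3.0, 4.0]
data_y = [3.0, 6.0, 9.0, 12.0]
print(round(fit_slope(data_x, data_y), 2))  # → 3.0
```

Each pass uses the data to correct the model slightly, which is the sense in which such systems "adapt to new information and improve their performance over time."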

Overall, AI has the potential to revolutionize the way we live and work. It has the ability to transform industries, improve productivity, and enhance our quality of life. However, it also raises important ethical and societal questions that need to be addressed, such as privacy, bias, and the impact on jobs. As AI continues to develop, it is crucial to strike a balance between innovation and responsible use to ensure that AI benefits humanity as a whole.

Artificial Intelligence (AI) has come a long way since its inception. The field of AI has evolved from basic rule-based systems to more advanced and sophisticated forms of AI. The evolution of AI can be categorized into three stages: weak AI, strong AI, and superintelligence.

Weak AI, also known as narrow AI, refers to AI systems that are designed to perform specific tasks within a limited scope. These systems are trained to excel at a single task, such as playing chess or recognizing human speech. Weak AI is prevalent in our daily lives, from virtual assistants like Siri and Alexa to recommendation systems that suggest products or movies based on our preferences.

Strong AI, also known as artificial general intelligence (AGI), represents AI systems that possess the ability to understand, learn, and apply knowledge across a wide range of tasks, similar to human intelligence. Strong AI aims to replicate human-like intelligence, reasoning, and problem-solving abilities. While we have made significant progress in AI, we have not yet achieved true strong AI. Current AI systems excel in specific tasks but lack the comprehensive understanding and adaptability that human intelligence offers.

Superintelligence is the hypothetical future stage of AI development, where AI systems surpass human intelligence in almost every aspect. It refers to AI systems that can outperform humans in cognitive tasks, including creative thinking, problem-solving, and decision-making. Superintelligence is a topic of active debate and speculation, with some experts warning about the potential risks associated with highly autonomous and intelligent AI systems.

The evolution of AI is driven by advancements in machine learning and deep learning algorithms. Machine learning algorithms enable AI systems to learn from data, recognize patterns, and make predictions. Deep learning algorithms, a subset of machine learning, mimic the neural networks of the human brain, enabling AI systems to perform tasks such as image and speech recognition with remarkable accuracy.
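The "artificial neuron" idea behind neural networks can also be sketched in a few lines. Below is a hypothetical minimal example (not from the article): a single perceptron, the simplest neuron model, learns the logical OR function from labeled examples by adjusting its weights whenever it misclassifies one.

```python
# A single artificial "neuron" (perceptron) learning the logical OR function.

def step(z):
    """Threshold activation: the neuron fires (1) if its input is positive."""
    return 1 if z > 0 else 0

def train_perceptron(samples, epochs=10, lr=0.1):
    """Adjust weights and bias whenever the neuron's prediction is wrong."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = step(w[0] * x1 + w[1] * x2 + b)
            err = target - pred  # zero when correct, so no update
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_data)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in or_data])  # → [0, 1, 1, 1]
```

Deep learning stacks many such units into layers with smoother activations and learned connections, which is what enables modern systems to handle tasks like image and speech recognition.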

The future of AI holds great promise and potential. As AI continues to evolve, we can expect to see further advancements in the field of robotics, natural language processing, and computer vision. AI has the power to revolutionize industries, improve efficiency, and address complex challenges facing society, such as healthcare and climate change.

However, along with the potential benefits, there are also concerns surrounding the ethical and societal implications of AI. As AI becomes more integrated into our lives, issues such as job displacement, bias in decision-making, and the ethical use of AI need careful consideration.

Follow this link:

What is Artificial Intelligence (AI)? - Fagen wasanni