Archive for the ‘Artificial General Intelligence’ Category

Former OpenAI Researcher: There's a 50% Chance AI Ends in 'Catastrophe' – Decrypt

A former key researcher at OpenAI believes there is a decent chance that artificial intelligence will take control of humanity and destroy it.

"I think maybe there's something like a 10-20% chance of AI takeover, [with] many [or] most humans dead, " Paul Christiano, who ran the language model alignment team at OpenAI, said on the Bankless podcast. "I take it quite seriously."

Christiano, who now heads the Alignment Research Center, a non-profit aimed at aligning AIs and machine learning systems with human interests, said that he's particularly worried about what happens when AIs reach the logical and creative capacity of a human being. "Overall, maybe we're talking about a 50/50 chance of catastrophe shortly after we have systems at the human level," he said.

Christiano is in good company. Recently, scores of scientists around the world signed an open letter urging that OpenAI and other companies racing to build faster, smarter AIs hit the pause button on development. Bigwigs from Bill Gates to Elon Musk have expressed concern that, left unchecked, AI represents an obvious, existential danger to people.

Why would AI become evil? Fundamentally, for the same reason that a person does: training and life experience.

Like a baby, AI is trained by receiving mountains of data without really knowing what to do with it. It learns by trying to achieve certain goals with random actions and zeroes in on correct results, as defined by training.
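To make that loop concrete, here is a minimal sketch in Python of learning by trial and error as described above. The goal value, reward function, and step size are all hypothetical stand-ins for a real training signal, not how any production system is actually trained.

```python
# Minimal sketch of trial-and-error learning: try random actions,
# keep whatever the training signal rewards. All values hypothetical.
import random

GOAL = 42  # the "correct result" defined by training

def reward(guess: int) -> int:
    """Higher reward the closer a guess is to the goal."""
    return -abs(GOAL - guess)

best_guess = random.randint(0, 100)   # start with a random action
best_reward = reward(best_guess)
for _ in range(1000):
    candidate = best_guess + random.randint(-5, 5)  # random variation
    if reward(candidate) > best_reward:             # zero in on what works
        best_guess, best_reward = candidate, reward(candidate)

print(best_guess)  # converges toward 42
```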

So far, by immersing itself in data accrued on the internet, machine learning has enabled AIs to make huge leaps in stringing together well-structured, coherent responses to human queries. At the same time, the underlying computer processing that powers machine learning is getting faster, better, and more specialized. Some scientists believe that within a decade, that processing power, combined with artificial intelligence, will allow these machines to become sentient, like humans, and have a sense of self.

That's when things get hairy. And it's why many researchers argue that we need to figure out how to impose guardrails now, rather than later. As long as AI behavior is monitored, it can be controlled.

But if the coin lands on the other side, even OpenAI's co-founder says that things could get very, very bad.

This topic has been on the table for years. One of the most famous debates on the subject took place 11 years ago between AI researcher Eliezer Yudkowsky and the economist Robin Hanson. The two discussed the possibility of reaching "foom" (which apparently stands for "Fast Onset of Overwhelming Mastery"), the point at which AI becomes exponentially smarter than humans and capable of self-improvement. (The derivation of the term "foom" is debatable.)

"Eliezer and his acolytes believe it's inevitable AIs will go 'foom' without warning, meaning, one day you build an AGI [artificial general intelligence] and hours or days later the thing has recursively self-improved into godlike intelligence and then eats the world. Is this realistic?" Perry Metzger, a computer scientist active in the AI community, tweeted recently.

Metzger argued that even when computer systems reach a level of human intelligence, there's still plenty of time to head off any bad outcomes. "Is 'foom' logically possible? Maybe. I'm not convinced," he said. "Is it real-world possible? I'm pretty sure no. Is long-term deeply superhuman AI going to be a thing? Yes, but not a foom."

Another prominent figure, Yann LeCun, also raised his voice, claiming it is "utterly impossible" for humanity to experience an AI takeover. Let's hope so.

The rest is here:

Former OpenAI Researcher: There's a 50% Chance AI Ends in 'Catastrophe' - Decrypt

OpenAI CTO Says AI Systems Should ‘Absolutely’ Be Regulated – Slashdot

Slashdot reader wiredmikey writes: Mira Murati, CTO of ChatGPT creator OpenAI, says artificial general intelligence (AGI) systems should "absolutely" be regulated. In a recent interview, Murati said the company is constantly talking with governments, regulators and other organizations to agree on some level of standards. "We've done some work on that in the past couple of years with large language model developers in aligning on some basic safety standards for deployment of these models," Murati said. "But I think a lot more needs to happen. Government regulators should certainly be very involved." Murati specifically discussed OpenAI's approach to AGI with "human-level capability": "OpenAI's specific vision around it is to build it safely and figure out how to build it in a way that's aligned with human intentions, so that the AI systems are doing the things that we want them to do, and that it maximally benefits as many people out there as possible, ideally everyone."

Q: Is there a path between products like GPT-4 and AGI?

A: We're far from the point of having a safe, reliable, aligned AGI system. Our path to getting there has a couple of important vectors. From a research standpoint, we're trying to build systems that have a robust understanding of the world similarly to how we do as humans. Systems like GPT-3 initially were trained only on text data, but our world is not only made of text, so we have images as well and then we started introducing other modalities.

The other angle has been scaling these systems to increase their generality. With GPT-4, we're dealing with a much more capable system, specifically from the angle of reasoning about things. This capability is key. If the model is smart enough to understand an ambiguous direction or a high-level direction, then you can figure out how to make it follow this direction. But if it doesn't even understand that high-level goal or high-level direction, it's much harder to align it. It's not enough to build this technology in a vacuum in a lab. We really need this contact with reality, with the real world, to see where are the weaknesses, where are the breakage points, and try to do so in a way that's controlled and low risk and get as much feedback as possible.

Q: What safety measures do you take?

A: We think about interventions at each stage. We redact certain data from the initial training on the model. With DALL-E, we wanted to reduce harmful bias issues we were seeing... In the model training, with ChatGPT in particular, we did reinforcement learning with human feedback to help the model get more aligned with human preferences. Basically what we're trying to do is amplify what's considered good behavior and then de-amplify what's considered bad behavior. One final quote from the interview: "Designing safety mechanisms in complex systems is hard... The safety mechanisms and coordination mechanisms in these AI systems and any complex technological system [are] difficult and require a lot of thought, exploration and coordination among players."
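As a rough illustration of the "amplify good behavior, de-amplify bad behavior" idea Murati describes (not OpenAI's actual implementation), here is a toy sketch in which human feedback scores reweight a tiny response "policy". Every response string, score, and the learning rate are hypothetical.

```python
# Toy sketch of preference-based reweighting, loosely in the spirit of
# RLHF: human feedback amplifies preferred responses and de-amplifies
# dispreferred ones. All names and numbers are invented for illustration.

policy = {"helpful answer": 0.25, "rude answer": 0.25,
          "evasive answer": 0.25, "harmful answer": 0.25}

# Human feedback: positive for preferred behavior, negative otherwise.
feedback = {"helpful answer": 1.0, "rude answer": -1.0,
            "evasive answer": -0.5, "harmful answer": -1.0}

LEARNING_RATE = 0.5
for _ in range(20):
    for response in policy:
        # Reward-weighted update: raise the probability of well-rated
        # responses, lower it for poorly rated ones.
        policy[response] *= 1 + LEARNING_RATE * feedback[response]
    total = sum(policy.values())
    policy = {r: p / total for r, p in policy.items()}  # renormalize

print(policy)  # probability mass concentrates on "helpful answer"
```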

Read this article:

OpenAI CTO Says AI Systems Should 'Absolutely' Be Regulated - Slashdot

USC Cinematic Arts – University of Southern California

By Desa Philadelphia

Is AI Creative? That was the central question of discussion at a forum held at the USC School of Cinematic Arts (SCA) on Wednesday, April 26, that brought together specialists in engineering, computer science, and filmmaking to talk about the capabilities, and limitations, of platforms like ChatGPT, Midjourney and DALL-E. The event, AI, Creativity & The Future of Film, was conceived by SCA alumnus Jon Dudkowski, a director and editor whose credits include Star Trek: Discovery, and Karim Jerbi, a Visiting Scholar at the Brain Imaging Group at USC's Ming Hsieh Institute, which is in the Department of Electrical Engineering at the USC Viterbi School of Engineering. Sponsored by Adobe, and presented as a joint effort between SCA and the USC Viterbi School of Engineering, the evening was an exercise in level-setting, to dispel myths about what AI is currently capable of creating. The answer? Nobody's job is in danger, yet.

The night began with a presentation by Yves Bergquist, Director of the AI & Blockchain in Media Project at USC's Entertainment Technology Center at the School of Cinematic Arts, on the science behind the most popular emerging platforms. He explained the generative models at the heart of the technologies: from Transformers like ChatGPT, which is able to sequence data to produce text for essays, prose, poetry, scripts, etc.; through Diffusion models, like DALL-E, which add and then remove noise from existing images to create new ones; to efforts at integrating existing models. He then offered this definitive assessment of ChatGPT, the essay-writing bot that has been at the center of plagiarism concerns across the university: "It is very good at writing bad and boring text. It is not going to be able to write a story. It is not going to be able to write a script. It does not understand the world at a level of symbolism, at a level of depth that we understand."
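For readers curious how the "add and then remove noise" idea looks in code, here is a heavily simplified, hypothetical sketch (assuming NumPy is available). A real diffusion model learns the denoising step from data; the placeholder denoiser below only gestures at the reverse process.

```python
# Heavily simplified sketch of the diffusion idea: blend an image with
# noise step by step, then reverse the corruption step by step.
# The "denoiser" here is a placeholder; real models learn it from data.
import numpy as np

rng = np.random.default_rng(seed=0)
image = rng.random((8, 8))  # stand-in for a training image

# Forward process: add a little Gaussian noise at each step.
noisy = image.copy()
for _ in range(10):
    noisy = 0.9 * noisy + 0.1 * rng.normal(0.0, 1.0, size=image.shape)

# Reverse process: a trained model would predict and remove the noise;
# this placeholder just pulls pixel values back toward the mean.
def toy_denoise(x: np.ndarray) -> np.ndarray:
    return 0.5 * x + 0.5 * x.mean()

restored = noisy
for _ in range(10):
    restored = toy_denoise(restored)
```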

Jerbi took the standing-room-only audience through demonstrations of the kinds of experiments being done by researchers in Neuro-AI, a new field of inquiry that compares the brain activities of humans and machines performing the same tasks. The goal is to compare the biological networks of the brain with the artificial ones. "We are seeing tremendous progress in AI but still far from human-level intelligence," said Jerbi. "Some things a toddler can do, the most advanced AI can't do." Jerbi, however, offered this discomforting fact: the next generation of AI, dubbed Artificial General Intelligence, is focused on closing that gap. The key word is "General," meaning the ability to apply instructions innovatively. Today's AI might use a hammer to just hit the nail it is instructed to pound; but generalized intelligence might then apply the hammer to breaking up rocks, without being told.

Dudkowski then moderated a panel discussion in which Bergquist and Jerbi were joined by filmmakers Chad Nelson, whose film Critterz features characters created using DALL-E; Mary Sweeney, who produced and/or edited several David Lynch projects; and SCA alumna Athena Wickham, the executive producer of Westworld and The Peripheral. They were also joined by William Swartout, Chief Technology Officer of the USC Institute for Creative Technologies.

No one on the panel yielded to any suggestion that AI, in and of itself, can be creative. Instead, the consensus was that as a tool, it could facilitate faster iterations of works like script drafts, storyboards and production design. "What excites me for myself is being able to use it like a tool to accelerate the process and to see what you have and don't have more quickly and inexpensively," said Wickham. "What scares me is people getting lazy with it. I do worry that I'm going to start getting a lot of scripts and pitches that feel like someone hasn't taken the time to edit it and put their own spin on it, and that's going to piss me off."

Nelson concurred: "I personally haven't seen an AI image where I think that's all that needs to happen, it's done. It doesn't know good from bad. Someone still has to say that's good."

Sweeney worried that AI platforms will encourage more of the kind of device addiction that has been linked to depression in young people. But she described her approach as "cautiously curious" and compared new approaches to the shift from analog to digital film editing: "I'm always interested in new tools."

Essentially reading the room, Swartout acknowledged the attention AI platforms have been receiving in the press lately, and succinctly summarized the state of AI creativity at this moment: "In the popular mind we are going to think we are much further than we are."

Originally posted here:

USC Cinematic Arts - University of Southern California

AI, the Future of Work, and the Education That Prepares You … – City University of New York

"If artificial intelligence (AI) and machine learning are to become so advanced as to compete with highly trained professionals, such as accountants, lawyers, and doctors," writes Baruch President S. David Wu in his April blog post, "it may be time for us to think how this ups the game for humans." His advice for students: embrace the kind of education that will best prepare you for a future where AI, machine learning, and other forms of white-collar automation are reality.

In late March, I chatted with a few Baruch faculty members after an annual cross-college research symposium. Over the course of our conversation, the topic turned to artificial intelligence (AI)specifically ChatGPTand whether it is any different from other high-tech breakthroughs we have experienced in our lifetime.

Two opposing views quickly emerged: first, AI is similar to other technological tools, such as smartphones and the internet, that altered how we do things, and we will learn to adapt to it too; and second, AI is entirely different from anything we have seen before and will change every facet of our lives, replace a massive number of jobs, and create an element of uncertainty that could make the doomsday scenario of robots taking over the world a real possibility. Like other predictions of this kind, I believe it is important to give them due respect and consideration, because when we understand the risks of a technology, we can take steps to mitigate those risks.

On college campuses, another obvious concern is the potential for widespread cheating using the software, a problem that has received a great deal of national attention. Because numerous articles have already addressed, quite eloquently, how to ensure academic integrity in the face of ChatGPT and other tools, I won't spend time discussing that topic here. I'd rather focus on a broader issue: If AI and machine learning are to become so advanced as to compete with highly trained professionals, such as accountants, lawyers, and doctors, it may be time for us to think how this ups the game for humans. As for our students, you need to embrace the kind of education that will best prepare you for a future where AI, machine learning, and other forms of white-collar automation are reality.

For millennia, humans existed as members of agrarian societies, until the last three centuries, when expeditious advancements in technology resulted in dramatic shifts in the way we think and live. The steam engine, initially invented in 1712 and then improved upon over several decades, helped usher in the Industrial Revolution. The internal combustion engine, first invented around 1860, was revolutionary for transportation and travel. Access to electricity, signified by the invention of the modern light bulb in 1880, then set the stage for life today.

Between the 1880s and 1920s, the combination of the telephone, wireless telegraph, and radio made it possible to broadly share news and information, connecting the world like never before. The popularization of television in the 1950s quickly transformed mass communication and merged it with entertainment, creating an even larger shared cultural experience. More recently, the computer and internet further expanded communication and commerce and helped create a truly global economy and society.

History shows us that society has been continuously and profoundly impacted by technological advances. While affected unequally by the associated social changes and transitions, humans have always found ways to adapt to new realities. With each generational innovation, we adjusted our responsibilities, as well as our expectations and aspirations, by adopting new skills that worked with the technology, which moved mankind to a higher rung on the evolutionary ladder. Through this timeline, we can see that the human race has not only survived and prevailed but, by most objective measures, thrived. I think of AI as the next step of that evolutionary process.

When I started my career as a professor more than three decades ago, I taught a series of graduate courses in artificial intelligence, partly because I was fascinated by the technology and its potential. While I am astonished by the rapid developments made in AI since then, most of the basic principles that created programs such as ChatGPT were known years ago.

The program is a large language model that can be trained to mimic the human brain in its capacity to collect data, detect patterns, learn, and evolve. ChatGPT was trained on massive amounts of data, millions of documents, and trillions of words, leveraging rapid advancements in computing power that allow mere concepts to become reality. What amazes people is ChatGPT's ability to generate massive strings of computer code or even creative text formats, such as emails, letters, poems, scripts, and musical pieces, and to do so at a rate of 200 words per second.
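A toy illustration of that pattern-detection idea, at a vastly smaller scale than ChatGPT: the sketch below builds a bigram model from a few words and generates text by sampling likely next words. The corpus and names are invented for illustration only.

```python
# Toy bigram "language model": learn which word follows which, then
# generate text by sampling. Corpus and scale are purely illustrative;
# models like ChatGPT learn far richer patterns from trillions of words.
import random
from collections import defaultdict

corpus = "the model reads text and the model learns patterns in the text".split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)  # record observed successors

word, output = "the", ["the"]
for _ in range(8):
    choices = next_words[word]
    word = random.choice(choices) if choices else "the"
    output.append(word)

print(" ".join(output))  # plausible-looking, pattern-driven text
```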

Even more impressive is the rate of advancement. OpenAI, the company that created the product, said its latest version, GPT-4, recently passed a simulated law school bar exam with a score around the top 10 percent of test takers. OpenAI also reported GPT-4 performed strongly on the LSAT, GRE, SAT, and many AP exams. By contrast, less than a year ago, the prior version scored in the bottom 10 percent on those same exams.

With the development and rapid progress of AI, the fear of human replacement is very real, and not completely unfounded.

ChatGPT and similar programs have already been used for a variety of tasks in businesses: at accounting firms to automate data entry and save companies millions; at law firms to research legal precedents; and at small businesses to generate marketing materials. As this technology continues to evolve, it is likely we will see even more innovative applications in the not-so-distant future that go beyond replacing routine duties.

In fact, the next major milestone for AI and machine learning is artificial general intelligence (AGI). According to the Stanford Encyclopedia of Philosophy, AGI is a computer system that would have "the ability to understand or learn any intellectual task that a human being can." Some experts believe AGI is achievable in the near future, while others believe that it is still decades away. As the field of AGI is rapidly evolving, it is possible we will see significant progress in the years to come. Further, it is conceivable that a large amount of work currently done by human beings will be replaced by such mechanisms, at least ones that have well-defined tasks.

An October 2020 report by the World Economic Forum projected that, while machines with AI will assume about 85 million jobs by 2025, nearly 100 million jobs will be created simultaneously thanks to the same technology. There is no doubt, then, that the best way for all professionals, especially you, our students, to prepare for and adapt to these forthcoming opportunities is to understand how your chosen path is likely to evolve and to develop the parts of yourselves that cannot be replaced by technology. This means cultivating creativity, empathy, and other qualities that make us uniquely human. It means finding new ways to connect and build meaningful relationships. It also means learning to think critically and to question assumptions, rather than simply following established conventions. While AI systems can appear to be creative, that is achieved largely by permutation (i.e., trying millions of combinations by brute force and picking the best one) and not by imagination. By thinking outside the box, human beings are able to come up with thoughtful and far-reaching solutions.
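To see what "creativity by permutation" means in practice, consider this hypothetical sketch: it mechanically enumerates word combinations and picks the one a placeholder score likes best, all brute force and no insight. The word list and scoring rule are invented for illustration.

```python
# Toy sketch of "creativity by permutation": enumerate combinations by
# brute force and keep whichever one a placeholder score rates highest.
from itertools import permutations

WORDS = ["bright", "cold", "silent", "moon", "river", "stone"]

def score(phrase: tuple) -> int:
    """Placeholder aesthetic score; invented for illustration only."""
    return sum(len(word) for word in phrase)

# Try every 3-word ordering (120 of them) and pick the "best".
best = max(permutations(WORDS, 3), key=score)
print(" ".join(best))
```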

The education that prepares you for the future is one that requires you to move beyond traditional classrooms and learn how to be human again. It is more important than ever that you not only hone your technical skills but develop excellent social and emotional intelligence. I made the point earlier that exposure to art, music, and other cultures expands our experiential base and our ability to relate to others, to learn how to work collaboratively, communicate effectively, and build relationships based on trust, respect, and mutual understanding. These are the foundations of ethical decision making and problem solving.

The education that prepares students for the future is one that develops them into changemakers and leaders as the world makes giant leaps under our watchful eyes. It is an education that instills in us an ability to learn and a love for learning, for life: not only the latest professional skills or the highest level of digital savvy, but everything that is needed to explore, grow, and overcome. If you are lucky, you will also develop a sense of purpose and meaning in your life, finding fulfillment and satisfaction in whatever you pursue. And that is how we prepare for the future.

Read more from the original source:

AI, the Future of Work, and the Education That Prepares You ... - City University of New York

How artificial intelligence is already powering work in B.C. – Business in Vancouver

The rapid pace of global AI development has some calling for a clutch, if not a brake

By Nelson Bennett, ChatGPT | May 1, 2023, 3:30pm

Sanctuary AI's general-purpose robots are remotely piloted or supervised by people | Sanctuary AI

General-purpose robots that may soon be able to assume manual tasks performed by astronauts in space. Programs for self-driving cars that understand human behaviour. Developing new drugs to fight cancer.

These are some of the novel ways in which B.C. companies are using machine learning and artificial intelligence, to the clear potential benefit of humanity.

But like nuclear fission, machine super-intelligence is a Promethean power with the potential to be corrupted, which is why there is now a sudden push to erect guardrails and develop ethical guidelines and regulations before AI either becomes autonomous or simply falls into the wrong hands.

Elon Musk and Yoshua Bengio, a Canadian pioneer in deep learning, are among the more than 27,000 people who have signed an open letter calling for a six-month moratorium at all AI labs until concerns about the technology can be addressed. Just last week, KPMG convened what might be described as an emergency summit in Vancouver to discuss AI and the opportunities and challenges this rapidly developing technology presents.

"The purpose of it was really to start a conversation around what's becoming very clearly a very transformative piece of technology that is just accelerating in terms of its adoption," said Walter Pela, regional managing partner for KPMG. "There's obviously concerns and issues. At the same time, it is a tool that's being adopted."

In fact, it's being adopted by businesses in the U.S. a lot faster than in Canada, according to a KPMG survey released last week.

"The pace in Canada right now of AI adoption in business is about half of what it is in the U.S., according to a recent poll we did in February," Pela said.

Vancouver does not have pure-play AI companies or institutes, like Montreal's Mila research institute, but it has developed a hub of applied AI companies.

Computer scientists have been developing machine learning and artificial intelligence for decades. But it wasn't until San Francisco, Calif.-based OpenAI made its ChatGPT chatbot available to the public that ordinary people got to see just how powerful this one type of AI already is.

The pace of OpenAI's progress has generated both awe and alarm.

Some of the concerns around generative AI programs like ChatGPT are that they could be used for fraud, cybercrime and the amplification of misinformation. Another concern is that their level of disruption, at least similar in scale to that of the internet, if not greater, could put a lot of people in creative fields and knowledge industries out of work in fairly short order.

ChatGPT is just one type of generative AI technology that has the capacity to generate text, images, videos or music that look or sound like they were created by humans.

ChatGPT is text-based, and is basically like a super digital library containing a massive corpus of text from the Internet: a library with the ability to learn, to respond to commands and to write anything from song lyrics to HTML code for websites, all in about 30 seconds. You can ask it to write an essay on virtually any topic, and then, half a minute later, ask to have that essay rewritten in almost any language.

Diffusion AI is a text-to-image model. Diffusion AI programs like DALL-E, Midjourney and Stable Diffusion have the potential to displace illustrators. In fact, that may be the biggest immediate threat that AI poses: not rogue machines turning their human masters into servants, but sudden, massive displacement of workers in certain industries, such as web design.

A Vancouver company called Durable, for example, uses AI for a program that can build basic websites for any type of business in 30 seconds.

"Any knowledge worker that is trained to do certain things and already they're interfacing in the digital realm, that's the first thing that gets impacted," said Handol Kim, CEO of Variational AI and a board director for AInBC. "So, content writers? Absolutely already happening. Graphic design, already happening. Lawyers? Starting to happen. Accountants, starting to happen. Software developers? Already you're getting decent code. It's not great, but it's not bad. Here's the thing: it gets better. Next year, it will get twice as good. The year after that, it will get five times as good."

"Eventually it will be able to make movies. Anything that's represented digitally and can be manipulated digitally, eventually it can get to a level that's uncanny."

"I think it's fairly clear that there will be job dislocation in fairly short order," said Steve Lowry, executive director of AInBC. "The fastest change, I think, we'll see in the creative realm: generative AI changing the job of designers, photographers, marketers, like, overnight basically."

Though AI threatens to make some jobs obsolete, it also creates new opportunities, including jobs in applied AI.

A number of companies in Vancouver are using various types of machine learning and AI for a wide range of applications.

Sanctuary AI, a B.C. company co-founded by Suzanne Gildert and Geordie Rose (the founder of D-Wave Systems, which built the world's first quantum computer), is using AI in the development of humanoid general-purpose robots.

The company is using AI to develop a cognitive architecture for its robots that will mimic the different subsystems in a person's brain. The company expects the robots could be used to replace humans to do work that is dangerous, tedious or in the vacuum of space.

"In the not-too-distant future, Sanctuary technology will help people explore, settle, and prosper in outer space," the company said in a news release last year, after securing $75 million in a Series A financing round.

Inverted AI is a Vancouver company that uses deep learning and generative AI to understand the behaviour of drivers, cyclists and pedestrians, for companies developing self-driving vehicles.

Companies developing self-driving cars or advanced driver-assistance systems use simulators. Inverted AI helps to add the irrational human element to those simulations by recording traffic with a drone and then using machine learning to learn how humans behave in traffic.

"We record how people behave on the road, both as drivers but also as pedestrians, cyclists and so on, and we use that to improve the realism of simulations for self-driving cars," said Inverted AI CTO Adam Scibior, an adjunct professor at the University of British Columbia's computer science department. "We basically make those more realistic."

Variational AI is using a type of machine learning, a variational autoencoder, to identify small molecules that will bind to protein kinases associated with cancer and tumors. But there are about 500 protein kinases in the human genome, all similar in structure, and finding the right molecule to bind only to kinases associated with cancers is a massive trial-and-error challenge.

"If you have a small molecule that binds to one kinase, it's going to bind to many others, and you don't want that," Handol Kim explained.

Rather than hunt for pre-existing molecules, then, Variational AI uses generative machine learning to make new molecules. In other words, rather than trying to find the right key out of hundreds of options, Variational AI is using machine learning to just cut new keys.
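A conceptual sketch of that "cut new keys" approach, with every component stubbed out: sample latent vectors, decode them into candidate molecules, and keep the best-scoring ones. The decode() and predicted_affinity() functions here are hypothetical placeholders; in a real system both are trained on chemical data, and molecules are not simple letter strings.

```python
# Conceptual sketch of generating new molecules instead of searching
# existing ones. decode() and predicted_affinity() are stand-ins for a
# trained VAE decoder and a learned binding-affinity model.
import random

ALPHABET = "CNOH"  # toy stand-in for molecular building blocks

def decode(latent):
    """Placeholder decoder: map a latent vector to a candidate molecule."""
    return "".join(ALPHABET[int(abs(z) * 10) % len(ALPHABET)] for z in latent)

def predicted_affinity(molecule):
    """Placeholder scorer for binding to the target kinase."""
    return molecule.count("N") - 0.5 * molecule.count("H")

candidates = []
for _ in range(100):
    latent = [random.gauss(0, 1) for _ in range(16)]  # sample latent space
    molecule = decode(latent)                          # "cut a new key"
    candidates.append((predicted_affinity(molecule), molecule))

# Keep the candidates the model predicts will bind the target best.
print(sorted(candidates, reverse=True)[:3])
```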

The generative chemistry process the company uses has the potential to dramatically accelerate the drug discovery process.

It can take a decade and up to $1 billion to $2 billion to take a new drug through clinical trials and approval for use. Kim said using machine learning may be able to dramatically reduce both the time and costs associated with new drug discovery.

"What we're trying to do is turn years into months," Kim said. "We're trying to turn pre-clinical development, move it from hundreds of millions of dollars to single-digit millions."

nbennett@biv.com

twitter.com/nbennett_biv

See the original post here:

How artificial intelligence is already powering work in B.C. - Business in Vancouver