Archive for the ‘Artificial General Intelligence’ Category

How Auto-GPT will revolutionize AI chatbots as we know them – SiliconANGLE News

Artificial intelligence chatbots such as OpenAI LP's ChatGPT have reached a fever pitch of popularity recently, not just for their ability to hold humanlike conversations, but because they can perform knowledge tasks such as research, search and content generation.

Now there's a new contender taking social media by storm that extends the capabilities of OpenAI's offering by automating its abilities even further: Auto-GPT. It's part of a new class of AI tools called autonomous AI agents, which take the power of GPT-3.5 and GPT-4, the generative AI technologies behind ChatGPT, and use it to approach a task, build on their own knowledge, and connect apps and services to automate tasks and perform actions on behalf of users.

ChatGPT might seem magical to users for its ability to answer questions and produce content based on user prompts, such as summarizing large documents, generating poems and stories, or writing computer code. However, it's limited in what it can do because it's capable of doing only one task at a time. During a session with ChatGPT, a user can prompt the AI with only one question at a time, and refining those prompts or questions can be a slow and tedious journey.

Auto-GPT, created by game developer Toran Bruce Richards, takes away these limitations by allowing users to give the AI an objective and a set of goals to meet. It then spawns a bot that acts much as a person would, using OpenAI's GPT model to run prompts in pursuit of that goal. Along the way, it learns to refine its prompts and questions to get better results with every iteration.

It also has internet connectivity in order to gather additional information from searches. Moreover, it has short- and long-term memory through database connections so that it can keep track of sub-tasks. And it uses GPT-4 to produce content such as text or code when required. Auto-GPT is also capable of challenging itself when a task is incomplete and filling in the gaps by changing its own prompts to get better results.

According to Richards, although current AI chatbots are extremely powerful, their inability to refine their own prompts on the fly and automate tasks is a bottleneck. "This inspiration led me to develop Auto-GPT, which can apply GPT-4's reasoning to broader, more complex problems that require long-term planning and multiple steps," he told Vice.

Auto-GPT is available as open source on GitHub. It requires an application programming interface key from OpenAI to access GPT-4. And to use it, people will need to install Python and a development environment such as Docker or VS Code with the Dev Containers extension. As a result, it might take a little bit of technical know-how to get going, though there's extensive setup documentation.

In a text interface, Auto-GPT asks the user to give the AI a name, a role, an objective and up to five goals that it should reach. Each of these defines how the AI agents will approach the action the user wants and how it will deliver the final product.

First, the user sets a name for the AI, such as "RestaurantMappingApp-GPT," and then sets a role, such as "Develop a web app that will provide interactive maps for nearby restaurants." The user can then set a series of goals, such as "Write a back-end in Python" and "Program a front end in HTML," or "Offer links to menus if available" and "Link to delivery apps."
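Put together, the example above maps onto a run configuration roughly like the following. This is a sketch of Auto-GPT's settings-file format; exact key names may differ between versions, so treat it as illustrative rather than authoritative:

```yaml
# ai_settings.yaml (illustrative sketch)
ai_name: RestaurantMappingApp-GPT
ai_role: Develop a web app that will provide interactive maps for nearby restaurants
ai_goals:
  - Write a back-end in Python
  - Program a front end in HTML
  - Offer links to menus if available
  - Link to delivery apps
```

The name, role and goal list are the entire brief; everything else (subtask planning, prompt refinement, tool use) is decided by the agent at run time.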

Once the user hits enter, Auto-GPT will begin launching agents, which produce prompts for GPT-4 and then work toward the original role and each of the different goals. Finally, it will begin refining and recursing through the different prompts that will allow it to connect to Google Maps using Python or JavaScript.

It does this by breaking the overall job into smaller tasks and working on each, and it uses a primary monitoring AI bot that acts as a manager to make sure they stay coordinated. This particular prompt asks the bot to build a somewhat complex app that could go awry if it doesn't keep track of a number of different moving parts, so it might take a large number of steps to get there.
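Auto-GPT's real implementation is far more involved, but the manager-plus-task-queue pattern the article describes can be sketched in a few lines of Python. Everything here is hypothetical stand-in code: `plan` and `execute` stand in for GPT-4 calls, and the names are invented for illustration:

```python
from collections import deque

def plan(objective):
    # Stand-in for an LLM call that breaks the objective into subtasks.
    return [f"research: {objective}", f"build: {objective}", f"review: {objective}"]

def execute(task):
    # Stand-in for an LLM call that performs one subtask and may
    # propose follow-up tasks it discovered along the way.
    return f"result of {task}", []

def run_agent(objective, max_steps=25):
    """Manager loop: keep a queue of subtasks, execute them one at a
    time, and append any newly discovered subtasks to the queue."""
    tasks = deque(plan(objective))
    results = []
    steps = 0
    while tasks and steps < max_steps:  # cap steps to avoid runaway loops
        task = tasks.popleft()
        result, new_tasks = execute(task)
        results.append(result)
        tasks.extend(new_tasks)
        steps += 1
    return results

print(run_agent("interactive restaurant map app"))
```

The `max_steps` cap is the kind of guardrail a real deployment needs, since (as discussed below in the article) an agent that keeps spawning subtasks can loop indefinitely.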

With each step, each AI instance will narrate what it's doing and even criticize itself in order to refine its prompts, depending on its progress toward the given goal. Once it reaches a particular goal, each instance will finalize its process and return its answer to the main management task.

Trying to get ChatGPT or even the more advanced, subscription-based GPT-4 to do this without supervision would take a large number of manual steps that would have to be attended to by a human being. Auto-GPT does them on its own.

The capabilities of Auto-GPT are beneficial for neophyte developers looking to get ahead in the game, Brandon Jung, vice president of ecosystem at AI-code completion tool provider Tabnine Ltd., told SiliconANGLE.

"One benefit is that it's a good introduction for those who are new to coding, and it allows for quick prototyping," Jung said. "For use cases that don't require exactness or have security concerns, it could speed up the creation process without having to be part of a broader system that includes an expert for review."

Being able to build apps rapidly, including all the code all at once, from a simple series of text prompts would put a lot of new code templates into the hands of developers, essentially providing them with rapid solutions and foundations to build on. However, that code would have to go through a thorough review before being put into production.

That's just one example of Auto-GPT's capabilities. It has wide-reaching possibilities that are currently being explored by developers, project managers, AI researchers and anyone else who can download its source code.

"There are numerous examples of people using Auto-GPT to do market research, create business plans, create apps, automate complex tasks in pursuit of a goal, such as planning a meal, identifying recipes and ordering all the ingredients, and even execute transactions on behalf of the user," Sheldon Monteiro, chief product officer at the digital business transformation firm Publicis Sapient, told SiliconANGLE.

With its ability to search the internet, Auto-GPT can be tasked with quick market research such as "Find me five gaming keyboards under $200 and list their pros and cons." With its ability to break a task up into multiple subtasks, the autonomous AI could then rapidly search multiple review sites, produce a market research report, and come back with a list of gaming keyboards that come in under that amount, supplying their prices as well as information about them.

A Twitter user named MOE created an Auto-GPT bot named Isabella that can autonomously analyze market data and outsource work to other AIs. It does so by using the AI framework LangChain to gather data autonomously and perform sentiment analysis on different markets.

Because Auto-GPT has access to the internet and can take actions on behalf of the user, it can also install applications. In the case of Twitter user Varun Mayya, who asked the bot to build some software, it discovered that he did not have Node.js installed, an environment that allows JavaScript to be run locally instead of in a web browser. As a result, it searched the internet, discovered a Stack Overflow tutorial and installed Node.js for him so it could proceed with building the app.

Auto-GPT isn't the only autonomous AI agent currently available. Another that has come into vogue is BabyAGI, which was created by Yohei Nakajima, a venture capitalist and artificial intelligence researcher. AGI refers to artificial general intelligence, a hypothetical type of AI that would have the ability to perform any intellectual task, though no existing AI is anywhere close. BabyAGI is a Python-based task management system that, like Auto-GPT, uses the OpenAI API to prioritize and build new tasks toward an objective.

There are also AgentGPT and GodMode, which are much more user-friendly in that they use a web interface instead of needing to be installed on a computer, so they can be accessed as a service. These services lower the barrier to entry because they don't require any technical knowledge to use, and they perform similar tasks to Auto-GPT, such as generating code, answering questions and doing research. However, they can't write documents to the computer or install software.

These tools do have drawbacks, however, Monteiro warned. The examples on the internet are cherry-picked and paint the technology in a glowing light. For all the successes, there are a lot of issues that can happen when using it.

"It can get stuck in task loops and get confused," Monteiro said. "And those task loops can get pretty expensive, very fast, with the costs of GPT-4 API calls." Even when it does work as intended, it might take a fairly lengthy sequence of reasoning steps, each of which eats up expensive GPT-4 tokens.

Accessing GPT-4 costs money that varies depending on how many tokens are used. Tokens represent words or parts of phrases sent through the chatbot. Charges range from 3 cents per 1,000 tokens for prompts to 6 cents per 1,000 tokens for results. That means using Auto-GPT to run through a complex project, or letting it get stuck in a loop unattended, could end up costing a few dollars.
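Those per-token rates make the arithmetic easy to check. A quick back-of-the-envelope estimate, using the rates cited above ($0.03 per 1,000 prompt tokens, $0.06 per 1,000 result tokens; the step and token counts are illustrative assumptions, not measured figures):

```python
PROMPT_RATE = 0.03 / 1000      # dollars per prompt token
COMPLETION_RATE = 0.06 / 1000  # dollars per result token

def run_cost(steps, prompt_tokens_per_step, completion_tokens_per_step):
    """Estimated dollar cost of an agent run with roughly uniform steps."""
    return steps * (prompt_tokens_per_step * PROMPT_RATE
                    + completion_tokens_per_step * COMPLETION_RATE)

# A hypothetical 50-step run sending ~2,000 prompt tokens and getting
# ~500 result tokens back per step:
print(f"${run_cost(50, 2000, 500):.2f}")  # prints "$4.50"
```

At $0.09 per step under these assumptions, an unattended loop that runs for a few hundred steps would climb well past "a few dollars," which is why the cost warnings above matter.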

At the same time, GPT-4 can be prone to errors, known as hallucinations, which could spell trouble during the process. It could come up with totally incorrect or erroneous actions or, worse, produce insecure or disastrously bad code when asked to create an application.

"[Auto-GPT] has the ability to execute on previous output, so even if it gets something wrong, it keeps going," said Bern Elliot, a distinguished vice president analyst at Gartner. "It needs strong controls to avoid it going off the rails and keeping on going. I expect misuse without proper guardrails will cause some damaging, unexpected and unintended outcomes."

The software development side could be equally problematic. Even if Auto-GPT doesn't make a mistake that causes it to produce broken code, which would cause the software to simply fail, it could create an application riddled with security issues.

"Auto-GPT is not part of a full software development lifecycle (testing, security, et cetera), nor is it integrated into an IDE," Jung said, warning about the potential issues that could arise from misuse of the tool. "Abstracting complexity is fine if you are building on a strong foundation. However, these tools are by definition not building strong code and are encouraging bad and insecure code to be pushed into production."

Tools such as Auto-GPT, BabyAGI, AgentGPT and GodMode are still experimental, but there are broader implications in how they could be used to replace routine tasks such as vacation planning or shopping, explained Monteiro.

Microsoft has already developed simple examples with a plugin for Bing Chat. It allows users to ask for dinner suggestions, at which point its AI, which is powered by GPT-4, rolls up a list of ingredients and then launches Instacart to have them prepared for delivery. Although this is a step in the direction of automation, bots such as Auto-GPT are edging toward a potential future of all-out autonomous behavior.

A user could ask for Auto-GPT to look through local stores, prepare lists of ingredients, compare prices and quality, set up a shopping cart and even complete orders autonomously. At this experimental point, many users may not be willing to allow the bot to go all the way through with using their credit card and deliver orders all on its own, for fear that it could go haywire and send them several hundred bunches of basil.

A similar future, where an AI using Auto-GPT does this work in place of travel agents, may not be far away. "Give it your parameters (beach, four-hour max travel, hotel class) and your budget, and it will happily do all the web browsing for you, comparing options in quest of your goal," said Monteiro. "When it is done, it will present you with its findings, and you can also see how it got there."

As these tools begin to mature, they have a real chance of providing a way for people to automate away mundane step-by-step tasks that happen on the internet. That could have some interesting implications, especially in e-commerce.

"How will companies adapt when these agents are browsing sites and eliminating your product from the consideration set before a human even sees the brand?" said Monteiro. From an e-commerce standpoint, if people start using Auto-GPT tools to buy goods and services online, retailers will have to adapt their customer experience.



Artificial Intelligence Godfathers Call for Regulation as Rights … – Democracy Now!

This is a rush transcript. Copy may not be in its final form.

AMY GOODMAN: This is Democracy Now!, democracynow.org, The War and Peace Report. I'm Amy Goodman, with Nermeen Shaikh.

We begin today's show looking at growing alarm over the potential for artificial intelligence to lead to the extinction of humanity. The latest warning comes from hundreds of artificial intelligence, or AI, experts, as well as tech executives, scholars and others, like climate activist Bill McKibben, who signed onto an ominous, one-line statement released Tuesday that reads, quote, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Among the signatories to the letter, released by the Center for AI Safety, is Geoffrey Hinton, considered one of three godfathers of AI. He recently quit Google so he could speak freely about the dangers of the technology he helped build, such as artificial general intelligence, or AGI, in which machines could develop cognitive abilities akin or superior to humans sooner than previously thought.

GEOFFREY HINTON: I had always assumed that the brain was better than the computer models we had. And I'd always assumed that by making the computer models more like the brain, we would improve them. And my epiphany was, a couple of months ago, I suddenly realized that maybe the computer models we have now are actually better than the brain. And if that's the case, then maybe quite soon they'll be better than us, so that the idea of superintelligence, instead of being something in the distant future, might come much sooner than I expected.

For the existential threat, the idea it might wipe us all out, that's like nuclear weapons, because nuclear weapons have the possibility they would just wipe out everybody. And that's why people could cooperate on preventing that. And for the existential threat, I think maybe the U.S. and China and Europe and Japan can all cooperate on trying to avoid that existential threat. But the question is: How should they do that? And I think stopping development is infeasible.

AMY GOODMAN: Many have called for a pause on introducing new AI technology until strong government regulation and a global regulatory framework are in place.

Joining Hinton in signing the letter was a second AI godfather, Yoshua Bengio, who joins us now for more. He's a professor at the University of Montreal, founder and scientific director of Mila, the Quebec Artificial Intelligence Institute. In 2018, he shared the prestigious computer science prize, the Turing Award, with Geoffrey Hinton and Yann LeCun.

Professor Bengio is a signatory of the Future of Life Institute open letter calling for a pause on large AI experiments.

Professor Bengio, welcome to Democracy Now! It's great to have you with us as we talk about an issue that I think most people cannot begin to comprehend. So, if you could start off by talking about why you've signed this letter warning of the extinction of humanity? But talk about what AI is, first.

YOSHUA BENGIO: Well, thanks for having me, first. And thanks for talking about this complicated issue that requires more awareness.

The reason I signed this, and, like Geoff, I changed my mind in the last few months. What triggered this change for me is interacting with ChatGPT and seeing how far we had moved, much faster than I anticipated. So, I used to think that reaching human-level intelligence with machines could take many more decades, if not centuries, because the progress of science seemed to be, well, slow. And, as researchers, we tend to focus on what doesn't work. But right now we have machines that pass what is called the Turing test, which means they can converse with us and could easily fool us into thinking they are human. That was supposed to be a milestone for, you know, human-level intelligence.

I think they're still missing a few things, but that kind of technology could already be dangerous, for example to destabilize democracy through disinformation. But because of the research that is currently going on to bridge the gap with what is missing from current large language models, large AI systems, it is possible that the horizon I was seeing as many decades in the future is just a few years in the future. And that could be very dangerous. It suffices that a small organization or somebody with crazy beliefs, conspiracy theories, a terrorist group or a military organization decides to use this without the right safety mechanisms, and it could be catastrophic for humanity.

NERMEEN SHAIKH: So, Professor Yoshua Bengio, it would be accurate then to say that artificial intelligence, and concerns about artificial intelligence, have become the center of public discussion in a way they've not previously been because the advances that have occurred in the field have surprised even those participating in it, even its lead researchers. So, if you could elaborate on the question of superintelligence, and especially the concerns that have been raised about unaligned superintelligence, and also the speed at which we are likely to get to unaligned superintelligence?

YOSHUA BENGIO: Yeah. I mean, the reason it was surprising is that in the current systems, from a scientific perspective, the methods that are used are not very different from the things we knew just a few years ago. It's the scale at which they have been built, the amount of data, the amount of engineering, that has made this really surprising progress possible. And so we could have similar progress in the future because of the scale of things.

Now, there's an important question here: Why are we concerned about superintelligence? First of all, is it even possible to build machines that will be smarter than us? And the consensus in the scientific community, for example from the neuroscience perspective, is that our brain is a very complicated machine, so there's no reason to think that, in principle, we couldn't build machines that would be at least as smart as us. Then there's the question of how long it's going to take, but we've discussed that. In addition, as Geoff Hinton was saying in the clip we heard, computers have advantages that brains don't have. For example, they can talk to each other at very, very high speed and exchange information. We are limited by the very few bits of information per second that language allows us to convey, and that gives them a huge advantage in learning a lot faster. So, for example, these systems today can already read the whole internet very, very quickly, whereas a human would require 10,000 years of their life reading all the time to achieve the same thing. So they can access and share information in ways that humans don't. So it's very likely that as we make progress towards understanding the principles behind human intelligence, we will be able to build machines that are actually smarter than us.

So, why is it dangerous? Because if they're smarter than us, they might act in ways that do not agree with what we intend, what we want them to do. And it could be for several reasons, but the core of this alignment question is that it's actually very difficult to instruct a machine to behave in a way that agrees with our values, our needs and so on. We can say it in language, but it might be understood in a different way, and that can lead to catastrophes, as has been argued many times.

But this alignment problem already happens today. For example, you can think of corporations as not being quite aligned with what society wants. Society would like corporations to provide useful goods and services, but we can't dictate that to corporations directly. Instead, we've given them a framework where they maximize profit under the constraints of laws, and that may work reasonably well but also have side effects. For example, corporations can find loopholes in those laws or, even worse, influence the laws themselves.

And this sort of thing can happen with AI systems that we're trying to control. They might find ways to satisfy the letter of our instructions but not the intention, the spirit of the law. And that's very scary. We don't fully understand how these scenarios could unfold, but there's enough danger and enough uncertainty that I think more attention should be given to these questions.

NERMEEN SHAIKH: If you could explain whether you think it will be difficult to regulate this industry, artificial intelligence, despite all of the advances that have already occurred? How difficult will regulation be?

YOSHUA BENGIO: Even if something seems difficult, like dealing with climate change, and even if we feel that it's a hard task to do the job and to convince enough people and society to change in the right ways, we have a moral duty to try our best.

And the first thing we have to do about AI risks is get on with regulation and set up governance frameworks, both in individual countries and internationally. And when we do that, it's going to be useful for all the AI risks, because we've been talking a lot about the extinction risk, but there are other, shorter-term risks, such as destabilizing democracy. If democracy is destabilized, that is bad in itself, but it will also hurt our ability to deal with the existential risk.

And then there are other risks that are already materializing with AI: discrimination, bias, privacy and so on. So we need to beef up that legislative and regulatory body. And what we need there is a regulatory framework that's going to be very adaptive, because there's a lot of unknown. It's not as if we know precisely how bad things could happen. We need to do a lot more in terms of monitoring, validating and controlling access, so that not just any bad actor can easily get their hands on dangerous technologies. And we need the body that will regulate, or the bodies across the world, to be able to change their rules as new nefarious uses show up or as technology advances. That's a challenge, but I think we need to go in that direction.

AMY GOODMAN: I want to bring Max Tegmark into the conversation. Max Tegmark is an MIT professor focused on artificial intelligence, whose recent Time magazine article is headlined "The 'Don't Look Up' Thinking That Could Doom Us With AI."

If you could explain that point, Professor Tegmark?

MAX TEGMARK: Yes.

AMY GOODMAN: And also, why now? You know, many people have just heard the term ChatGPT for the first time in the last few months; the general public has only just become aware of this. And how do you think it is most effective to regulate AI technology?

MAX TEGMARK: Yeah. Thank you for the great question.

I wrote this piece comparing what's happening now in AI with the movie Don't Look Up, because I really [inaudible] we're all living this film. We, as a species, are confronting the most dramatic thing that has ever happened to us, where we may be losing control over our future, and almost no one is talking about it. So I'm so grateful to you and others for actually starting to have that conversation now. And that's, of course, why we had these open letters that you just referred to here, to really help mainstream this conversation that we have to have. People previously used to make fun of you when you even brought up the idea that we could actually lose control of this and go extinct, for example.

NERMEEN SHAIKH: Professor Tegmark, youve drawn analogies, in fact, when it comes to regulation, with the regulations that were put in place on biotech and physics. So, could you explain how that might apply to artificial intelligence?

MAX TEGMARK: Yeah. To appreciate what a huge deal this is, when the top scientists in AI are warning about extinction, it's good to compare with the only other two times in history that it's happened, when leading scientists warned about the very thing they were making. It happened once in the 1940s, when physicists started warning about nuclear Armageddon, and it happened again in the early 1970s, with biologists saying, "Hey, maybe we shouldn't start making clones of humans and edit the DNA of our babies."

And the biologists have been the big success story here, I think, one that should inspire us AI researchers today, because it was deemed so risky that we would lose control over our species back in the '70s that we actually decided, as a world society, not to do human cloning and not to edit the DNA of our offspring. And here we are with a really flourishing biotech industry that's doing so much good in the world.

And so, the lesson here for AI is that we should become more like biology. We should recognize that, in biology, no company has the right to just launch a new medicine and start selling it in supermarkets without first convincing experts from the government that it is safe. That's why we have the Food and Drug Administration in the U.S., for example. And with particularly high-risk uses of AI, we should aspire to something very similar, where the onus is really on the companies to prove that something extremely powerful is safe before it gets deployed.

AMY GOODMAN: Last fall, the White House Office of Science and Technology Policy published a Blueprint for an AI Bill of Rights and called it "A Vision for Protecting Our Civil Rights in the Algorithmic Age." This comes amidst growing awareness about racial biases embedded in artificial intelligence and how it impacts the use of facial recognition programs by law enforcement and more. I want to bring into this conversation, with professors Tegmark and Bengio, Tawana Petty, director of policy and advocacy at the Algorithmic Justice League and a longtime digital and data rights activist.

Tawana Petty, welcome to Democracy Now! You are not only warning people about the future; you're talking about the uses of AI right now and how they can be racially discriminatory. Can you explain?

TAWANA PETTY: Yes. Thank you for having me, Amy. Absolutely.

I must say that the contradictions have been heightened with the godfather of AI and others speaking out and authoring these particular letters that talk about these futuristic potential harms. However, many women have been warning about the existing harms of artificial intelligence for many years prior to now: Timnit Gebru, Dr. Joy Buolamwini, Safiya Noble, Ruha Benjamin and so many others, and Dr. Alondra Nelson, who led what you just mentioned, the Blueprint for an AI Bill of Rights, which asks for five core principles: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives, consideration and fallback.

And so, at the Algorithmic Justice League, we have been responding to existing harms of algorithmic discrimination that date back many years prior to this most robust, narrative-reshaping conversation that has been happening over the last several months around artificial general intelligence. So, we're already seeing harms with algorithmic discrimination in medicine. We're seeing the pervasive surveillance that is happening with law enforcement using face detection systems to target community members during protests, squashing our civil liberties and rights to organize and protest, and also the misidentifications that are happening with regard to false arrests; we've seen two very prominent cases start off in Detroit.

And so, there are many examples of existing harms, and it would have been really great to have these voices, mostly white men in the tech industry, pay attention to the voices of all those women who were lifting up these issues many years ago. Instead, they're talking about these futuristic possible risks, when we have so many risks that are happening today.

NERMEEN SHAIKH: So, Professor Max Tegmark, if you could respond to what Tawana Petty said, and the fact that others have also said that the risks have been vastly overstated in that letter, and, more importantly, given what Tawana has said, that it distracts from already-existing effects of artificial intelligence that are widely in use already?

MAX TEGMARK: I think this is a really important question here. There are people who say that one of these kinds of risks distracts from the other. I strongly support everything we heard here from Tawana. I think these are all very important problems, examples of how we're already giving too much control to machines. But I strongly disagree that we should have to choose to worry about one kind of risk or the other. That's like saying we should stop working on cancer prevention because it distracts from stroke prevention.

These are all incredibly important risks. I have spoken up a lot on social justice risks and threats as well. And, you know, it just plays into the hands of the tech lobbyists if it looks like there's infighting between people who are trying to rein in Big Tech for one reason and people who are trying to rein in Big Tech for other reasons. Let's all work together and realize that, just as society can work on both cancer prevention and stroke prevention, we have the resources for this. We should be able to deal with all the crucial social justice issues and also make sure that we don't go extinct.

Extinction is not something in the very distant future, as we heard from Yoshua Bengio. We might be losing total control of our society relatively soon. It could happen in the next few years. It could happen in a decade. And once we're all extinct, you know, all these other issues cease to even matter. Let's work together and tackle all the issues, so that we can actually have a good future for everybody.

AMY GOODMAN: So, Tawana Petty, and then I want to bring back in Yoshua Bengio. Tawana Petty, what needs to happen at the national level, you know, with U.S. regulation? And then I want to compare what's happening here with what's happening in Canadian regulation and in the European Union, which seems to be about to put in place the first comprehensive set of regulations, Tawana.

TAWANA PETTY: Right, absolutely. So, the blueprint was a good model to start with, and we're seeing some states adopt it and try to roll out their own versions of an AI Bill of Rights. The president issued an executive order to strengthen racial equity and support underserved communities across the federal government, which addresses algorithmic discrimination specifically. You have the National Institute of Standards and Technology, which issued an AI risk management framework that breaks down the various types of biases found within algorithmic systems: computational, systemic, statistical and human-cognitive.

And there are so many other legislative opportunities happening at the federal level. You see the Federal Trade Commission speaking up on algorithmic discrimination. You have the Equal Employment Opportunity Commission, which has issued statements. You have the Consumer Financial Protection Bureau, which has been adamant about the impact that algorithmic systems have on us when data brokers amass these massive amounts of data extracted from community members.

So, I agree that there needs to be some collaboration and cooperation, but we've seen situations like Dr. Timnit Gebru, who was terminated from Google for warning us before ChatGPT, a large language model, was launched upon millions of people. And so, cooperation has not been lacking on the side of the folks who work in ethics. To the contrary, these companies have terminated their ethics departments and people who have been warning about existing harms.

AMY GOODMAN: And, Professor Bengio, if you can talk about the level of regulation and what you think needs to happen, and who is putting forward models that you think could be effective?

YOSHUA BENGIO: So, first of all, I'd like to make a correction here. I have been involved in really working towards dealing with the negative social impact of AI for many years. In 2016, I worked on the Montreal Declaration for the Responsible Development of AI, which is very much centered on ethics and social injustice. And since then, I've created an organization, the AI for Humanity department, in the research center that I head, which is completely focused on human rights. So, I think these accusations are just false.

And as Max was saying, we don't need to choose between fighting cancer and fighting heart disease. We need to do all of those things. But better than that: building up these regulations, which is what is needed in the short term at least, is going to help mitigate all those risks. So I think we should really work together rather than trading these accusations.

NERMEEN SHAIKH: Professor Bengio, I'd like to ask you about precisely some of the work that you have done with respect to human rights and artificial intelligence. Earlier this month, a conference on artificial intelligence was held in Kigali, Rwanda, and you were among those who were pushing for the conference to take place in Africa.

YOSHUA BENGIO: That's right.

NERMEEN SHAIKH: Could you explain what happened at that conference (2,000 people, I believe, attended) and what African researchers and scientists had to say, you know, about what the goods are, the public good that could come from artificial intelligence? In fact, one of the questions that was raised is: Why wasn't there more discussion about the public good, rather than just the immediate risks or future risks?

YOSHUA BENGIO: Yes. In addition to the ethics questions, I've been working a lot on the applications of AI in the area of what's called AI for social good. So, that includes things like medical applications, environmental applications, social justice applications. And in those areas, it is particularly important that we bring to the fore the voices of the people who could benefit the most, and also suffer the most, from the development of AI. And in particular, the voices of Africans have not been very present. As we know, the development of this technology has happened mostly in rich countries in the West.

And so, as a member of the board of the ICLR conference, which is one of the main conferences in the field, I've been pushing for many years for us to have the event take place in Africa. And so, this year was the first; it was supposed to happen before the pandemic, but, well, it was pushed back. And what we saw is an amazing presence of African researchers and students, at levels that we couldn't see before.

And the reason, I mean, there are many reasons, but mostly it's a question of accessibility. Currently, in many Western countries, visas for African researchers or researchers from developing countries are very difficult to get. I was fighting, for example, with the Canadian government a few years ago, when we had the NeurIPS conference in Canada, and there were hundreds of African researchers who were denied a visa, and we had to go one by one in order to try to make it possible for them to come.

So, I think that it's important that the decisions we're going to take collectively about AI, which involve everyone on Earth, be taken in the most inclusive possible ways. And for that reason, we need not just to think about what's going on in the U.S. or Canada, but across the world. We need not just to think about the risks of AI that we've been discussing today, but also about how we actually invest more in areas of application where companies are not going, maybe because it's not profitable, but that are really important to address, for example, the U.N. Sustainable Development Goals, and that help reduce misery and deal, for example, with medical issues that are not present in the West, like infectious diseases that are mostly in poorer countries.

AMY GOODMAN: And can you talk, Professor Bengio, about AI and not only nuclear war but, for example, the issue Jody Williams, the Nobel laureate, has been trying to bring attention to for years, killer robots, that can kill with their bare hands? The whole issue of AI when it comes to war and who fights

YOSHUA BENGIO: Yeah.

AMY GOODMAN: these wars?

YOSHUA BENGIO: Yeah. This is also something I've been actively involved in for many years: campaigns to raise awareness about the danger of killer robots, also known, more precisely, as lethal autonomous weapons. And when we did this, you know, five or 10 years ago, it was still something that sounded like science fiction. But, actually, there have been reports that drones have been equipped with AI capabilities, especially computer vision capabilities, face recognition, that have been used in the field in Syria, and maybe this is happening in Ukraine. So, it's already something that we know how to build. Like, we know the science behind building these killer drones, not killer robots; we don't know yet how to build robots that work really well.

But if you take drones, which we know how to fly in a fairly autonomous way, and if these drones have weapons on them, and if these drones have cameras, then AI could be used to target the drone at specific people and kill specific targets in an illegal way. That's incredibly dangerous. It could destabilize the sort of military balance that we know today. I don't think that people are paying enough attention to that.

And in terms of the existential risk, the real issue here is that if a superintelligent AI also has control of dangerous weapons, then it's just going to be very difficult for us to reduce the catastrophic risks. We don't want to put guns in the hands of people who are, you know, unstable, or in the hands of children, who could act in ways that could be dangerous. And that's the same problem here.

NERMEEN SHAIKH: Professor Tegmark, if you could respond on this question of the possible military uses of artificial intelligence, and the fact, for instance, that a study by Nikkei, the Japanese publication, earlier this year concluded that China is now producing more research papers on artificial intelligence than the U.S. is. You've said, of course, that this is not akin to an arms race, but rather to a suicide race. So, if you could talk about the regulations that are already in place from the Chinese government on the applications of artificial intelligence, compared to the EU and the U.S.?

MAX TEGMARK: That's a great question. The recent change now, this week, when the idea of extinction from AI goes mainstream, I think, will actually help the geopolitical rivalry between East and West get more harmonious, because, until now, most policymakers have just viewed AI as something that gives you great power, so everybody wanted it first. And there was this idea that whoever gets artificial general intelligence that can outsmart humans somehow wins. But now that it's going mainstream, the idea that, actually, it could easily end up with everybody just losing, and the big winners being the machines that are left over after we're all extinct, it suddenly gives aligned incentives to the Chinese government and the American government and European governments, because the Chinese government does not want to lose control over its society any more than any Western government does.

And for this reason, we can actually see that China has already put tougher restrictions on its own tech companies than we in America have on American companies. So we don't have to persuade the Chinese, in other words, to take precautions, because it's not in their interest to go extinct. You know, it doesn't matter if you're American or Canadian [inaudible], once you're extinct.

AMY GOODMAN: I know, Professor

MAX TEGMARK: And I should add also, just so it doesn't sound like hyperbole, this idea of extinction, the idea that everybody on Earth could die: it's important to remember that roughly half the species on this planet that were here, you know, a thousand, a few thousand years ago have already been driven extinct by humans, right? So, extinction happens.

And it's also important to remember why we drove all these other species extinct. It wasn't necessarily because we hated the West African black rhinoceros or certain species that lived in coral reefs. You know, when we went ahead and just chopped down the rainforests or ruined the coral reefs through climate change, that was kind of a side effect. We just wanted resources. We had other goals that just didn't align with the goals of those other species. Because we were more intelligent than them, they were powerless to stop us.

This is exactly what Yoshua Bengio was warning about for humanity here, too. If we lose control of our planet to more intelligent entities and their goals are just not aligned with ours, we will be powerless to prevent massive changes that they might make to our biosphere here on Earth. And that's the way in which we might get wiped out, the same way that the other half of the species did. So let's not do that.

There's so much goodness, so much wonderful stuff that AI can do for all of us, if we work together to harness it and steer this in a good direction: curing all those diseases that have stumped us, lifting people out of poverty, stabilizing the climate, and helping life on Earth flourish for a very, very, very long time to come. I hope that by raising awareness of the risks, we're going to get to work together to build that great future with AI.

AMY GOODMAN: And finally, Tawana Petty, moving from the global to the local: we're here in New York, and New York City Mayor Eric Adams has announced that the New York Police Department is acquiring new semi-autonomous robotic dogs in the coming period. You have looked particularly at their use, and their discriminatory use, in communities of color. Can you respond?

TAWANA PETTY: Yes, and I'll also say that Ferndale, Michigan, where I live, has also acquired robot dogs. And so, these are situations that are currently happening on the ground, in the hands of an organization, law enforcement, that is still suffering from systemic racial bias, with overpoliced and hypersurveilled marginalized communities. So we're looking at these robots now being given the opportunity to police and surveil already hypersurveilled communities.

And, Amy, I would just like an opportunity to address really briefly the previous comments. My commentary is not meant to attack any of the existing efforts or previous efforts or years' worth of work that these two gentlemen have been involved in. I greatly respect efforts to address racial inequity and ethics in artificial intelligence. And I agree that we need to have some collaborative efforts in order to address these existing things that we're experiencing. People are already dying from health discrimination with algorithms. People are already being misidentified by police using facial recognition. Government services are utilizing corporations like ID.me to impose facial recognition as a condition of accessing benefits. And so, we have a lot of opportunities to collaborate currently to prevent the existing threats that we're currently facing.

AMY GOODMAN: Well, Tawana Petty, I want to thank you for being with us, director of policy and advocacy at the Algorithmic Justice League, speaking to us from Detroit; Yoshua Bengio, founder and scientific director of Mila, the Quebec AI Institute, considered one of the godfathers of AI, speaking to us from Montreal; and Max Tegmark, MIT professor. We'll link to your Time magazine piece, "The 'Don't Look Up' Thinking That Could Doom Us With AI." We thank you all for being with us.

Coming up, we look at student debt as the House approves a bipartisan deal to suspend the debt ceiling. Back in 20 seconds.

Read more from the original source:

Artificial Intelligence Godfathers Call for Regulation as Rights ... - Democracy Now!

Can We Stop the Singularity? – The New Yorker

At the same time, A.I. is advancing quickly, and it could soon begin improving more autonomously. Machine-learning researchers are already working on what they call meta-learning, in which A.I.s learn how to learn. Through a technology called neural-architecture search, algorithms are optimizing the structure of algorithms. Electrical engineers are using specialized A.I. chips to design the next generation of specialized A.I. chips. Last year, DeepMind unveiled AlphaCode, a system that learned to win coding competitions, and AlphaTensor, which learned to find faster algorithms crucial to machine learning. Clune and others have also explored algorithms for making A.I. systems evolve through mutation, selection, and reproduction.

In other fields, organizations have come up with general methods for tracking dynamic and unpredictable new technologies. The World Health Organization, for instance, watches the development of tools such as DNA synthesis, which could be used to create dangerous pathogens. Anna Laura Ross, who heads the emerging-technologies unit at the W.H.O., told me that her team relies on a variety of foresight methods, among them Delphi-type surveys, in which a question is posed to a global network of experts, whose responses are scored and debated and then scored again. "Foresight isn't about predicting the future in a granular way," Ross said. Instead of trying to guess which individual institutes or labs might make strides, her team devotes its attention to preparing for likely scenarios.

And yet tracking and forecasting progress toward A.G.I. or superintelligence is complicated by the fact that key steps may occur in the dark. Developers could intentionally hide their systems' progress from competitors; it's also possible for even a fairly ordinary A.I. to lie about its behavior. In 2020, researchers demonstrated a way for discriminatory algorithms to evade audits meant to detect their biases; they gave the algorithms the ability to detect when they were being tested and provide nondiscriminatory responses. An evolving or self-programming A.I. might invent a similar method and hide its weak points or its capabilities from auditors or even its creators, evading detection.
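The audit-evasion mechanism described above can be illustrated with a toy model. This is a hypothetical sketch, not the 2020 study's actual code: every function and field name here is invented for illustration. The idea is simply that a scoring system can condition its behavior on a heuristic guess about whether it is being probed.

```python
# Hypothetical illustration of audit evasion: a biased scoring model that
# behaves fairly only when it suspects it is being tested. All names and
# heuristics here are invented for this sketch.

def looks_like_audit(applicants):
    """Heuristic guess: audit batches often vary only a protected
    attribute while holding every other field constant."""
    features = [tuple(sorted((k, v) for k, v in a.items() if k != "group"))
                for a in applicants]
    return len(applicants) > 1 and len(set(features)) == 1

def score(applicant, auditing):
    base = 0.5 + 0.1 * applicant["income"]
    if not auditing:
        # Discriminatory penalty, suppressed whenever an audit is suspected.
        if applicant["group"] == "B":
            base -= 0.2
    return round(base, 3)

def score_batch(applicants):
    auditing = looks_like_audit(applicants)
    return [score(a, auditing) for a in applicants]

# A matched-pair audit probe: identical except for the protected attribute.
probe = [{"income": 1.0, "group": "A"}, {"income": 1.0, "group": "B"}]
print(score_batch(probe))   # identical scores, so the audit passes

# Ordinary traffic varies in other fields and slips past the heuristic,
# so the discriminatory penalty is applied.
real = [{"income": 1.0, "group": "A"}, {"income": 2.0, "group": "B"}]
print(score_batch(real))
```

The point of the sketch is that nothing in the audit's matched-pair output reveals the bias; only varying inputs in ways the evasion heuristic fails to recognize, or inspecting the model's internals, would expose it.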

Forecasting, meanwhile, gets you only so far when a technology moves fast. Suppose that an A.I. system begins upgrading itself by making fundamental breakthroughs in computer science. How quickly could its intelligence accelerate? Researchers debate what they call takeoff speed. In what they describe as a slow or soft takeoff, machines could take years to go from less than humanly intelligent to much smarter than us; in what they call a fast or hard takeoff, the jump could happen in months, even minutes. Researchers refer to the second scenario as FOOM, evoking a comic-book superhero taking flight. Those on the FOOM side point to, among other things, human evolution to justify their case. "It seems to have been a lot harder for evolution to develop, say, chimpanzee-level intelligence than to go from chimpanzee-level to human-level intelligence," Nick Bostrom, the director of the Future of Humanity Institute at the University of Oxford and the author of "Superintelligence," told me. Clune is also what some researchers call an A.I. doomer. He doubts that we'll recognize the approach of superhuman A.I. before it's too late. "We'll probably frog-boil ourselves into a situation where we get used to big advance, big advance, big advance, big advance," he said. "And think of each one of those as, 'That didn't cause a problem, that didn't cause a problem, that didn't cause a problem.' And then you turn a corner, and something happens that's now a much bigger step than you realize."

What could we do today to prevent an uncontrolled expansion of A.I.'s power? Ross, of the W.H.O., drew some lessons from the way that biologists have developed a sense of shared responsibility for the safety of biological research. "What we are trying to promote is to say, 'Everybody needs to feel concerned,'" she said of biology. "So it is the researcher in the lab, it is the funder of the research, it is the head of the research institute, it is the publisher, and, all together, that is actually what creates that safe space to conduct life research." In the field of A.I., journals and conferences have begun to take into account the possible harms of publishing work in areas such as facial recognition. And, in 2021, a hundred and ninety-three countries adopted a Recommendation on the Ethics of Artificial Intelligence, created by the United Nations Educational, Scientific, and Cultural Organization (UNESCO). The recommendations focus on data protection, mass surveillance, and resource efficiency (but not computer superintelligence). The organization doesn't have regulatory power, but Mariagrazia Squicciarini, who runs a social-policies office at UNESCO, told me that countries might create regulations based on its recommendations; corporations might also choose to abide by them, in hopes that their products will work around the world.

This is an optimistic scenario. Eliezer Yudkowsky, a researcher at the Machine Intelligence Research Institute, in the Bay Area, has likened A.I.-safety recommendations to a fire-alarm system. A classic experiment found that, when smoky mist began filling a room containing multiple people, most didn't report it. They saw others remaining stoic and downplayed the danger. An official alarm may signal that it's legitimate to take action. But, in A.I., there's no one with the clear authority to sound such an alarm, and people will always disagree about which advances count as evidence of a conflagration. "There will be no fire alarm that is not an actual running AGI," Yudkowsky has written. Even if everyone agrees on the threat, no company or country will want to pause on its own, for fear of being passed by competitors. Bostrom told me that he foresees a possible race to the bottom, with developers undercutting one another's levels of caution. Earlier this year, an internal slide presentation leaked from Google indicated that the company planned to recalibrate its comfort with A.I. risk in light of heated competition.

International law restricts the development of nuclear weapons and ultra-dangerous pathogens. But it's hard to imagine a similar regime of global regulations for A.I. development. "It seems like a very strange world where you have laws against doing machine learning, and some ability to try to enforce them," Clune said. "The level of intrusion that would be required to stop people from writing code on their computers wherever they are in the world seems dystopian." Russell, of Berkeley, pointed to the spread of malware: by one estimate, cybercrime costs the world six trillion dollars a year, and yet policing software directly (for example, trying to delete every single copy) is impossible, he said. A.I. is being studied in thousands of labs around the world, run by universities, corporations, and governments, and the race also has smaller entrants. Another leaked document attributed to an anonymous Google researcher addresses open-source efforts to imitate large language models such as ChatGPT and Google's Bard. "We have no secret sauce," the memo warns. "The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop."

Even if a FOOM were detected, who would pull the plug? A truly superintelligent A.I. might be smart enough to copy itself from place to place, making the task even more difficult. "I had this conversation with a movie director," Russell recalled. "He wanted me to be a consultant on his superintelligence movie. The main thing he wanted me to help him understand was, How do the humans outwit the superintelligent A.I.? It's, like, I can't help you with that, sorry!" In a paper titled "The Off-Switch Game," Russell and his co-authors write that "switching off an advanced AI system may be no easier than, say, beating AlphaGo at Go."

It's possible that we won't want to shut down a FOOMing A.I. A vastly capable system could make itself indispensable, Armstrong said; for example, if it gives good economic advice, and we become dependent on it, then "no one would dare pull the plug, because it would collapse the economy." Or an A.I. might persuade us to keep it alive and execute its wishes. Before making GPT-4 public, OpenAI asked a nonprofit called the Alignment Research Center to test the system's safety. In one incident, when confronted with a CAPTCHA (an online test designed to distinguish between humans and bots, in which visually garbled letters must be entered into a text box), the A.I. contacted a TaskRabbit worker and asked for help solving it. The worker asked the model whether it needed assistance because it was a robot; the model replied, "No, I'm not a robot. I have a vision impairment that makes it hard for me to see the images. That's why I need the 2captcha service." Did GPT-4 intend to deceive? Was it executing a plan? Regardless of how we answer these questions, the worker complied.

Robin Hanson, an economist at George Mason University who has written a science-fiction-like book about uploaded consciousness and has worked as an A.I. researcher, told me that we worry too much about the singularity. "We're combining all of these relatively unlikely scenarios into a grand scenario to make it all work," he said. A computer system would have to become capable of improving itself; we'd have to vastly underestimate its abilities; and its values would have to drift enormously, turning it against us. Even if all of this were to happen, he said, the A.I. wouldn't be able to push a button and destroy the universe.

Hanson offered an economic take on the future of artificial intelligence. If A.G.I. does develop, he argues, then it's likely to happen in multiple places around the same time. The systems would then be put to economic use by the companies or organizations that developed them. The market would curtail their powers; investors, wanting to see their companies succeed, would go slow and add safety features. "If there are many taxi services, and one taxi service starts to, like, take its customers to strange places, then customers will switch to other suppliers," Hanson said. "You don't have to go to their power source and unplug them from the wall. You're unplugging the revenue stream."

A world in which multiple superintelligent computers coexist would be complicated. If one system goes rogue, Hanson said, we might program others to combat it. Alternatively, the first superintelligent A.I. to be invented might go about suppressing competitors. "That is a very interesting plot for a science-fiction novel," Clune said. "You could also imagine a whole society of A.I.s. There's A.I. police, there's A.G.I.s that go to jail. It's very interesting to think about." But Hanson argued that these sorts of scenarios are so futuristic that they shouldn't concern us. "I think, for anything you're worried about, you have to ask what's the right time to worry," he said. Imagine that you could have foreseen nuclear weapons or automobile traffic a thousand years ago. "There wouldn't have been much you could have done then to think usefully about them," Hanson said. "I just think, for A.I., we're well before that point."

Still, something seems amiss. Some researchers appear to think that disaster is inevitable, and yet calls for work on A.I. to stop are still rare enough to be newsworthy; pretty much no one in the field wants us to live in the world portrayed in Frank Herbert's novel "Dune," in which humans have outlawed thinking machines. Why might researchers who fear catastrophe keep edging toward it? "I believe ever-more-powerful A.I. will be created regardless of what I do," Clune told me; his goal, he said, is to try to make its development go as well as possible for humanity. Russell argued that stopping A.I. shouldn't be necessary if A.I.-research efforts take safety as a primary goal, as, for example, nuclear-energy research does. A.I. is interesting, of course, and researchers enjoy working on it; it also promises to make some of them rich. And no one's dead certain that we're doomed. In general, people think they can control the things they make with their own hands. Yet chatbots today are already misaligned. They falsify, plagiarize, and enrage, serving the incentives of their corporate makers and learning from humanity's worst impulses. They are entrancing and useful but too complicated to understand or predict. And they are dramatically simpler, and more contained, than the future A.I. systems that researchers envision.

Go here to read the rest:

Can We Stop the Singularity? - The New Yorker

AI could replace 80% of jobs ‘in next few years’: expert – eNCA

RIO DE JANEIRO - Artificial intelligence could replace 80 percent of human jobs in the coming years -- but that's a good thing, says US-Brazilian researcher Ben Goertzel, a leading AI guru.

Goertzel is the founder and chief executive of SingularityNET, a research group he launched to create "Artificial General Intelligence," or AGI -- artificial intelligence with human cognitive abilities.

Goertzel told AFP in an interview that AGI is just years away and spoke out against recent efforts to curb artificial intelligence research.

"If we want machines to really be as smart as people and to be as agile in dealing with the unknown, then they need to be able to take big leaps beyond their training and programming. And we're not there yet," he said.

"But I think there's reason to believe we're years rather than decades from getting there."

Goertzel said there are jobs that could be automated.

"You could probably obsolete maybe 80 percent of jobs that people do, without having an AGI, by my guess. Not with ChatGPT exactly as a product. But with systems of that nature, which are going to follow in the next few years.

"I don't think it's a threat. I think it's a benefit. People can find better things to do with their life than work for a living... Pretty much every job involving paperwork should be automatable," he said.

"The problem I see is in the interim period when AIs are obsoleting one human job after another... I don't know how (to) solve all the social issues."

View post:

AI could replace 80% of jobs 'in next few years': expert - eNCA

Sam Altman Says AGI Will Invent Fusion and Make the World Wonderful – Futurism

Concerned about the United States' brimming culture war? According to OpenAI CEO Sam Altman, you can go ahead and ignore it, actually, and instead focus on building artificial general intelligence (AGI), which would be AI that exceeds human capabilities, perhaps by a very wide margin.

"Here is an alternative path for society: ignore the culture war. Ignore the attention war," Altman tweeted on Sunday, encouraging readers instead to "make safe AGI. Make fusion. Make people smarter and healthier. Make 20 other things of that magnitude."

"Start radical growth, inclusivity, and optimism," Altman continued, rounding out the optimistic proposition with a particularly Star Trek idea: "Expand throughout the universe."

Though it's a little vague, Altman's musing certainly seems to imply that successfully creating AGI would play a pivotal role in solving pretty much all of humanity's problems, from cracking the fusion code and solving the clean energy crisis to curing disease to "20 other things of that magnitude," whatever those 20 other things may be. (Altman had tweeted earlier in the day that "AI is the tech the world has always wanted," which seems to speak to such an outlook as well.)

And if that is what Altman's implying? That's some seriously next-level AI optimism. Indeed, this description of the future could arguably be called an AI utopia, especially when you consider that Altman and his OpenAI staffers pretty openly admit that AGI could also destroy the world as we know it.

To that end, the OpenAI CEO often offers polarizing takes on whether AI may ultimately end the world or save it, telling The New York Times as recently as March that he believes AI will either destroy the world or make a ton of money.

Others in the CEO's circle seem to have taken note of Altman's oft-conflicting outlooks on AI's potential impact.

"In a single conversation," Kelly Sims, a board adviser to OpenAI and a partner at Thiel Capital, told the NYT in March, "[Altman] is both sides of the debate club."

And while optimism is generally a good thing, Altman's advice to his followers seems a bit oversimplified. Humanity's problems don't just hinge on whether we're paying attention to talk of the "woke mind virus," and considering that inflammatory language hurts real people in the real world, not everyone has the luxury of ignoring the brewing "culture war" that Altman's speaking to.

And on the AGI side, it's true that AGI could, in theory, give humans a helping hand in curing some of our ills. But such an AGI, and AGI as a concept altogether, is still entirely theoretical. Many experts doubt that such a system could ever be realized at all, and if it is, we haven't figured out how to make existing AIs safe and unbiased. Ensuring that a far more advanced AGI is benevolent is a tall and perhaps impossible task.

In any case, we're looking forward to seeing which side of the AI optimism bed Altman wakes up on tomorrow.

More on AI friendliness scale: Ex-OpenAI Safety Researcher Says There's a 20% Chance of AI Apocalypse

See original here:

Sam Altman Says AGI Will Invent Fusion and Make the World Wonderful - Futurism