Archive for the ‘Artificial General Intelligence’ Category

63% of surveyed Americans want government legislation to prevent super intelligent AI from ever being achieved – PC Gamer

Generative AI may well be in vogue right now, but when it comes to artificial intelligence systems that are far more capable than humans, public opinion looks remarkably settled. A survey of American voters found that 63% of respondents believe government regulation should be put in place to actively prevent superintelligent AI from ever being achieved, not merely to restrict it.

The survey, carried out by YouGov for the Artificial Intelligence Policy Institute (via Vox), took place last September. While it sampled only a small number of US voters, just 1,118 in total, the demographics covered were broad enough to be fairly representative of the wider voting population.

One of the specific questions asked in the survey focused on "whether regulation should have the goal of delaying super intelligence." Specifically, that means artificial general intelligence (AGI), something the likes of OpenAI and Google are actively working to achieve. In OpenAI's case, its mission expressly states this, with the goal of "ensur[ing] that artificial general intelligence benefits all of humanity", and it's a view shared by those working in the field. Even if one of them is an OpenAI co-founder on his way out of the door...

Regardless of how honourable OpenAI's intentions are, or maybe were, it's a message that's currently lost on US voters. Of those surveyed, 63% agreed with the statement that regulation should aim to actively prevent AI superintelligence, 21% said they didn't know, and 16% disagreed altogether.

The survey's overall findings suggest that voters are significantly more worried about keeping "dangerous [AI] models out of the hands of bad actors" than about AI being of benefit to us all. Research into new, more powerful AI models should be regulated, according to 67% of the surveyed voters, and those models should be restricted in what they're capable of. Almost 70% of respondents felt that AI should be regulated like a "dangerous powerful technology."

That's not to say those people were against learning about AI. When asked about a proposal in Congress that expands access to AI education, research, and training, 55% agreed with the idea, whereas 24% opposed it. The rest chose the "Don't know" response.

I suspect that part of the negative view of AGI comes down to the average person undoubtedly thinking 'Skynet' when asked about artificial intelligence that's better than humans. Even with systems far more basic than that, concerns over deepfakes and job losses won't help people see any of the positives that AI can potentially bring.


The survey's results will no doubt be pleasing to the Artificial Intelligence Policy Institute, as it "believe[s] that proactive government regulation can significantly reduce the destabilizing effects from AI." I'm not suggesting that it's influenced the results in any way, as my own, very unscientific, survey of immediate friends and family produced a similar outcome, i.e. AGI is dangerous and should be heavily controlled.

Regardless of whether this is true or not, OpenAI, Google, and others clearly have a lot of work ahead of them in convincing voters that AGI really is beneficial to humanity. Because at the moment, it would seem that the majority view of AI becoming more powerful is an entirely negative one, despite arguments to the contrary.

See original here:

63% of surveyed Americans want government legislation to prevent super intelligent AI from ever being achieved - PC Gamer


"I lost trust": Why the OpenAI team in charge of safeguarding humanity imploded – Vox.com

Editor's note, May 17, 2024, 11:45 pm ET: This story has been updated to include a post-publication statement that another Vox reporter received from OpenAI.

For months, OpenAI has been losing employees who care deeply about making sure AI is safe. Now, the company is positively hemorrhaging them.

Ilya Sutskever and Jan Leike announced their departures from OpenAI, the maker of ChatGPT, on Tuesday. They were the leaders of the company's superalignment team, the team tasked with ensuring that AI stays aligned with the goals of its makers, rather than acting unpredictably and harming humanity.

They're not the only ones who've left. Since last November, when OpenAI's board tried to fire CEO Sam Altman only to see him quickly claw his way back to power, at least five more of the company's most safety-conscious employees have either quit or been pushed out.

What's going on here?

If you've been following the saga on social media, you might think OpenAI secretly made a huge technological breakthrough. The meme "What did Ilya see?" speculates that Sutskever, the former chief scientist, left because he saw something horrifying, like an AI system that could destroy humanity.

But the real answer may have less to do with pessimism about technology and more to do with pessimism about humans, and one human in particular: Altman. According to sources familiar with the company, safety-minded employees have lost faith in him.

"It's a process of trust collapsing bit by bit, like dominoes falling one by one," a person with inside knowledge of the company told me, speaking on condition of anonymity.

Not many employees are willing to speak about this publicly. That's partly because OpenAI is known for getting its workers to sign offboarding agreements with non-disparagement provisions upon leaving. If you refuse to sign one, you give up your equity in the company, which means you potentially lose out on millions of dollars.


(OpenAI did not respond to a request for comment in time for publication. After publication of my colleague Kelsey Piper's piece on OpenAI's post-employment agreements, OpenAI sent her a statement noting, "We have never canceled any current or former employee's vested equity nor will we if people do not sign a release or nondisparagement agreement when they exit." When Piper asked if this represented a change in policy, as sources close to the company had indicated to her, OpenAI replied: "This statement reflects reality.")

One former employee, however, refused to sign the offboarding agreement so that he would be free to criticize the company. Daniel Kokotajlo, who joined OpenAI in 2022 with hopes of steering it toward safe deployment of AI, worked on the governance team until he quit last month.

"OpenAI is training ever-more-powerful AI systems with the goal of eventually surpassing human intelligence across the board. This could be the best thing that has ever happened to humanity, but it could also be the worst if we don't proceed with care," Kokotajlo told me this week.

OpenAI says it wants to build artificial general intelligence (AGI), a hypothetical system that can perform at human or superhuman levels across many domains.

"I joined with substantial hope that OpenAI would rise to the occasion and behave more responsibly as they got closer to achieving AGI. It slowly became clear to many of us that this would not happen," Kokotajlo told me. "I gradually lost trust in OpenAI leadership and their ability to responsibly handle AGI, so I quit."

And Leike, explaining in a thread on X why he quit as co-leader of the superalignment team, painted a very similar picture Friday. "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point," he wrote.


To get a handle on what happened, we need to rewind to last November. That's when Sutskever, working together with the OpenAI board, tried to fire Altman. The board said Altman was "not consistently candid in his communications." Translation: We don't trust him.

The ouster failed spectacularly. Altman and his ally, company president Greg Brockman, threatened to take OpenAI's top talent to Microsoft, effectively destroying OpenAI, unless Altman was reinstated. Faced with that threat, the board gave in. Altman came back more powerful than ever, with new, more supportive board members and a freer hand to run the company.

When you shoot at the king and miss, things tend to get awkward.

Publicly, Sutskever and Altman gave the appearance of a continuing friendship. And when Sutskever announced his departure this week, he said he was heading off to pursue "a project that is very personally meaningful to me." Altman posted on X two minutes later, saying that "this is very sad to me; Ilya is a dear friend."

Yet Sutskever has not been seen at the OpenAI office in about six months, ever since the attempted coup. He has been remotely co-leading the superalignment team, tasked with making sure a future AGI would be aligned with the goals of humanity rather than going rogue. It's a nice enough ambition, but one that's divorced from the daily operations of the company, which has been racing to commercialize products under Altman's leadership. And then there was this tweet, posted shortly after Altman's reinstatement and quickly deleted:

So, despite the public-facing camaraderie, there's reason to be skeptical that Sutskever and Altman were friends after the former attempted to oust the latter.

And Altman's reaction to being fired had revealed something about his character: His threat to hollow out OpenAI unless the board rehired him, and his insistence on stacking the board with new members skewed in his favor, showed a determination to hold onto power and avoid future checks on it. Former colleagues and employees came forward to describe him as a manipulator who speaks out of both sides of his mouth, someone who claims, for instance, that he wants to prioritize safety but contradicts that in his behaviors.

For example, Altman was fundraising with autocratic regimes like Saudi Arabia so he could spin up a new AI chip-making company, which would give him a huge supply of the coveted resources needed to build cutting-edge AI. That was alarming to safety-minded employees. If Altman truly cared about building and deploying AI in the safest way possible, why did he seem to be in a mad dash to accumulate as many chips as possible, which would only accelerate the technology? For that matter, why was he taking the safety risk of working with regimes that might use AI to supercharge digital surveillance or human rights abuses?

For employees, all this led to a gradual loss of belief that "when OpenAI says it's going to do something or says that it values something, that that is actually true," a source with inside knowledge of the company told me.

That gradual process crescendoed this week.

The superalignment team's co-leader, Jan Leike, did not bother to play nice. "I resigned," he posted on X, mere hours after Sutskever announced his departure. No warm goodbyes. No vote of confidence in the company's leadership.

Other safety-minded former employees quote-tweeted Leike's blunt resignation, appending heart emojis. One of them was Leopold Aschenbrenner, a Sutskever ally and superalignment team member who was fired from OpenAI last month. Media reports noted that he and Pavel Izmailov, another researcher on the same team, were allegedly fired for leaking information. But OpenAI has offered no evidence of a leak. And given the strict confidentiality agreement everyone signs when they first join OpenAI, it would be easy for Altman, a deeply networked Silicon Valley veteran who is an expert at working the press, to portray sharing even the most innocuous of information as "leaking," if he was keen to get rid of Sutskever's allies.

The same month that Aschenbrenner and Izmailov were forced out, another safety researcher, Cullen O'Keefe, also departed the company.

And two weeks ago, yet another safety researcher, William Saunders, wrote a cryptic post on the EA Forum, an online gathering place for members of the effective altruism movement, who have been heavily involved in the cause of AI safety. Saunders summarized the work he's done at OpenAI as part of the superalignment team. Then he wrote: "I resigned from OpenAI on February 15, 2024." A commenter asked the obvious question: Why was Saunders posting this?

"No comment," Saunders replied. Commenters concluded that he is probably bound by a non-disparagement agreement.

Putting all of this together with my conversations with company insiders, what we get is a picture of at least seven people who tried to push OpenAI to greater safety from within, but ultimately lost so much faith in its charismatic leader that their position became untenable.

"I think a lot of people in the company who take safety and social impact seriously think of it as an open question: is working for a company like OpenAI a good thing to do?" said the person with inside knowledge of the company. "And the answer is only yes to the extent that OpenAI is really going to be thoughtful and responsible about what it's doing."

With Leike no longer there to run the superalignment team, OpenAI has replaced him with company co-founder John Schulman.

But the team has been hollowed out. And Schulman already has his hands full with his preexisting full-time job: ensuring the safety of OpenAI's current products. How much serious, forward-looking safety work can we hope for at OpenAI going forward?

Probably not much.

"The whole point of setting up the superalignment team was that there's actually different kinds of safety issues that arise if the company is successful in building AGI," the person with inside knowledge told me. "So, this was a dedicated investment in that future."

Even when the team was functioning at full capacity, that "dedicated investment" was home to a tiny fraction of OpenAI's researchers and was promised only 20 percent of its computing power, perhaps the most important resource at an AI company. Now, that computing power may be siphoned off to other OpenAI teams, and it's unclear if there'll be much focus on avoiding catastrophic risk from future AI models.

To be clear, this does not mean the products OpenAI is releasing now, like the new version of ChatGPT, dubbed GPT-4o, which can have a natural-sounding dialogue with users, are going to destroy humanity. But what's coming down the pike?

"It's important to distinguish between 'Are they currently building and deploying AI systems that are unsafe?' versus 'Are they on track to build and deploy AGI or superintelligence safely?'" the source with inside knowledge said. "I think the answer to the second question is no."

Leike expressed that same concern in his Friday thread on X. He noted that his team had been struggling to get enough computing power to do its work and generally "sailing against the wind."

Most strikingly, Leike said, "I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics. These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there."

When one of the world's leading minds in AI safety says the world's leading AI company isn't on the right trajectory, we all have reason to be concerned.


Here is the original post:

"I lost trust": Why the OpenAI team in charge of safeguarding humanity imploded - Vox.com


Top OpenAI researcher resigns, saying company prioritized ‘shiny products’ over AI safety – Fortune

Jan Leike, OpenAI's head of alignment whose team focused on AI safety, has resigned from the company, saying that "over the past years, safety culture and processes have taken a backseat to shiny products."

In a post on X, the former Twitter, Leike added that he had been "disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we reached a breaking point."

"OpenAI is shouldering an enormous responsibility on behalf of humanity," he continued. "We are getting long overdue in getting incredibly serious about the implications of AGI [artificial general intelligence]."

Leike's resignation comes just a couple of days after his co-lead on OpenAI's Superalignment team, chief scientist Ilya Sutskever, announced he was leaving the company. In his post announcing his departure, Sutskever wrote that he was "confident that OpenAI will build AGI that is both safe and beneficial."

Both Leike's and Sutskever's departures come after months of speculation about what happened in November 2023, when OpenAI's nonprofit board fired CEO Sam Altman and removed president Greg Brockman as chairman. Even after Altman was reinstated to his role as CEO and to a position on the board, it was clear that issues around the safety of the AI OpenAI is building were a point of contention among members of the board and others focused on AI safety within the company. After Altman was reinstated, Sutskever seemed to disappear, with many wondering whether he had been ousted.

Today, Bloomberg reported that OpenAI has dissolved Leike and Sutskever's Superalignment team, which will be folded into broader research efforts at the company.

At the end of his post thread on X, Leike spoke directly to OpenAI employees: "To all OpenAI employees, I want to say: Learn to feel the AGI. Act with the gravitas appropriate for what you're doing. I believe you can ship the cultural change that's needed. I am counting on you."

Read the original:

Top OpenAI researcher resigns, saying company prioritized 'shiny products' over AI safety - Fortune


The revolution in artificial intelligence and artificial general intelligence – Washington Times

OPINION:


We are on the edge of two revolutions, which will overwhelm virtually everything currently covered by the media.

Artificial intelligence is the development of massive computational capabilities for understanding and managing specific activities. For example, the air traffic control system already relies heavily on AI to manage the four-dimensional process of moving aircraft around the world. An aircraft carrier battle group has extensive AI in its defensive system. Israel's Iron Dome anti-missile system relies heavily on computational analysis and decision-making, in virtual real time, to decide which incoming projectiles and drones are likely to hit populated areas and which can be safely ignored to focus on the gravest threats.

In health care, AI is increasingly capable of evaluating diagnostic information and CT scans, MRIs and other tests. If it had been properly used, AI could have dramatically improved our understanding of and response to COVID-19. Unfortunately, the public health service in general and the Centers for Disease Control and Prevention in particular are obsolete bureaucratic systems incapable of adapting to modern technology. Americans pay with their health and their lives for the refusal of these bureaucracies to modernize.

These are examples of the ways in which AI is already affecting our lives. It is getting faster, more comprehensive, and more capable of learning from its mistakes and improving through repetitive use.

Artificial general intelligence, or AGI, is a dramatically more powerful theoretical system. Some people argue it may be unattainable. Essentially, AGI would be a system that could constantly learn and evolve without being limited to one particular area or topic. It would be a constantly evolving and self-improving system. At least in theory, it could outthink humans and even compete with them. There is a consensus that AGI is still years away, while AI is around us, constantly improving in speed and capability.

As it improves, AI is going to transform our way of doing things on a scale that resembles the combination of electricity, chemistry and internal combustion engines around 1880.

No one in 1880 could have forecast the scale and breadth of change coming, although a few futurist novelists such as Jules Verne and H.G. Wells wrote fascinating fictional forecasts of the coming scientific and technological revolution.

No one in 1880 could have foreseen that electric lights would eliminate night. Farmers used to work from light to dark, and then Thomas Edison made dark obsolete.

No one at the peak of vaudeville could have imagined its replacement by movies, radio, and then television (Steve Allen's "The Funny Men" is a remarkable outline of that process and its impact on comedians and their work).

My favorite example of the unimaginable scale of change is the 1894 London Times story about the Horse Manure Crisis. London and New York had so many horses that their daily production of horse manure threatened to use up all the vacant lots in the two cities.

It did not occur to anyone in 1894 that in a few short years, Henry Ford would begin to eliminate horse manure as an urban problem, replacing it with a new problem: cars, trucks and buses.

In the early 1950s, there were 58,000 cases of polio annually. In 1953, Dr. Jonas Salk tested a polio vaccine on himself and his family, and in 1955, the polio vaccine was tested on 1.6 million children in Canada, Finland and the United States. This is inconceivable with today's Food and Drug Administration rules, which prefer the certainty of disease over risks from cures.

We are at the same moment of dramatic change that Thomas Kuhn described in "The Structure of Scientific Revolutions" and called a paradigm shift.

The challenge will be to understand AI's potential (putting off applying AGI until it is developed) and then reimagine the way the world works with these powerful new tools.

The key is to leap into the future and have the kind of imagination Verne and Wells showed in imagining the future for their generation.

The first instinct will be to apply AI to marginally improve existing bureaucracies, processes and activities.

It will take a great leap of imagination to fully explore what AI could achieve if we redesigned our systems and habits around the capabilities it will make available to improve our lives, increase our productivity, and enhance our range of choices.

We are at the edge of an enormous opportunity.

For more commentary from Newt Gingrich, visit Gingrich360.com. Also, subscribe to the "Newt's World" podcast.

View original post here:

The revolution in artificial intelligence and artificial general intelligence - Washington Times


OpenAI disbands team devoted to artificial intelligence risks – Yahoo! Voices

OpenAI on Friday confirmed that it has disbanded a team devoted to mitigating the long-term dangers of super-smart artificial intelligence.

OpenAI weeks ago began dissolving the so-called "superalignment" group, integrating members into other projects and research, according to the San Francisco-based firm.

Company co-founder Ilya Sutskever and team co-leader Jan Leike announced their departures from the ChatGPT-maker this week.

The dismantling of an OpenAI team focused on keeping sophisticated artificial intelligence under control comes as such technology faces increased scrutiny from regulators and fears mount regarding its dangers.

"OpenAI must become a safety-first AGI (artificial general intelligence) company," Leike wrote Friday in a post on X, formerly Twitter.

Leike called on all OpenAI employees to "act with the gravitas" warranted by what they are building.

OpenAI chief executive Sam Altman responded to Leike's post with one of his own, thanking him for his work at the company and saying he was sad to see Leike leave.

"He's right we have a lot more to do," Altman said. "We are committed to doing it."

Altman promised more on the topic in the coming days.

Sutskever said on X that he was leaving after almost a decade at OpenAI, whose "trajectory has been nothing short of miraculous."

"I'm confident that OpenAI will build AGI that is both safe and beneficial," he added, referring to computer technology that seeks to perform as well as -- or better than -- human cognition.

Sutskever, OpenAI's chief scientist, sat on the board that voted to remove chief executive Altman in November last year.

The ousting threw the San Francisco-based startup into a tumult, with the OpenAI board hiring Altman back a few days later after staff and investors rebelled.

OpenAI early this week released a higher-performing and even more human-like version of the artificial intelligence technology that underpins ChatGPT, making it free to all users.

"It feels like AI from the movies," Altman said in a blog post.

Altman has previously pointed to the Scarlett Johansson character in the movie "Her," where she voices an AI-based virtual assistant dating a man, as an inspiration for where he would like AI interactions to go.

The day will come when "digital brains will become as good and even better than our own," Sutskever said during a talk at a TED AI summit in San Francisco late last year.

"AGI will have a dramatic impact on every area of life."


Go here to read the rest:

OpenAI disbands team devoted to artificial intelligence risks - Yahoo! Voices
