Archive for the ‘Artificial General Intelligence’ Category

OpenAI disbands safety team focused on risk of artificial intelligence causing ‘human extinction’ – New York Post

OpenAI eliminated a team focused on the risks posed by advanced artificial intelligence less than a year after it was formed, and a departing executive warned Friday that safety has "taken a backseat to shiny products" at the company.

The Microsoft-backed ChatGPT maker disbanded its so-called Superalignment team, which was tasked with creating safety measures for artificial general intelligence (AGI) systems that could lead to the disempowerment of humanity or even human extinction, according to a blog post last July.

The team's dissolution, which was first reported by Wired, came just days after OpenAI executives Ilya Sutskever and Jan Leike announced their resignations from the Sam Altman-led company.

"OpenAI is shouldering an enormous responsibility on behalf of all of humanity," Leike wrote in a series of X posts on Friday. "But over the past years, safety culture and processes have taken a backseat to shiny products. We are long overdue in getting incredibly serious about the implications of AGI."

Sutskever and Leike, who headed OpenAI's safety team, quit shortly after the company unveiled an updated version of ChatGPT capable of holding conversations and translating languages for users in real time.

The mind-bending reveal drew immediate comparisons to the 2013 sci-fi film "Her," which features a superintelligent AI voiced by actress Scarlett Johansson.

When reached for comment, OpenAI referred to Altman's tweet in response to Leike's thread.

"I'm super appreciative of @janleike's contributions to OpenAI's alignment research and safety culture, and very sad to see him leave," Altman said. "He's right we have a lot more to do; we are committed to doing it. I'll have a longer post in the next couple of days."

Some members of the safety team are being reassigned to other parts of the company, CNBC reported, citing a person familiar with the situation.

AGI broadly refers to AI systems with cognitive abilities equal or superior to those of humans.

In its announcement regarding the safety team's formation last July, OpenAI said it was dedicating 20% of its available computing power toward long-term safety measures and hoped to solve the problem within four years.

Sutskever gave no indication of the reasons for his departure in his own X post on Tuesday, though he said he was "confident that OpenAI will build [AGI] that is both safe and beneficial" under Altman and the firm's other leads.

Sutskever was notably one of four OpenAI board members who participated in a shocking move to oust Altman from the company last fall. The coup sparked a governance crisis that nearly toppled OpenAI.

OpenAI eventually welcomed Altman back as CEO and unveiled a revamped board of directors.

A subsequent internal review cited "a breakdown in trust between the prior Board and Mr. Altman" ahead of his firing.

Investigators also concluded that the leadership spat "was not related to the safety or security of OpenAI's advanced AI research or the pace of development, OpenAI's finances, or its statements to investors, customers, or business partners," according to a release in March.

Read the original here:

OpenAI disbands safety team focused on risk of artificial intelligence causing 'human extinction' - New York Post

OpenAI disbands team devoted to artificial intelligence risks – Moore County News Press

OpenAI on Friday confirmed that it has disbanded a team devoted to mitigating the long-term dangers of super-smart artificial intelligence.

OpenAI weeks ago began dissolving the so-called "superalignment" group, integrating members into other projects and research, according to the San Francisco-based firm.

Company co-founder Ilya Sutskever and team co-leader Jan Leike announced their departures from the ChatGPT-maker this week.

The dismantling of an OpenAI team focused on keeping sophisticated artificial intelligence under control comes as such technology faces increased scrutiny from regulators and fears mount regarding its dangers.

"OpenAI must become a safety-first AGI (artificial general intelligence) company," Leike wrote Friday in a post on X, formerly Twitter.

Leike called on all OpenAI employees to "act with the gravitas" warranted by what they are building.

OpenAI chief executive Sam Altman responded to Leike's post with one of his own, thanking him for his work at the company and saying he was sad to see Leike leave.

"He's right we have a lot more to do," Altman said. "We are committed to doing it."

Altman promised more on the topic in the coming days.

Sutskever said on X that he was leaving after almost a decade at OpenAI, whose "trajectory has been nothing short of miraculous."

"I'm confident that OpenAI will build AGI that is both safe and beneficial," he added, referring to computer technology that seeks to perform as well as -- or better than -- human cognition.

Sutskever, OpenAI's chief scientist, sat on the board that voted to remove fellow chief executive Altman in November last year.

The ousting threw the San Francisco-based startup into a tumult, with the OpenAI board hiring Altman back a few days later after staff and investors rebelled.

OpenAI early this week released a higher-performing and even more human-like version of the artificial intelligence technology that underpins ChatGPT, making it free to all users.

"It feels like AI from the movies," Altman said in a blog post.

Altman has previously pointed to the Scarlett Johansson character in the movie "Her," where she voices an AI-based virtual assistant dating a man, as an inspiration for where he would like AI interactions to go.

The day will come when "digital brains will become as good and even better than our own," Sutskever said during a talk at a TED AI summit in San Francisco late last year.

"AGI will have a dramatic impact on every area of life."

Read the rest here:

OpenAI disbands team devoted to artificial intelligence risks - Moore County News Press

Generative AI Is Totally Shameless. I Want to Be It – WIRED

AI has a lot of problems. It helps itself to the work of others, regurgitating what it absorbs in a game of multidimensional Mad Libs and omitting all attribution, resulting in widespread outrage and litigation. When it draws pictures, it makes the CEOs white, puts people in awkward ethnic outfits, and has a tendency to imagine women as elfish, with light-colored eyes. Its architects sometimes seem to be part of a death cult that semi-worships a Cthulhu-like future AI god, and they focus great energies on supplicating to this immense imaginary demon (thrilling! terrifying!) instead of integrating with the culture at hand (boring, and you get yelled at). Even the more thoughtful AI geniuses seem OK with the idea that an artificial general intelligence is right around the corner, despite 75 years of failed precedent, the purest form of getting high on your own supply.

So I should reject this whole crop of image-generating, chatting, large-language-model-based code-writing infinite typing monkeys. But, dammit, I can't. I love them too much. I am drawn back over and over, for hours, to learn and interact with them. I have them make me lists, draw me pictures, summarize things, read for me. Where I work, we've built them into our code. I'm in the bag. Not my first hypocrisy rodeo.

There's a truism that helps me whenever the new big tech thing has every brain melting: I repeat to myself, "It's just software." Word processing was going to make it too easy to write novels, Photoshop looked like it would let us erase history, Bitcoin was going to replace money, and now AI is going to ruin society, but it's just software. And not even that much software: Lots of AI models could fit on a thumb drive with enough room left over for the entire run of Game of Thrones (or Microsoft Office). They're interdimensional ZIP files, glitchy JPEGs, but for all of human knowledge. And yet they serve such large portions! (Not always. Sometimes I ask the AI to make a list and it gives up. "You can do it," I type. "You can make the list longer." And it does! What a terrible interface!)

What I love, more than anything, is the quality that makes AI such a disaster: If it sees a space, it will fill it, with nonsense, with imagined fact, with links to fake websites. It possesses an absolute willingness to spout foolishness, balanced only by its carefree attitude toward plagiarism. AI is, very simply, a totally shameless technology.

As with most people on Earth, shame is a part of my life, installed at a young age and frequently updated with shame service packs. I read a theory once that shame is born when a child expects a reaction from their parents (a laugh, applause) and doesn't get it. That's an oversimplification, but given all the jokes I've told that have landed flat, it sure rings true. Social media could be understood, in this vein, as a vast shame-creating machine. We all go out there with our funny one-liners and cool pictures, and when no one likes or faves them we feel lousy about it. A healthy person goes, "Ah well, didn't land. Felt weird. Time to move on."

But when you meet shameless people they can sometimes seem like miracles. They have a superpower: the ability to be loathed, to be wrong, and yet to keep going. We obsess over them: our divas, our pop stars, our former presidents, our political grifters, and of course our tech industry CEOs. We know them by their first names and nicknames, not because they are our friends but because the weight of their personalities and influence has allowed them to claim their own domain names in the collective cognitive register.

Are these shameless people evil, or wrong, or bad? Sure. Whatever you want. Mostly, though, they're just big, by their own, shameless design. They contain multitudes, and we debate those multitudes. Do they deserve their fame, their billions, their Electoral College victory? We want them to go away but they don't care. Not one bit. They plan to stay forever. They will be dead before they feel remorse.

AI is like having my very own shameless monster as a pet. ChatGPT, my favorite, is the most shameless of the lot. It will do whatever you tell it to, regardless of the skills involved. It'll tell you how to become a nuclear engineer, how to keep a husband, how to invade a country. I love to ask it questions that I'm ashamed to ask anyone else: What is private equity? How can I convince my family to let me get a dog? It helps me understand what's happening with my semaglutide injections. It helps me write code, and has in fact renewed my relationship with writing code. It creates meaningless, disposable images. It teaches me music theory and helps me write crappy little melodies. It does everything badly and confidently. And I want to be it. I want to be that confident, that unembarrassed, that ridiculously sure of myself.

View original post here:

Generative AI Is Totally Shameless. I Want to Be It - WIRED

OpenAI disbands team devoted to artificial intelligence risks – Port Lavaca Wave

Go here to see the original:

OpenAI disbands team devoted to artificial intelligence risks - Port Lavaca Wave

OpenAI researcher resigns, claiming safety has taken a backseat to shiny products – The Verge

Jan Leike, a key OpenAI researcher who resigned earlier this week following the departure of co-founder Ilya Sutskever, posted on X Friday morning that "safety culture and processes have taken a backseat to shiny products" at the company.

Leike's statements came after Wired reported that OpenAI had disbanded the team dedicated to addressing long-term AI risks (called the Superalignment team) altogether. Leike had been running the Superalignment team, which formed last July to solve the core technical challenges in implementing safety protocols as OpenAI developed AI that can reason like a human.

The original idea for OpenAI was to openly provide their models to the public, hence the organization's name, but they've become proprietary knowledge due to the company's claims that allowing such powerful models to be accessed by anyone could be potentially destructive.

"We are long overdue in getting incredibly serious about the implications of AGI. We must prioritize preparing for them as best we can," Leike said in follow-up posts about his resignation Friday morning. "Only then can we ensure AGI benefits all of humanity."

The Verge reported earlier this week that John Schulman, another OpenAI co-founder who supported Altman during last year's unsuccessful board coup, will assume Leike's responsibilities. Sutskever, who played a key role in the notorious failed coup against Sam Altman, announced his departure on Tuesday.

"Over the past years, safety culture and processes have taken a backseat to shiny products," Leike posted.

Leike's posts highlight an increasing tension within OpenAI. As researchers race to develop artificial general intelligence while managing consumer AI products like ChatGPT and DALL-E, employees like Leike are raising concerns about the potential dangers of creating super-intelligent AI models. Leike said his team was deprioritized and couldn't get compute and other resources to perform crucial work.

"I joined because I thought OpenAI would be the best place in the world to do this research," Leike wrote. "However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point."

Read the original post:

OpenAI researcher resigns, claiming safety has taken a backseat to shiny products - The Verge