Archive for the ‘Artificial General Intelligence’ Category

ChatGPT maker OpenAI now has a scale to rank its AI – ReadWrite

An OpenAI spokesperson has shared the company's new five-tier system for ranking its progress toward achieving Artificial General Intelligence (AGI), reports Bloomberg.

The levels, which the ChatGPT maker announced internally at an all-hands meeting before sharing them externally, are designed to guide thinking about artificial intelligence (AI) and its capabilities as the company works to develop models with real reasoning abilities.

The levels in the system were outlined like this:

Level 1: Chatbots – conversational AI, like today's ChatGPT.
Level 2: Reasoners – AI that can solve basic problems as well as a person with a PhD but without tools.
Level 3: Agents – AI systems that can take actions on a user's behalf.
Level 4: Innovators – AI that can aid in developing new inventions.
Level 5: Organizations – AI that can do the work of an entire organization.

AGI is the long-term goal for many companies involved in the AI arms race, including Mark Zuckerberg's Meta.

While OpenAI believes it is currently at Level 1, its spokesperson said the company is on the cusp of reaching the second level, Reasoners.

During the all-hands meeting where the new levels were announced, OpenAI also demonstrated some new research centered around its GPT-4 model, which it believes shows skills approaching human-level reasoning.

The levels, which were designed by OpenAI's senior leadership, are not considered final. As the organization gathers feedback and additional input from its employees and investors, it may alter the levels and definitions over time to better fit the broader understanding of AI progress.

OpenAI's stated mission is to develop safe and beneficial artificial general intelligence for the benefit of humanity. However, earlier this year the company effectively dissolved its safety-oriented Superalignment group after the departure of chief scientist and co-founder Ilya Sutskever, which has raised questions about whether the company can truly live up to its mission statement.


View original post here:

ChatGPT maker OpenAI now has a scale to rank its AI - ReadWrite

Here's how OpenAI will determine how powerful its AI systems are – The Verge

OpenAI has created an internal scale to track the progress its large language models are making toward artificial general intelligence, or AI with human-like intelligence, a spokesperson told Bloomberg.

Today's chatbots, like ChatGPT, are at Level 1. OpenAI claims it is nearing Level 2, defined as a system that can solve basic problems at the level of a person with a PhD. Level 3 refers to AI agents capable of taking actions on a user's behalf. Level 4 involves AI that can create new innovations. Level 5, the final step to achieving AGI, is AI that can perform the work of entire organizations of people. OpenAI has previously defined AGI as "a highly autonomous system surpassing humans in most economically valuable tasks."
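Read as a data structure, the reported scale is simply an ordered list of labeled capability tiers. Here is a minimal sketch in Python, purely illustrative: the level names and one-line summaries come from the reporting above, while the CapabilityLevel class and describe helper are hypothetical, not anything OpenAI has published.

```python
from dataclasses import dataclass

# Illustrative only: the names and summaries reflect the Bloomberg/Verge
# reporting; the data structure itself is a hypothetical way to encode the
# scale, not code from OpenAI.

@dataclass(frozen=True)
class CapabilityLevel:
    rank: int      # 1 (lowest) through 5 (AGI)
    name: str      # label reported by Bloomberg
    summary: str   # one-line definition from the reporting

AGI_SCALE = [
    CapabilityLevel(1, "Chatbots", "conversational AI, like today's ChatGPT"),
    CapabilityLevel(2, "Reasoners", "solves basic problems at the level of a person with a PhD, without tools"),
    CapabilityLevel(3, "Agents", "takes actions on a user's behalf over several days"),
    CapabilityLevel(4, "Innovators", "helps develop new inventions"),
    CapabilityLevel(5, "Organizations", "performs the work of an entire organization"),
]

def describe(rank: int) -> str:
    """Return a readable description of a reported level (1..5)."""
    level = AGI_SCALE[rank - 1]
    return f"Level {level.rank} ({level.name}): {level.summary}"

print(describe(2))  # the tier OpenAI says it is approaching
```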

OpenAI's unique structure is centered around its mission of achieving AGI, and how OpenAI defines AGI is important. The company has said that if a value-aligned, safety-conscious project comes close to building AGI before OpenAI does, it commits to not competing with that project and to dropping everything to assist it. The phrasing of this in OpenAI's charter is vague, leaving room for the judgment of the for-profit entity (governed by the nonprofit), but a scale that OpenAI can test itself and competitors against could help define when AGI is reached in clearer terms.

Still, AGI remains quite a ways away: it will take billions upon billions of dollars' worth of computing power to reach AGI, if it is reached at all. Timelines from experts, and even within OpenAI, vary wildly. In October 2023, OpenAI CEO Sam Altman said we are "five years, give or take" from reaching AGI.

This new grading scale, though still under development, was introduced a day after OpenAI announced its collaboration with Los Alamos National Laboratory, which aims to explore how advanced AI models like GPT-4o can safely assist in bioscientific research. A program manager at Los Alamos, responsible for the national security biology portfolio and instrumental in securing the OpenAI partnership, told The Verge that the goal is to test GPT-4o's capabilities and establish a set of safety and other factors for the US government. Eventually, public and private models could be tested against these factors.
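Testing models against an established set of factors amounts to a simple evaluation harness. Below is a minimal sketch, with the caveat that the factor names and the scoring interface are entirely hypothetical assumptions for illustration, not anything Los Alamos or OpenAI has described.

```python
from typing import Callable, Dict

# Hypothetical factor names for illustration only; the real evaluation
# criteria being established for the US government are not public.
FACTORS = ["biosecurity_refusals", "factual_accuracy", "jailbreak_resistance"]

def evaluate(score_fn: Callable[[str], float]) -> Dict[str, float]:
    """Run a model's scoring function over every factor (scores in 0..1)."""
    return {factor: score_fn(factor) for factor in FACTORS}

# Usage: any model, public or private, plugs in via a scoring callable.
# This stub "model" scores 0.5 on every factor.
print(evaluate(lambda factor: 0.5))
```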

In May, OpenAI dissolved its safety team after the group's leader, OpenAI co-founder Ilya Sutskever, left the company. Jan Leike, a key OpenAI researcher, resigned shortly after, claiming in a post that "safety culture and processes have taken a backseat to shiny products" at the company. While OpenAI denied that was the case, some are concerned about what this means if the company does in fact reach AGI.

OpenAI hasn't provided details on how it assigns models to these internal levels (and declined The Verge's request for comment). However, company leaders demonstrated a research project using the GPT-4 AI model during an all-hands meeting on Thursday and believe the project showcases new skills that exhibit human-like reasoning, according to Bloomberg.

This scale could help provide a strict definition of progress, rather than leaving it up to interpretation. For instance, OpenAI CTO Mira Murati said in an interview in June that the models in its labs are not much better than what the public already has. Meanwhile, CEO Sam Altman said late last year that the company had recently "pushed the veil of ignorance back," meaning its models are remarkably more intelligent.

Excerpt from:

Heres how OpenAI will determine how powerful its AI systems are - The Verge

OpenAI may be working on AI that can perform research without human help – which should go fine – TechRadar

OpenAI is developing a new project called Strawberry to enhance its AI models' reasoning capabilities, according to documents first discovered by Reuters. The project is a key element in OpenAI's efforts to build more powerful AI models capable of carrying out research on their own.

According to the internal documents Reuters reviewed, Strawberry is aimed at building an AI that will not only answer questions but also search online and perform follow-up research on its own. This so-called "deep research" capability would be a major leap beyond current AI models, which rely on existing data sets and respond in pre-programmed ways.

There aren't details on the exact mechanisms of Strawberry, but it apparently involves AI models using a specialized processing method after training on extensive datasets. This approach could set a new standard in AI development. An AI that can think ahead and perform research on its own to understand the world is much closer to a human than anything ChatGPT or other AI-powered tools offer. It's a challenging goal that has eluded AI developers to date, despite numerous advancements in the field.
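The behavior Reuters describes, answering a question and then autonomously searching and refining, amounts to an answer-search-refine loop. Here is a minimal sketch of that pattern, with the strong caveat that nothing below reflects how Strawberry actually works: ask_model and web_search are hypothetical stubs standing in for a model API and a search API.

```python
def ask_model(prompt: str) -> str:
    """Hypothetical stub for an LLM call; a real version would hit a model API."""
    return f"answer[{prompt[:30]}...]"

def web_search(query: str) -> str:
    """Hypothetical stub for a search API call."""
    return f"results[{query[:30]}...]"

def deep_research(question: str, max_rounds: int = 3) -> str:
    """Draft an answer, then autonomously search and refine it for a few rounds."""
    answer = ask_model(question)
    for _ in range(max_rounds):
        # The model itself decides what to look up next (the "follow-up research").
        follow_up = ask_model(f"What is still uncertain about: {answer}?")
        evidence = web_search(follow_up)
        answer = ask_model(f"Refine '{answer}' given: {evidence}")
    return answer

print(deep_research("Why is the sky blue?"))
```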

Reuters previously reported that Strawberry, then known as Q*, had already made some breakthroughs. In demonstrations, viewers saw AI tackle science and math problems beyond the reach of commercial models, and OpenAI had reportedly tested AI models that scored over 90% on a championship-level math problem data set.

Should OpenAI achieve its goals, these reasoning capabilities could transform scientific research and everyday problem-solving. Such an AI could identify gaps in scientific knowledge and even offer up hypotheses to fill them, vastly accelerating the pace of discovery across various domains.

If successful, Strawberry could mark a pivotal moment in AI research, bringing us closer to truly autonomous AI systems capable of conducting independent research and offering more sophisticated reasoning. Strawberry is, it seems, part and parcel of OpenAI's long-term plans to demonstrate and enhance the potential of its AI models.

Even after GPT-3 and GPT-4 set new benchmarks for language processing and generation, there is a big leap from there to autonomous reasoning and deep research. But it fits with other work on the road to artificial general intelligence (AGI), including the recent development of an internal scale for charting the progress of large language models.


Read the original post:

OpenAI may be working on AI that can perform research without human help – which should go fine - TechRadar

OpenAI has a new scale for measuring how smart their AI models are becoming – which is not as comforting as it should be – TechRadar

OpenAI has developed an internal scale for charting the progress of its large language models toward artificial general intelligence (AGI), according to a report from Bloomberg.

AGI usually means AI with human-like intelligence and is considered the broad goal for AI developers. In earlier references, OpenAI defined AGI as "a highly autonomous system surpassing humans in most economically valuable tasks." That's a point far beyond current AI capabilities. This new scale aims to provide a structured framework for tracking the advancements and setting benchmarks in that pursuit.

The scale introduced by OpenAI breaks the progress down into five levels, or milestones, on the path to AGI. ChatGPT and its rival chatbots are Level 1. OpenAI claims to be on the brink of reaching Level 2, which would be an AI system capable of matching a human with a PhD when it comes to solving basic problems. That might be a reference to GPT-5, which OpenAI CEO Sam Altman has said will be a "significant leap forward." After Level 2, the levels become increasingly complex. Level 3 would be an AI agent capable of handling tasks for you without you being there, while a Level 4 AI would actually invent new ideas and concepts. At Level 5, the AI would be able to take over tasks not just for an individual but for entire organizations.

The level idea makes sense for OpenAI or really any developer. In fact, a comprehensive framework not only helps OpenAI internally but may also set a universal standard that could be applied to evaluate other AI models.

Still, achieving AGI is not going to happen immediately. Previous comments by Altman and others at OpenAI suggest as little as five years, but timelines vary significantly among experts. The amount of computing power necessary and the financial and technological challenges are substantial.

That's on top of the ethics and safety questions sparked by AGI. There's some very real concern about what AI at that level would mean for society, and OpenAI's recent moves may not reassure anyone. In May, the company dissolved its safety team following the departure of its leader and OpenAI co-founder, Ilya Sutskever. High-level researcher Jan Leike also quit, citing concerns that OpenAI's safety culture was being ignored. Nonetheless, by offering a structured framework, OpenAI aims to set concrete benchmarks for its models and those of its competitors, and maybe help all of us prepare for what's coming.


Go here to see the original:

OpenAI has a new scale for measuring how smart their AI models are becoming – which is not as comforting as it should be - TechRadar

OpenAI says there are 5 ‘levels’ for AI to reach human intelligence – it’s already almost at level 2 – Quartz

OpenAI CEO Sam Altman at the AI Insight Forum in the Russell Senate Office Building on Capitol Hill on September 13, 2023, in Washington, D.C. Photo: Chip Somodevilla (Getty Images)

OpenAI is undoubtedly one of the leaders in the race to reach human-level artificial intelligence, and it's reportedly four steps away from getting there.


The company this week shared with employees a five-level system it developed to track its progress toward artificial general intelligence, or AGI, an OpenAI spokesperson told Bloomberg. The levels range from currently available conversational AI to AI that can perform the same amount of work as an organization. OpenAI will reportedly share the levels with investors and others outside the company.

While OpenAI executives believe the company is at the first level, the spokesperson said it is close to level two, defined as Reasoners: AI that can perform basic problem-solving at the level of a human with a doctorate degree but no access to tools. The third level of OpenAI's system is reportedly called Agents, and is AI that can perform different actions for several days on behalf of its user. The fourth level is reportedly called Innovators, and describes AI that can help develop new inventions.

OpenAI leaders also showed employees a research project involving GPT-4 that they believe demonstrates human-like reasoning skills, Bloomberg reported, citing an unnamed person familiar with the matter. The company declined to comment further.

The system was reportedly developed by OpenAI executives and leaders, who may eventually change the levels based on feedback from employees, investors, and the company's board.

In May, OpenAI disbanded its Superalignment team, which was responsible for working on the problem of AI's existential dangers. The company said the team's work would be absorbed by other research efforts across OpenAI.

See the rest here:

OpenAI says there are 5 'levels' for AI to reach human intelligence – it's already almost at level 2 - Quartz