Archive for the ‘Artificial General Intelligence’ Category

Here's how OpenAI will determine how powerful its AI systems are – The Verge

OpenAI has created an internal scale to track the progress its large language models are making toward artificial general intelligence, or AI with human-like intelligence, a spokesperson told Bloomberg.

Today's chatbots, like ChatGPT, are at Level 1. OpenAI claims it is nearing Level 2, defined as a system that can solve basic problems at the level of a person with a PhD. Level 3 refers to AI agents capable of taking actions on a user's behalf. Level 4 involves AI that can create new innovations. Level 5, the final step to achieving AGI, is AI that can perform the work of entire organizations of people. OpenAI has previously defined AGI as "a highly autonomous system surpassing humans in most economically valuable tasks."

OpenAI's unique structure is centered around its mission of achieving AGI, so how OpenAI defines AGI matters. The company has said that if a value-aligned, safety-conscious project comes close to building AGI before OpenAI does, it commits to not competing with that project and to dropping everything to assist it. The phrasing in OpenAI's charter is vague, leaving room for the judgment of the for-profit entity (governed by the nonprofit), but a scale that OpenAI can test itself and competitors against could help define in clearer terms when AGI has been reached.

Still, AGI remains a long way off: reaching it will take billions upon billions of dollars' worth of computing power, if it happens at all. Timelines from experts, and even within OpenAI, vary wildly. In October 2023, OpenAI CEO Sam Altman said we are five years, give or take, from reaching AGI.

This new grading scale, though still under development, was introduced a day after OpenAI announced its collaboration with Los Alamos National Laboratory, which aims to explore how advanced AI models like GPT-4o can safely assist in bioscientific research. A program manager at Los Alamos, responsible for the national security biology portfolio and instrumental in securing the OpenAI partnership, told The Verge that the goal is to test GPT-4o's capabilities and establish a set of safety and other factors for the US government. Eventually, public or private models could be evaluated against these factors.

In May, OpenAI dissolved its safety team after the group's leader, OpenAI cofounder Ilya Sutskever, left the company. Jan Leike, a key OpenAI researcher, resigned shortly after, claiming in a post that "safety culture and processes have taken a backseat to shiny products" at the company. While OpenAI denied that was the case, some are concerned about what this means if the company does in fact reach AGI.

OpenAI hasn't provided details on how it assigns models to these internal levels (and declined The Verge's request for comment). However, company leaders demonstrated a research project using the GPT-4 AI model during an all-hands meeting on Thursday and believe this project showcases some new skills that exhibit human-like reasoning, according to Bloomberg.

This scale could help provide a strict definition of progress, rather than leaving it up to interpretation. For instance, OpenAI CTO Mira Murati said in an interview in June that the models in its labs are not much better than what the public already has. Meanwhile, CEO Sam Altman said late last year that the company had recently pushed the veil of ignorance back, meaning its models are remarkably more intelligent.

Excerpt from:

Here's how OpenAI will determine how powerful its AI systems are - The Verge

OpenAI may be working on AI that can perform research without human help, which should go fine – TechRadar

OpenAI is developing a new project, called Strawberry, to enhance its AI models' reasoning capabilities, according to documents first discovered by Reuters. The project is a key element of OpenAI's effort to build more powerful AI models capable of performing research on their own.

According to the internal documents Reuters looked at, Strawberry is aimed at building an AI that will not only answer questions but also search online and perform follow-up research on its own. This so-called "deep research" capability would be a major leap beyond current AI models, which rely on existing datasets and respond in pre-programmed ways.

There aren't details on the exact mechanisms of Strawberry, but it apparently involves AI models using a specialized processing method after training on extensive datasets. This approach could potentially set a new standard in AI development. An AI that can think ahead and research the world on its own would be much closer to human reasoning than anything ChatGPT or other current AI tools offer. It's a challenging goal that has eluded AI developers to date, despite numerous advancements in the field.

Reuters reported that Strawberry, previously known as Q*, had already made some breakthroughs. There were demonstrations in which viewers saw AI tackle science and math problems beyond the reach of commercial models, and OpenAI had apparently tested AI models that scored over 90% on a dataset of championship-level math problems.

Should OpenAI achieve its goals, those reasoning capabilities could transform scientific research and everyday problem-solving. They could help close gaps in scientific knowledge by identifying them and even proposing hypotheses to fill them, vastly accelerating the pace of discovery across domains.

If successful, Strawberry could mark a pivotal moment in AI research, bringing us closer to truly autonomous AI systems capable of conducting independent research and offering more sophisticated reasoning. Strawberry is, it seems, part and parcel of OpenAI's long-term plans to demonstrate and enhance the potential of its AI models.

Even after GPT-3 and GPT-4 set new benchmarks for language processing and generation, there's a big leap from there to autonomous reasoning and deep research. But it fits with other work on the road to artificial general intelligence (AGI), including the recent development of an internal scale for charting the progress of large language models.


Read the original post:

OpenAI may be working on AI that can perform research without human help, which should go fine - TechRadar

OpenAI has a new scale for measuring how smart its AI models are becoming, which is not as comforting as it should be – TechRadar

OpenAI has developed an internal scale for charting the progress of its large language models toward artificial general intelligence (AGI), according to a report from Bloomberg.

AGI usually means AI with human-like intelligence and is considered the broad goal for AI developers. In earlier references, OpenAI defined AGI as "a highly autonomous system surpassing humans in most economically valuable tasks." That's a point far beyond current AI capabilities. This new scale aims to provide a structured framework for tracking the advancements and setting benchmarks in that pursuit.

The scale introduced by OpenAI breaks the progress down into five levels, or milestones, on the path to AGI. ChatGPT and its rival chatbots are Level 1. OpenAI claimed to be on the brink of reaching Level 2, which would be an AI system capable of matching a human with a PhD when it comes to solving basic problems. That might be a reference to GPT-5, which OpenAI CEO Sam Altman has said will be a "significant leap forward." After Level 2, the levels become increasingly ambitious. Level 3 would be an AI agent capable of handling tasks for you without you being there, while a Level 4 AI would actually invent new ideas and concepts. At Level 5, the AI would be able to take over tasks not only for an individual but for entire organizations.

The level idea makes sense for OpenAI or really any developer. In fact, a comprehensive framework not only helps OpenAI internally but may also set a universal standard that could be applied to evaluate other AI models.

Still, achieving AGI is not going to happen immediately. Previous comments by Altman and others at OpenAI suggest as little as five years, but timelines vary significantly among experts. The amount of computing power necessary and the financial and technological challenges are substantial.

That's on top of the ethics and safety questions sparked by AGI. There's some very real concern about what AI at that level would mean for society, and OpenAI's recent moves may not reassure anyone. In May, the company dissolved its safety team following the departure of its leader and OpenAI co-founder, Ilya Sutskever. High-level researcher Jan Leike also quit, citing concerns that OpenAI's safety culture was being ignored. Nonetheless, by offering a structured framework, OpenAI aims to set concrete benchmarks for its models and those of its competitors, and maybe help all of us prepare for what's coming.


Go here to see the original:

OpenAI has a new scale for measuring how smart their AI models are becoming which is not as comforting as it should be - TechRadar

OpenAI says there are 5 ‘levels’ for AI to reach human intelligence: it’s already almost at level 2 – Quartz

OpenAI CEO Sam Altman at the AI Insight Forum in the Russell Senate Office Building on Capitol Hill on September 13, 2023 in Washington, D.C. Photo: Chip Somodevilla (Getty Images)

OpenAI is undoubtedly one of the leaders in the race to reach human-level artificial intelligence, and it's reportedly four steps away from getting there.


The company this week shared with employees a five-level system it developed to track its progress toward artificial general intelligence, or AGI, an OpenAI spokesperson told Bloomberg. The levels go from the currently available conversational AI up to AI that can perform the same amount of work as an entire organization. OpenAI will reportedly share the levels with investors and people outside the company.

While OpenAI executives believe its technology is on the first level, the spokesperson said it is close to Level 2, defined as "Reasoners": AI that can perform basic problem-solving at the level of a human with a doctorate degree but no access to tools. The third level of OpenAI's system is reportedly called "Agents," AI that can perform different actions on behalf of its user for several days. The fourth level is reportedly called "Innovators," and describes AI that can help develop new inventions.
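To make the reported taxonomy easier to scan, here is a minimal, purely illustrative sketch of the five levels as a small Python data structure. The names for Levels 2 through 4 ("Reasoners," "Agents," "Innovators") come from the reporting above; the labels used here for Levels 1 and 5 are descriptive placeholders rather than OpenAI's own terminology.

```python
from enum import IntEnum

class AGILevel(IntEnum):
    """Illustrative encoding of OpenAI's reported five-level AGI scale."""
    CHATBOTS = 1       # conversational AI like ChatGPT (placeholder label)
    REASONERS = 2      # basic problem-solving at the level of a PhD holder, without tools
    AGENTS = 3         # AI that can act on a user's behalf over several days
    INNOVATORS = 4     # AI that can help develop new inventions
    ORGANIZATIONS = 5  # AI that can do the work of an entire organization (placeholder label)

# Example: where the reporting places OpenAI's current systems
current = AGILevel.CHATBOTS
print(f"Current level: {current.value} ({current.name.title()})")
print(f"Levels remaining to the top of the scale: {AGILevel.ORGANIZATIONS - current}")
```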

OpenAI leaders also showed employees a research project with GPT-4 that demonstrated it has human-like reasoning skills, Bloomberg reported, citing an unnamed person familiar with the matter. The company declined to comment further.

The system was reportedly developed by OpenAI executives and leaders, who can eventually change the levels based on feedback from employees, investors, and the company's board.

In May, OpenAI disbanded its Superalignment team, which was responsible for working on the problem of AI's existential dangers. The company said the team's work would be absorbed by other research efforts across OpenAI.

See the rest here:

OpenAI says there are 5 'levels' for AI to reach human intelligence: it's already almost at level 2 - Quartz

AI's Bizarro World: we're marching towards AGI while carbon emissions soar – Fortune

Happy Friday! I've been covering AI as a daily beat for two and a half years now, but recently I've been feeling like we are living in a kind of Bizarro World, the fictional planet in DC Comics (also made famous in Seinfeld) where everything is opposite: beauty is hated, ugliness is prized, goodbye is hello, leading to distorted societal norms, moral values, and logical reasoning.

In AI's Bizarro World, a company like OpenAI can blithely tell employees about creating a five-point checklist to track progress toward building artificial general intelligence (AGI), or AI that is capable of outperforming humans, as Bloomberg reported yesterday, in a bid to develop AGI that benefits all of humanity. At the same time, media headlines can blare about Google's and Microsoft's soaring carbon emissions due to computationally intensive and power-hungry generative AI models, to the detriment of all of humanity.

In AI's Bizarro World, the public is encouraged (and increasingly mandated by their employers) to use tools like OpenAI's ChatGPT and Google's Gemini to increase productivity and boost efficiency (or, let's be honest, just save a little bit of mental energy). In the meantime, according to a report by Goldman Sachs, a ChatGPT query needs nearly 10 times as much electricity as a Google search query. So while millions of Americans are advised to turn down their air conditioning to conserve energy, millions are also asking ChatGPT for an energy-sucking synonym, recipe, or haiku.

In AI's Bizarro World, AI frontier model companies including OpenAI, Anthropic, and Mistral can raise billions of dollars at massive valuations to develop their models, but it is the companies with the picks and shovels they rely on (hello, Nvidia GPUs) that rake in the most money and stock market value for their energy-intensive processes and physical parts.

In AI's Bizarro World, Elon Musk can volunteer his sperm for those looking to procreate in a planned Martian city built by SpaceX, while a proposed supercomputer in Memphis, meant for his AI company X.ai, is expected to add about 150 megawatts to the electric grid's peak demand, an amount that could power tens of thousands of homes.

Of course, there is always a certain amount of madness that goes along with developing new technologies. And the potential for advanced AI systems to help tackle climate change issues (to predict weather, identify pollution, or improve agriculture, for example) is real. In addition, the massive costs of developing and running sophisticated AI models will likely continue to put pressure on companies to make them more energy-efficient.

Still, as Silicon Valley and the rest of California suffer through ever-hotter summers and restricted water use, it seems like sheer lunacy to simply march towards the development of AGI without being equally concerned about data centers guzzling scarce water resources, AI computing power burning excess electricity, and Big Tech companies quietly stepping away from previously touted climate goals. I don't want Bizarro Superman to guide us toward an AGI future on Bizarro World. I just want a sustainable future on earth, and hopefully, AI can be a part of it.

Sharon Goldman sharon.goldman@fortune.com


Today's edition of Data Sheet was curated by David Meyer.

X could face EU fine. The European Commission says Elon Musk's X has broken the new Digital Services Act, which governs online content, in multiple ways. That includes deceiving users into thinking its paid-for blue checkmarks denote authenticity, not complying with rules about ad transparency, and stopping researchers from accessing its public data. X now gets to defend itself, but if the Commission confirms its preliminary findings, it could issue a fine of up to 6% of global revenue and demand big changes to how X operates.

Apple antitrust. An investigation by India's antitrust body found that Apple has been abusing its position as App Store proprietor by forcing developers to use its billing and payments systems, Reuters reports. Again, the regulator can hit Apple with a fine and tell it to change its ways.

SoftBank buys Graphcore. Japan's SoftBank, which has been promising to go all in on AI, has bought the British AI chip company Graphcore. Graphcore, which counts Nvidia and Arm among its rivals, had been hemorrhaging money for a couple of years and was desperately seeking a buyer. According to TechCrunch, Graphcore CEO Nigel Toon dismissed the reported $500 million figure for the acquisition as inaccurate, but the companies aren't providing financial details about the deal.

The number of AT&T customers affected by someone's illegal downloading of call and text records relating to several months in 2022. The FBI is involved and one person has been arrested, Reuters reports. AT&T reckons the data is not publicly available.

Tesla walks back Robotaxi reveal, sending its stock plummeting, by Bloomberg

65,000 mugs have gone missing at Tesla's German factory, by Marco Quiroz-Gutierrez

Amazon's $20 billion NBA deal isn't riskless. But it's close, by Jason Del Rey

Amazon trails behind in latest U.K. compliance test and is threatened with investigation over poor supplier treatment, by Bloomberg

70,000 students are already using AI textbooks, by Sage Lazzaro

How I raised $100 million for my Silicon Valley startup in a down market, by Amir Khan (Commentary)

This 84-year-old quit an elite job and went $160K into debt to launch his career. Now he's suing ChatGPT to protect writers like him from highway robbery, by the Associated Press

COPIED Act. There's a bipartisan push in the Senate to give artists and journalists more protection against voracious AI models. As The Verge reports, the Content Origin Protection and Integrity from Edited and Deepfaked Media (COPIED) Act would see the creation of security measures that could be added to content to prove its origin and potentially block its use in training AI models. Removing or tampering with these watermarks would be illegal.

Link:

AI's Bizarro World: we're marching towards AGI while carbon emissions soar - Fortune