Archive for the ‘Artificial General Intelligence’ Category

OpenAI's Project Strawberry Said to Be Building AI That Reasons and Does Deep Research – Singularity Hub

Despite their uncanny language skills, today's leading AI chatbots still struggle with reasoning. A secretive new project from OpenAI could reportedly be on the verge of changing that.

While today's large language models can already carry out a host of useful tasks, they're still a long way from replicating the kind of problem-solving capabilities humans have. In particular, they're not good at dealing with challenges that require them to take multiple steps to reach a solution.

Imbuing AI with those kinds of skills would greatly increase its utility and has been a major focus for many of the leading research labs. According to recent reports, OpenAI may be close to a breakthrough in this area.

An article in Reuters claimed its journalists had been shown an internal document from the company discussing a project code-named Strawberry that is building models capable of planning, navigating the internet autonomously, and carrying out what OpenAI refers to as "deep research."

A separate story from Bloomberg said the company had demoed research at a recent all-hands meeting that gave its GPT-4 model skills described as similar to human reasoning abilities. It's unclear whether the demo was part of project Strawberry.

According to the Reuters report, project Strawberry is an extension of the Q* project that was revealed last year, just before OpenAI CEO Sam Altman was ousted by the board. The model in question was supposedly capable of solving grade-school math problems.

That might sound innocuous, but some inside the company believed it signaled a breakthrough in problem-solving capabilities that could accelerate progress towards artificial general intelligence, or AGI. Math has long been an Achilles heel for large language models, and capabilities in this area are seen as a good proxy for reasoning skills.

A source told Reuters that OpenAI has tested a model internally that achieved a 90 percent score on a challenging test of AI math skills, though it again couldn't confirm whether this was related to project Strawberry. But another two sources reported seeing demos from the Q* project that involved models solving math and science questions that would be beyond today's leading commercial AIs.

Exactly how OpenAI has achieved these enhanced capabilities is unclear at present. The Reuters report notes that Strawberry involves fine-tuning OpenAI's existing large language models, which have already been trained on reams of data. The approach, according to the article, is similar to one detailed in a 2022 paper from Stanford researchers called Self-Taught Reasoner, or STaR.

That method builds on a concept known as chain-of-thought prompting, in which a large language model is asked to explain the reasoning steps behind its answer to a query. In the STaR paper, the authors showed an AI model a handful of these chain-of-thought rationales as examples and then asked it to come up with answers and rationales for a large number of questions.

If it got the question wrong, the researchers would show the model the correct answer and then ask it to come up with a new rationale. The model was then fine-tuned on all of the rationales that led to a correct answer, and the process was repeated. This led to significantly improved performance on multiple datasets, and the researchers note that the approach effectively allowed the model to self-improve by training on reasoning data it had produced itself.
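To make the method concrete, here is a minimal Python sketch of one STaR round as described above, assuming a generic model interface. The generate and fine_tune callables are hypothetical placeholders for an LLM's sampling and training APIs; this is an illustration of the published idea, not OpenAI's or the Stanford authors' actual code.

```python
# Minimal sketch of one STaR (Self-Taught Reasoner) round, as described above.
# `generate` and `fine_tune` are hypothetical placeholders for a model's
# sampling and training interfaces; they are not any lab's real API.

from typing import Callable, List, Tuple

Example = Tuple[str, str]          # (question, correct_answer)
Rationale = Tuple[str, str, str]   # (question, chain_of_thought, answer)


def star_round(
    generate: Callable[[str, str], Tuple[str, str]],   # (question, hint) -> (chain_of_thought, answer)
    fine_tune: Callable[[List[Rationale]], None],      # trains the model on the kept rationales
    dataset: List[Example],
) -> List[Rationale]:
    """Collect self-generated rationales that lead to correct answers, then fine-tune on them."""
    kept: List[Rationale] = []
    for question, correct_answer in dataset:
        # 1. Chain-of-thought prompting: ask the model to reason step by step, then answer.
        rationale, answer = generate(question, "")
        if answer != correct_answer:
            # 2. If wrong, show the correct answer and ask for a fresh rationale.
            rationale, answer = generate(question, correct_answer)
        if answer == correct_answer:
            # 3. Keep only rationales that end in the right answer.
            kept.append((question, rationale, answer))
    # 4. Fine-tune on the self-generated reasoning data; repeating rounds is the self-improvement loop.
    fine_tune(kept)
    return kept
```

Repeating this loop is what lets the model train on reasoning data it produced itself, which is the "self-improve" behavior the paragraph above describes.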

How closely Strawberry mimics this approach is unclear, but if it relies on self-generated data, that could be significant. The holy grail for many AI researchers is recursive self-improvement, in which weak AI can enhance its own capabilities to bootstrap itself to higher orders of intelligence.

However, it's important to take vague leaks from commercial AI research labs with a pinch of salt. These companies are highly motivated to give the appearance of rapid progress behind the scenes.

The fact that project Strawberry seems to be little more than a rebranding of Q*, which was first reported over six months ago, should give pause. As far as concrete results go, publicly demonstrated progress has been fairly incremental, with the most recent AI releases from OpenAI, Google, and Anthropic providing modest improvements over previous versions.

At the same time, it would be unwise to discount the possibility of a significant breakthrough. Leading AI companies have been pouring billions of dollars into making the next great leap in performance, and reasoning has been an obvious bottleneck on which to focus resources. If OpenAI has genuinely made a significant advance, it probably won't be long until we find out.

Image Credit: gemenu / Pixabay


One of the Best Ways to Invest in AI Is "Don't" – InvestorPlace

Hello, Reader.

Investment trends come and go.

Their fleeting nature is a reminder that whats hot today may be forgotten tomorrow.

So, rather than succumbing to FOMO, or the fear of missing out, savvy investors often find value in embracing JOMO, the joy of missing out. Or, I should say, not missing out entirely, but rather looking where others aren't.

This perspective can lead to unique opportunities, particularly in the world of artificial intelligence. While AI isn't an investment trend that's likely to let up anytime soon, and it is one I will continue to follow, I do believe that one of the best ways to invest in AI may be to invest in what it isn't.

In other words, invest in the industries or assets that AI could never replace. Not even the Artificial General Intelligence (AGI) that is on its way.

No matter how intelligent AI becomes, it will never morph into timberland. It will never sprout into a lemon tree or transform itself into an ocean freighter, platinum ingot, espresso bean, or stretch of sandy beach.

A select few industries are so future-proof that they deserve our attention and a place in our portfolios.

So, in today's Smart Money, we'll explore my AI future-proof investing strategy and its potential for long-term success. I'll even reveal several specific sectors that could help secure and increase your wealth as we go further along the road to AGI.

Let's dive in…

Admittedly, the biggest gains from the next few years will come from investing directly in technologies that either facilitate AI or benefit from it.

But this high-reward approach also entails relatively high risks, simply because the future capabilities of AI are a known unknown. They are difficult to specify or quantify at this stage.

Perhaps, for example, a technology that facilitates the early stages of AI's development could become a victim of AGI's later development. In other words, it is a certainty that AI will continuously create and destroy tech-centric businesses as it grows and matures.

Therefore, I suspect a two-pronged approach to AI investing could deliver the optimal balance between risk and reward.

The first prong is to invest directly in the technologies or industries that seem likely to prosper in an increasingly AI-centric world. Many pharmaceutical and biotech companies would fall into this category. (That's why I am keeping my eye on AGI, in which AI systems are trained to achieve true human-like intelligence. I will present my findings soon in my premium service, The Speculator, so watch out for an invitation to that.)

Investing directly in AI beneficiaries offers the greatest promise of capturing future 10-baggers and staying ahead of the creative-destruction curve. But we are unlikely to connect on every swing.

That's why the second prong of my AI strategy is so valuable and essential.

This prong focuses on investing in the industries or assets that AI will never replace. These are things that an AI-centric world will require, no matter how intelligent it becomes.

A short list of examples might include industries like ocean shipping, rail and air transportation, agriculture and timber, and travel and tourism.

These industries might not be completely future-proof from the onslaught of AI, but they are at least close to it.

To expand, AI will certainly create fleets of completely autonomous, self-piloting freighters at some point. AI might also overhaul the drivetrains and/or fuel sources that power these ships, but it will not replace the ships themselves or the need to transport bulk goods across the Seven Seas.

Similarly, AI will not eliminate the need for trains or planes. Neither will it end demand for lumber, wheat, or pineapples. And it will not curb the human desire to travel. For as long as the robots of the future allow us humans to travel, we will continue to do so.

Importantly, many future-proof industries not only offer protection from the destructive side of AI, but they could also benefit immensely from its creative side. In many of these old-school industries, new AI-enabled processes could boost their efficiency and fatten profit margins.

Consider, for example, how AI might influence how people travel and enhance the overall travel experience.

These AI-enabled enhancements will not only improve travel experiences, but also boost the profitability of travel and tourism companies, all else being equal.

Investing in indispensable, future-proof industries like shipping or travel might not deliver spectacular gains over the coming years, but it should provide more reliable gains than many AI-focused tech stocks will deliver.

In fact, as we continue along the road to AGI (as we'll be discussing in much greater detail soon at The Speculator), the world's wealthiest investors have been moving their money out of the tech sector in what's being dubbed "The Great Cash-Out."

If you have any money in the markets, especially in tech stocks, you'll want to prepare for this coming exodus. Although JOMO has its place, this movement is one you won't want to miss out on.

So, check out this video from me for all the details.

Regards,

Eric Fry


OpenAI is plagued by safety concerns – The Verge

OpenAI is a leader in the race to develop AI as intelligent as a human. Yet, employees continue to show up in the press and on podcasts to voice their grave concerns about safety at the $80 billion nonprofit research lab. The latest comes from The Washington Post, where an anonymous source claimed OpenAI rushed through safety tests and celebrated its product before ensuring its safety.

"They planned the launch after-party prior to knowing if it was safe to launch," an anonymous employee told The Washington Post. "We basically failed at the process."

Safety issues loom large at OpenAI and seem to just keep coming. Current and former employees at OpenAI recently signed an open letter demanding better safety and transparency practices from the startup, not long after its safety team was dissolved following the departure of cofounder Ilya Sutskever. Jan Leike, a key OpenAI researcher, resigned shortly after, claiming in a post that "safety culture and processes have taken a backseat to shiny products" at the company.

Safety is core to OpenAI's charter, with a clause that claims OpenAI will assist other organizations to advance safety if artificial general intelligence, or AGI, is reached at a competitor instead of continuing to compete. It claims to be dedicated to solving the safety problems inherent to such a large, complex system. OpenAI even keeps its proprietary models private, rather than open (causing jabs and lawsuits), for the sake of safety. The warnings make it sound as though safety has been deprioritized despite being so paramount to the culture and structure of the company.


"We're proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk," OpenAI spokesperson Taya Christianson said in a statement to The Verge. "Rigorous debate is critical given the significance of this technology, and we will continue to engage with governments, civil society and other communities around the world in service of our mission."

The stakes around safety, according to OpenAI and others studying the emergent technology, are immense. "Current frontier AI development poses urgent and growing risks to national security," a report commissioned by the US State Department in March said. "The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons."

The alarm bells at OpenAI also follow the boardroom coup last year that briefly ousted CEO Sam Altman. The board said he was removed due to a failure to be consistently candid in his communications, leading to an investigation that did little to reassure the staff.

OpenAI spokesperson Lindsey Held told the Post the GPT-4o launch didn't cut corners on safety, but another unnamed company representative acknowledged that the safety review timeline was compressed to a single week. "We are rethinking our whole way of doing it," the anonymous representative told the Post. "This [was] just not the best way to do it."

Do you know more about what's going on inside OpenAI? I'd love to chat. You can reach me securely on Signal, where I'm @kylie.01, or via email at kylie@theverge.com.

In the face of rolling controversies (remember the Her incident?), OpenAI has attempted to quell fears with a few well-timed announcements. This week, it announced it is teaming up with Los Alamos National Laboratory to explore how advanced AI models, such as GPT-4o, can safely aid in bioscientific research, and in the same announcement, it repeatedly pointed to Los Alamos' own safety record. The next day, an anonymous spokesperson told Bloomberg that OpenAI created an internal scale to track the progress its large language models are making toward artificial general intelligence.

This week's safety-focused announcements from OpenAI appear to be defensive window dressing in the face of growing criticism of its safety practices. It's clear that OpenAI is in the hot seat, but public relations efforts alone won't suffice to safeguard society. What truly matters is the potential impact on those beyond the Silicon Valley bubble if OpenAI continues to fail to develop AI with strict safety protocols, as those inside the company claim it is doing: the average person has no say in the development of privatized AGI and no choice in how protected they'll be from OpenAI's creations.

"AI tools can be revolutionary," FTC Chair Lina Khan told Bloomberg in November. But as of right now, she said, there are concerns that "the critical inputs of these tools are controlled by a relatively small number of companies."

If the numerous claims against the company's safety protocols are accurate, this surely raises serious questions about OpenAI's fitness for its role as steward of AGI, a role that the organization has essentially assigned to itself. Allowing one group in San Francisco to control potentially society-altering technology is cause for concern, and there's an urgent demand, even within its own ranks, for transparency and safety now more than ever.


OpenAI reportedly nears breakthrough with reasoning AI, reveals progress framework – Ars Technica

OpenAI recently unveiled a five-tier system to gauge its advancement toward developing artificial general intelligence (AGI), according to an OpenAI spokesperson who spoke with Bloomberg. The company shared this new classification system on Tuesday with employees during an all-hands meeting, aiming to provide a clear framework for understanding AI advancement. However, the system describes hypothetical technology that does not yet exist and is possibly best interpreted as a marketing move to garner investment dollars.

OpenAI has previously stated that AGI (a nebulous term for a hypothetical AI system that can perform novel tasks like a human without specialized training) is currently the primary goal of the company. The pursuit of technology that can replace humans at most intellectual work drives most of the enduring hype over the firm, even though such a technology would likely be wildly disruptive to society.

OpenAI CEO Sam Altman has previously stated his belief that AGI could be achieved within this decade, and a large part of the CEO's public messaging has been related to how the company (and society in general) might handle the disruption that AGI may bring. Along those lines, a ranking system to communicate AI milestones achieved internally on the path to AGI makes sense.

OpenAI's five levels, which it plans to share with investors, range from current AI capabilities to systems that could potentially manage entire organizations. The company believes its technology (such as the GPT-4o model that powers ChatGPT) currently sits at Level 1, which encompasses AI that can engage in conversational interactions. However, OpenAI executives reportedly told staff they're on the verge of reaching Level 2, dubbed "Reasoners."

Bloomberg lists OpenAI's five "Stages of Artificial Intelligence" as follows:

Level 1: Chatbots, AI with conversational language.
Level 2: Reasoners, systems capable of human-level problem solving.
Level 3: Agents, systems that can take actions autonomously.
Level 4: Innovators, AI that can aid in invention.
Level 5: Organizations, AI that can do the work of an entire organization.

A Level 2 AI system would reportedly be capable of basic problem-solving on par with a human who holds a doctorate degree but lacks access to external tools. During the all-hands meeting, OpenAI leadership reportedly demonstrated a research project using their GPT-4 model that the researchers believe shows signs of approaching this human-like reasoning ability, according to someone familiar with the discussion who spoke with Bloomberg.

The upper levels of OpenAI's classification describe increasingly potent hypothetical AI capabilities. Level 3 "Agents" could work autonomously on tasks for days. Level 4 systems would generate novel innovations. The pinnacle, Level 5, envisions AI managing entire organizations.
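For reference, the reported taxonomy can be captured in a few lines of code. The sketch below encodes the five levels as a Python enum; the names and one-line descriptions follow the Bloomberg-reported list above, and the code is purely illustrative rather than anything OpenAI has published.

```python
# Illustrative encoding of the five reported "Stages of Artificial Intelligence."
# Names and descriptions follow the Bloomberg report summarized above; this is
# not an official OpenAI artifact.

from enum import IntEnum


class OpenAIStage(IntEnum):
    CHATBOTS = 1       # conversational AI, where OpenAI places its current models
    REASONERS = 2      # human-level problem solving, reportedly within reach
    AGENTS = 3         # systems that can act autonomously on tasks for days
    INNOVATORS = 4     # systems that can generate novel innovations
    ORGANIZATIONS = 5  # systems that can manage the work of an entire organization


# Example: the report places GPT-4o at Level 1, approaching Level 2.
current = OpenAIStage.CHATBOTS
print(current.name, int(current))  # prints: CHATBOTS 1
```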

This classification system is still a work in progress. OpenAI plans to gather feedback from employees, investors, and board members, potentially refining the levels over time.

Ars Technica asked OpenAI about the ranking system and the accuracy of the Bloomberg report, and a company spokesperson said they had "nothing to add."

OpenAI isn't alone in attempting to quantify levels of AI capabilities. As Bloomberg notes, OpenAI's system feels similar to levels of autonomous driving mapped out by automakers. And in November 2023, researchers at Google DeepMind proposed their own five-level framework for assessing AI advancement, showing that other AI labs have also been trying to figure out how to rank things that don't yet exist.

OpenAI's classification system also somewhat resembles Anthropic's "AI Safety Levels" (ASLs) first published by the maker of the Claude AI assistant in September 2023. Both systems aim to categorize AI capabilities, though they focus on different aspects. Anthropic's ASLs are more explicitly focused on safety and catastrophic risks (such as ASL-2, which refers to "systems that show early signs of dangerous capabilities"), while OpenAI's levels track general capabilities.

However, any AI classification system raises questions about whether it's possible to meaningfully quantify AI progress and what constitutes an advancement (or even what constitutes a "dangerous" AI system, as in the case of Anthropic). The tech industry so far has a history of overpromising AI capabilities, and linear progression models like OpenAI's potentially risk fueling unrealistic expectations.

There is currently no consensus in the AI research community on how to measure progress toward AGI or even if AGI is a well-defined or achievable goal. As such, OpenAI's five-tier system should likely be viewed as a communications tool to entice investors that shows the company's aspirational goals rather than a scientific or even technical measurement of progress.


ChatGPT maker OpenAI now has a scale to rank its AI – ReadWrite

An OpenAI spokesperson has shared the company's new five-tier system for ranking its progress toward achieving Artificial General Intelligence (AGI), reports Bloomberg.

The levels, which were announced internally by the company behind ChatGPT at an all-hands meeting before being shared externally, are designed to guide thinking about artificial intelligence (AI) and its capabilities as the company works to develop models with real reasoning abilities.

The levels in the system were outlined like this: Level 1, chatbots with conversational language; Level 2, reasoners capable of human-level problem solving; Level 3, agents that can take actions; Level 4, innovators that can aid in invention; and Level 5, organizations, meaning AI that can do the work of an entire organization.

AGI is the long-term goal for many companies involved in the AI arms race, including Mark Zuckerberg's Meta.

While OpenAI believes it is currently at Level 1, its spokesperson said it is on the cusp of reaching the second level, Reasoners.

During the all-hands meeting where the new levels were announced, OpenAI also demonstrated some new research centered around its GPT-4 model, which it believes shows skills approaching human-level reasoning.

The levels, which were designed by OpenAI's senior leadership team and executives, are not considered final. As the organization gathers feedback and additional input from its employees and investors, it may alter the levels and definitions over time to better fit the broader understanding of AI progress.

OpenAI's stated mission is to develop safe and beneficial artificial general intelligence for the benefit of humanity. However, earlier this year the company effectively dissolved its safety-oriented Superalignment group after the departure of Chief Scientist and co-founder Ilya Sutskever. This has led to questions about whether the company can truly live up to its mission statement.

Featured image credit: generated with Ideogram
