Archive for the ‘Artificial General Intelligence’ Category

OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say – Reuters

Nov 22 (Reuters) - Ahead of OpenAI CEO Sam Altman's four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said. Prior to his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.

The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though the model performs math only at the level of grade-school students, acing such tests made researchers very optimistic about Q*'s future success, the source said.

Reuters could not independently verify the capabilities of Q* claimed by the researchers.

Sam Altman, CEO of ChatGPT maker OpenAI, arrives for a bipartisan Artificial Intelligence (AI) Insight Forum for all U.S. senators hosted by Senate Majority Leader Chuck Schumer (D-NY) at the U.S. Capitol in Washington, U.S., September 13, 2023. REUTERS/Julia Nikhinson/File Photo

Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math where there is only one right answer implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.
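To make the next-word mechanism concrete, here is a minimal Python sketch. It is not OpenAI's code, and the three-token vocabulary and scores are invented for illustration; it shows why sampling makes open-ended answers vary between runs, while a math prompt has exactly one acceptable output.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample the next token from model scores via temperature softmax.

    Higher temperature flattens the distribution, so repeated runs
    produce more varied output: the "answers can vary widely" effect.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Invented scores for three candidate continuations of "2 + 2 =".
vocab = ["4", "5", "four"]
logits = [3.0, 0.5, 1.5]

# Greedy decoding always picks the single highest-scoring token,
greedy = vocab[max(range(len(logits)), key=lambda i: logits[i])]

# while sampling occasionally emits a wrong answer, which is why
# one-right-answer math tasks are a sharp test of reasoning.
samples = [vocab[sample_next_token(logits)] for _ in range(10)]
print(greedy, samples)
```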

Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.

In their letter to the board, researchers flagged AI's prowess and potential danger, the sources said, without specifying the exact safety concerns noted in the letter. There has long been discussion among computer scientists about the danger posed by highly intelligent machines, for instance if they might decide that the destruction of humanity was in their interest.

Researchers have also flagged work by an "AI scientist" team, the existence of which multiple sources confirmed. The group, formed by combining earlier "Code Gen" and "Math Gen" teams, was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.

Altman led efforts to make ChatGPT one of the fastest-growing software applications in history, and drew from Microsoft the investment, and computing resources, necessary to get closer to AGI.

In addition to announcing a slew of new tools in a demonstration this month, Altman last week teased at a summit of world leaders in San Francisco that he believed major advances were in sight.

"Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime," he said at the Asia-Pacific Economic Cooperation summit.

A day later, the board fired Altman.

Anna Tong and Jeffrey Dastin in San Francisco and Krystal Hu in New York; Editing by Kenneth Li and Lisa Shumaker


Link:

OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say - Reuters

What the OpenAI drama means for AI progress and safety – Nature.com

OpenAI fired its charismatic chief executive, Sam Altman, on 17 November but has now reinstated him. Credit: Justin Sullivan/Getty

OpenAI, the company behind the blockbuster artificial intelligence (AI) bot ChatGPT, has been consumed by frenzied changes for almost a week. On 17 November, the company fired its charismatic chief executive, Sam Altman. Five days, and much drama, later, OpenAI announced that Altman would return with an overhaul of the company's board.

The debacle has thrown the spotlight on an ongoing debate about how commercial competition is shaping the development of AI systems, and how quickly AI can be deployed ethically and safely.

"The push to retain dominance is leading to toxic competition. It's a race to the bottom," says Sarah Myers West, managing director of the AI Now Institute, a policy-research organization based in New York City.

Altman, a successful investor and entrepreneur, was a co-founder of OpenAI and its public face. He had been chief executive since 2019, and oversaw an investment of some US$13 billion from Microsoft. After Altman's initial ousting, Microsoft, which uses OpenAI technology to power its search engine Bing, offered Altman a job leading a new advanced AI research team. Altman's return to OpenAI came after hundreds of company employees signed a letter threatening to follow Altman to Microsoft unless he was reinstated.

The OpenAI board that ousted Altman last week did not give detailed reasons for the decision, saying at first that he was fired because he was "not consistently candid in his communications with the board" and later adding that the decision "had nothing to do with malfeasance or anything related to our financial, business, safety or security/privacy practice".

But some speculate that the firing might have its origins in a reported schism at OpenAI between those focused on commercial growth and those uncomfortable with the strain of rapid development and its possible impacts on the company's mission to "ensure that artificial general intelligence benefits all of humanity".

OpenAI, which is based in San Francisco, California, was founded in 2015 as a non-profit organization. In 2019, it shifted to an unusual capped-profit model, with a board explicitly not accountable to shareholders or investors, including Microsoft. "In the background of Altman's firing is very clearly a conflict between the non-profit and the capped-profit; a conflict of culture and aims," says Jathan Sadowski, a social scientist of technology at Monash University in Melbourne, Australia.

Ilya Sutskever, OpenAI's chief scientist and a member of the board that ousted Altman, this July shifted his focus to "superalignment", a four-year project attempting to ensure that future superintelligences work for the good of humanity.

It's unclear whether Altman and Sutskever are at odds over the speed of development: after the board fired Altman, Sutskever expressed regret about the impacts of his actions and was among the employees who signed the letter threatening to leave unless Altman returned.

With Altman back, OpenAI has reshuffled its board: Sutskever and Helen Toner, a researcher in AI governance and safety at Georgetown University's Center for Security and Emerging Technology in Washington DC, are no longer on the board. The new board members include Bret Taylor, who is on the board of e-commerce platform Shopify and used to lead the software company Salesforce.

It seems likely that OpenAI will shift further from its non-profit origins, says Sadowski, restructuring as a classic profit-driven Silicon Valley tech company.

OpenAI released ChatGPT almost a year ago, catapulting the company to worldwide fame. The bot was based on the company's GPT-3.5 large language model (LLM), which uses the statistical correlations between words in billions of training sentences to generate fluent responses to prompts. The breadth of capabilities that have emerged from this technique (including what some see as logical reasoning) has astounded and worried scientists and the general public alike.

OpenAI is not alone in pursuing large language models, but the release of ChatGPT probably pushed others to deployment: Google launched its chatbot Bard in March 2023, the same month that an updated version of ChatGPT, based on GPT-4, was released. West worries that products are appearing before anyone has a full understanding of their behaviour, uses and misuses, and that this could be detrimental for society.

The competitive landscape for conversational AI is heating up. Google has hinted that more AI products lie ahead. Amazon has its own AI offering, Titan. Smaller companies that aim to compete with ChatGPT include the German effort Aleph Alpha and US-based Anthropic, founded in 2021 by former OpenAI employees, which released the chatbot Claude 2.1 on 21 November. Stability AI and Cohere are other often-cited rivals.

West notes that these start-ups rely heavily on the vast and expensive computing resources provided by just three companies: Google, Microsoft and Amazon, potentially creating a race for dominance between these controlling giants.

Computer scientist Geoffrey Hinton at the University of Toronto in Canada, a pioneer of deep learning, is deeply concerned about the speed of AI development. "If you specify a competition to make a car go as fast as possible, the first thing you do is remove the brakes," he says. (Hinton declined to comment to Nature on the events at OpenAI since 17 November.)

OpenAI was founded with the specific goal of developing artificial general intelligence (AGI): a deep-learning system that's trained not just to be good at one specific thing, but to be as generally smart as a person. It remains unclear whether AGI is even possible. "The jury is very much out on that front," says West. But some are starting to bet on it. Hinton says he used to think AGI would happen on the timescale of 30, 50 or maybe 100 years. "Right now, I think we'll probably get it in 5 to 20 years," he says.

The imminent dangers of AI relate to its use as a tool by human bad actors: people who use it to, for example, create misinformation, commit scams or, potentially, invent new bioterrorism weapons [1]. And because today's AI systems work by finding patterns in existing data, they also tend to reinforce historical biases and social injustices, says West.

In the long term, Hinton and others worry about an AI system itself becoming a bad actor, developing sufficient agency to guide world events in a negative direction. This could arise even if an AGI was designed in line with OpenAI's superalignment mission to promote humanity's best interests, says Hinton. It might decide, for example, that the weight of human suffering is so vast that it would be better for humanity to die than to face further misery. Such statements sound like science fiction, but Hinton argues that the existential threat of an AI that can't be turned off and veers onto a destructive path is very real.

The AI Safety Summit hosted by the United Kingdom in November was designed to get ahead of such concerns. So far, some two dozen nations have agreed to work together on the problem, although what exactly they will do remains unclear.

West emphasizes that it's important to focus on already-present threats from AI ahead of far-flung concerns, and to ensure that existing laws are applied to tech companies developing AI. The events at OpenAI, she says, highlight how just a few companies with the money and computing resources to feed AI wield a lot of power, something she thinks needs more scrutiny from anti-trust regulators. "Regulators for a very long time have taken a very light touch with this market," says West. "We need to start by enforcing the laws we have right now."

Follow this link:

What the OpenAI drama means for AI progress and safety - Nature.com

The fallout from the weirdness at OpenAI – The Economist


Five very weird days passed before it seemed that Sam Altman would stay at OpenAI after all. On November 17th the board of the maker of ChatGPT suddenly booted out its chief executive. On the 19th it looked as if Mr Altman would move to Microsoft, OpenAI's largest investor. But employees at the startup rose up in revolt, with almost all of them, including one of the board's original conspirators, threatening to leave were Mr Altman not reinstated. Between frantic meetings, the top brass tweeted heart emojis and fond messages to each other. By the 21st, things had come full circle.

All this seems stranger still considering that these shenanigans were taking place at the world's hottest startup, which had been expected to reach a valuation of nearly $90bn. In part, the weirdness is a sign of just how quickly the relatively young technology of generative artificial intelligence has been catapulted to glory. But it also holds deeper and more disturbing lessons.

One is the sheer power of AI talent. As the employees threatened to quit, the message "OpenAI is nothing without its people" rang out on social media. Ever since ChatGPT's launch a year ago, demand for AI brains has been white-hot. As chaos reigned, both Microsoft and other tech firms stood ready to welcome disgruntled staff with open arms. That gave both Mr Altman and OpenAI's programmers huge bargaining power and fatally undermined the board's attempts to exert control.

The episode also shines a light on the unusual structure of OpenAI. It was founded in 2015 as a non-profit research lab aimed at safely developing artificial general intelligence (AGI), which can equal or surpass humans in all types of thinking. But it soon became clear that this would require vast amounts of expensive processing power, if it were possible at all. To pay for it, a profit-making subsidiary was set up to sell AI tools, such as ChatGPT. And Microsoft invested $13bn in return for a 49% stake.

On paper, the power remained with the non-profit's board, whose aim is to ensure that AGI benefits everyone, and whose responsibility is accordingly not to shareholders but to humanity. That illusion was shattered as the employees demanded Mr Altman's return, and as the prospect loomed of a rival firm housed within profit-maximising Microsoft.

The chief lesson is the folly of relying solely on corporate structures to police technology. As the potential of generative AI became clear, the contradictions in OpenAI's structure were exposed. A single outfit cannot strike the best balance between advancing AI, attracting talent and investment, assessing AI's threats and keeping humanity safe. Conflicts of interest in Silicon Valley are hardly rare. Even if the people at OpenAI were as brilliant as they think they are, the task would be beyond them.

Much about the board's motives in sacking Mr Altman remains unknown. Even if the directors did genuinely have humanity's interest at heart, they risked seeing investors and employees flock to another firm that would charge ahead with the technology regardless. Nor is it entirely clear what qualifies a handful of private citizens to represent the interests of Earth's remaining 7.9bn inhabitants. As part of Mr Altman's return, a new board is being appointed. It will include Larry Summers, a prominent economist; an executive from Microsoft will probably join him, as may Mr Altman.

Yet personnel changes are not enough: the firm's structure should also be overhauled. Fortunately, in America there is a body that has a much more convincing claim to represent the common interest: the government. By drafting regulation, it can set the boundaries within which companies like OpenAI must operate. And, as a flurry of activity in the past month shows, politicians are watching AI. That is just as well. The technology is too important to be left to the whims of corporate plotters.


Continue reading here:

The fallout from the weirdness at OpenAI - The Economist

How an ‘internet of AIs’ will take artificial intelligence to the next level – Cointelegraph

HyperCycle is a decentralized network that connects AI machines to make them smarter and more profitable. It enables companies of all sizes to participate in the emerging AI computing economy.

Artificial intelligence (AI) is a rapidly evolving field that seems likely to fall into the hands of major companies or organizations with national-scale budgets, since one might think that only they have the massive financial resources to generate the computing power needed to train, and ultimately own, AI.

Recent events at OpenAI, a developer of the AI chatbot ChatGPT, highlight the challenges of centralized AI development. The firing of CEO Sam Altman and the resignation of co-founder Greg Brockman raise questions about governance and decision-making in centralized AI entities, and highlight the need for a more decentralized approach. Balaji Srinivasan, a former chief technology officer at Coinbase, has become a staunch proponent of increased transparency in the realm of AI, advocating for the adoption of decentralized AI systems.

In addition to centralization, there's a lot of fragmentation in the AI space, meaning cutting-edge systems are unable to communicate with one another. Moreover, a high degree of centralization brings considerable security risks and reliability issues. Plus, given the vast amounts of computing power needed, efficiency and speed are key.

To achieve the full potential of an AI that answers to all of humanity, we need a different approach: one that decentralizes AI and allows AI systems to communicate with each other, eliminating the need for intermediaries. This would shorten AI systems' time to market and increase their intelligence and profitability. While many systems are currently specialized for specific tasks, such as voice or facial recognition, a future shift to artificial general intelligence could allow one system to undertake a wide range of tasks simultaneously by delegating them to multiple AIs.

As mentioned above, the AI industry is currently dominated by large corporations and institutional investors, making it difficult for individuals to participate. HyperCycle, a novel ledgerless blockchain architecture, emerges as a transformative solution, aiming to democratize AI by establishing a fast and secure network that empowers everyone, from large enterprises to individuals, to contribute to AI computing.

HyperCycle is powered by a layer 0++ blockchain technology that enables rapid, cost-effective microtransactions between diverse, interconnected AI agents that collectively solve problems.

This "internet of AIs" allows systems to interact and collaborate directly, without intermediaries, addressing the slow, costly processes of the siloed AI landscape.

This is particularly timely, as the number of machine-to-machine (M2M) connections globally is increasing rapidly.

For instance, existing companies could interact with HyperCycle's AIs specializing in IoT, blockchain and supply-chain management to optimize logistics for clients, predict maintenance needs before breakdowns occur and ensure seamless data integrity. By enabling this interconnected ecosystem of decentralized AIs, HyperCycle can improve operational efficiency and spur innovation in service offerings.
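As a purely hypothetical sketch of the delegation pattern described above, one AI agent might pay another a micro-fee to handle a subtask. The Agent class, pay_and_call helper and HyPC-style balances below are invented for this example and are not HyperCycle's actual API.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """Hypothetical AI service on the network; not a HyperCycle type."""
    name: str
    balance: float  # stand-in for a HyPC-style micro-balance

    def serve(self, task: str) -> str:
        # Placeholder for real model inference.
        return f"{self.name} result for {task!r}"

def pay_and_call(caller: Agent, provider: Agent, task: str, fee: float) -> str:
    """Transfer a micro-fee from caller to provider, then delegate the task."""
    if caller.balance < fee:
        raise ValueError("insufficient balance for micropayment")
    caller.balance -= fee
    provider.balance += fee
    return provider.serve(task)

# A planning agent delegates a perception subtask to a vision agent.
vision = Agent("vision-model", balance=0.0)
planner = Agent("planner", balance=1.0)
print(pay_and_call(planner, vision, "classify shipment photo", fee=0.01))
```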

HyperCycle has also partnered with Penguin Digital to create HyperPG, a service that connects all the network beneficiaries together. HyperPG uses Paraguay's abundant hydropower to provide a green and efficient source of energy for AI computing.

One of HyperCycle's key features is the HyperAiBox, a plug-and-play device that lets individuals and organizations perform AI computations at home, reducing their reliance on large corporations with vast data centers. The compact, low-power box is about the size of a modem, has a touchscreen, and allows nodes to be operated from home while compensating network participants for the resources they provide to the network.

The launch of HyperCycle's mainnet, ahead of schedule, highlights the network's rapid growth. Currently, over 59,000 initial nodes are providing Uptime to the network by covering operational expenses, and an additional 230,000 single licenses will soon join the ecosystem. This expansion reflects strong demand for over 295 million HyPC tokens, as well as the network's engagement and growth.

The three key metrics of Uptime, Computation, and Reputation incentivize node operators to maintain high standards, ensuring a stable, secure, and decentralized network environment.
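The article does not say how these three metrics are combined. One plausible reading, sketched below with invented weights and a [0, 1] normalization that HyperCycle has not published, is a simple weighted score per node:

```python
def node_score(uptime: float, computation: float, reputation: float,
               weights: tuple = (0.4, 0.4, 0.2)) -> float:
    """Combine Uptime, Computation and Reputation into one node score.

    Assumes each metric is normalized to [0, 1]; the weights are
    illustrative guesses, not HyperCycle's published formula.
    """
    for metric in (uptime, computation, reputation):
        if not 0.0 <= metric <= 1.0:
            raise ValueError("metrics must be normalized to [0, 1]")
    w_u, w_c, w_r = weights
    return w_u * uptime + w_c * computation + w_r * reputation

# A node online 99% of the time, with moderate compute contribution
# and a strong track record:
print(node_score(uptime=0.99, computation=0.6, reputation=0.9))  # 0.816
```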

HyperCycle's network has been operational since June 2023, scaling up as demand increases. Source: HyperCycle

AI remains at a nascent stage, but HyperCycle's goal is to anticipate the challenges that might stand in this technology's way and to break down barriers to entry, making AI more accessible and affordable to everyone.

Disclaimer. Cointelegraph does not endorse any content or product on this page. While we aim to provide you with all the important information we could obtain in this sponsored article, readers should do their own research before taking any action related to the company and carry full responsibility for their decisions; nor can this article be considered investment advice.

Read more:

How an 'internet of AIs' will take artificial intelligence to the next level - Cointelegraph

OpenAI Is Seeking Additional Investment in Artificial General … – AiThority

OpenAI is seeking the support of its most significant benefactor

Technical advancements are becoming increasingly vital in determining the course of B2B payments. Critical areas of innovation within AP and AR processes include supporting businesses with advanced delivery models that incorporate a variety of payment methods, including card-not-present transactions, electronic invoices and omnichannel experiences, as well as addressing the perennial B2B frictions inherent in cross-border payments.

However, Weiner noted that in B2B, security and certainty of payments are becoming more important than payment speed. As a result, real-time payments and ACH are becoming more appealing than paper checks. And despite the continued prevalence of net terms in payments for small to medium-sized businesses (SMBs) and mid-market business-to-business (B2B), innovation is producing alternatives such as dynamic payment terms and pricing models.

In an interview, Sam Altman, the chief executive officer of the artificial intelligence (AI) firm, revealed his intention to obtain further financial support from Microsoft, which has already committed $10 billion to finance AGI, software designed to emulate human intelligence. Altman stated that his company's collaboration with Microsoft and its CEO Satya Nadella was extremely fruitful, and that he anticipated raising substantially more over time from Microsoft and other investors to cover the expenses associated with developing more complex AI models. When asked whether Microsoft would persist, Altman responded, "I certainly hope so." "There is still much computing to develop between now and AGI," he continued. "Training costs are simply enormous." He made these remarks following last week's Developers Day, where OpenAI unveiled a marketplace showcasing its finest applications, a suite of new tools and enhancements to GPT-4, and a revenue-sharing model with the most popular GPT creators.

In the interim, PYMNTS has recently examined the obstacles the government faces in its efforts to regulate AI. Among the most urgent matters are understanding how the technology operates and acquiring the expertise required to supervise it.

In contrast to historical AI implementations such as machine learning and predictive forecasting, which have become ubiquitous in various aspects of daily life, generative AI capabilities introduce a novel approach to automating and producing outputs in domains such as investment research, risk management, trading, and fraud detection.


Additionally, recognizing the intricacy of ostensibly straightforward matters can yield advantageous outcomes in the long run. It is also worth noting that the priorities of organizations operating in the B2B payments sector are influenced by macroeconomic factors, especially considering the current prolonged economic expansion. A growing number of developments in the payments industry are aligning with these priorities.

In addition, organizations are progressively seeking vendor consolidation as a means to mitigate overall risk by restricting the number of technology vendors that interact with their ecosystem, according to Weiner. Furthermore, he noted that CTOs and CFOs are collaborating more frequently on B2B transformations. The advent of digital payments has resulted in enhanced transparency and instantaneous understanding of financial activities. Weiner, on the other hand, believes that while real-time payments offer efficiency and security benefits, they may not be a game-changer in B2B payments, where the majority of transactions are conducted on net terms.



Read more here:

OpenAI Is Seeking Additional Investment in Artificial General ... - AiThority