Archive for the ‘Artificial General Intelligence’ Category

What the OpenAI drama means for AI progress and safety – Nature.com

OpenAI fired its charismatic chief executive, Sam Altman, on 17 November but has now reinstated him. Credit: Justin Sullivan/Getty

OpenAI, the company behind the blockbuster artificial intelligence (AI) bot ChatGPT, has been consumed by frenzied changes for almost a week. On 17 November, the company fired its charismatic chief executive, Sam Altman. Five days, and much drama, later, OpenAI announced that Altman would return with an overhaul of the company's board.

The debacle has thrown the spotlight on an ongoing debate about how commercial competition is shaping the development of AI systems, and how quickly AI can be deployed ethically and safely.

"The push to retain dominance is leading to toxic competition. It's a race to the bottom," says Sarah Myers West, managing director of the AI Now Institute, a policy-research organization based in New York City.

Altman, a successful investor and entrepreneur, was a co-founder of OpenAI and its public face. He had been chief executive since 2019, and oversaw an investment of some US$13 billion from Microsoft. After Altman's initial ousting, Microsoft, which uses OpenAI technology to power its search engine Bing, offered Altman a job leading a new advanced AI research team. Altman's return to OpenAI came after hundreds of company employees signed a letter threatening to follow Altman to Microsoft unless he was reinstated.

The OpenAI board that ousted Altman last week did not give detailed reasons for the decision, saying at first that he was fired because he was "not consistently candid in his communications with the board", and later adding that the decision had nothing to do with "malfeasance or anything related to our financial, business, safety or security/privacy practice".

But some speculate that the firing might have its origins in a reported schism at OpenAI between those focused on commercial growth and those uncomfortable with the strain of rapid development and its possible impacts on the company's mission to "ensure that artificial general intelligence benefits all of humanity".

OpenAI, which is based in San Francisco, California, was founded in 2015 as a non-profit organization. In 2019, it shifted to an unusual "capped-profit" model, with a board explicitly not accountable to shareholders or investors, including Microsoft. "In the background of Altman's firing is very clearly a conflict between the non-profit and the capped-profit; a conflict of culture and aims," says Jathan Sadowski, a social scientist of technology at Monash University in Melbourne, Australia.

Ilya Sutskever, OpenAI's chief scientist and a member of the board that ousted Altman, this July shifted his focus to "superalignment", a four-year project attempting to ensure that future superintelligences work for the good of humanity.

It's unclear whether Altman and Sutskever are at odds about speed of development: after the board fired Altman, Sutskever expressed regret about the impacts of his actions and was among the employees who signed the letter threatening to leave unless Altman returned.

With Altman back, OpenAI has reshuffled its board: Sutskever and Helen Toner, a researcher in AI governance and safety at Georgetown University's Center for Security and Emerging Technology in Washington DC, are no longer on the board. The new board members include Bret Taylor, who is on the board of e-commerce platform Shopify and used to lead the software company Salesforce.

It seems likely that OpenAI will shift further from its non-profit origins, says Sadowski, restructuring as a classic profit-driven Silicon Valley tech company.

OpenAI released ChatGPT almost a year ago, catapulting the company to worldwide fame. The bot was based on the company's GPT-3.5 large language model (LLM), which uses the statistical correlations between words in billions of training sentences to generate fluent responses to prompts. The breadth of capabilities that have emerged from this technique (including what some see as logical reasoning) has astounded and worried scientists and the general public alike.
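
As a toy illustration of that statistical mechanism (a simple bigram word model, nothing like GPT's actual neural architecture or scale), the short Python sketch below counts which words follow which in a tiny "training corpus" and then samples fluent-looking text from those counts:

    import random
    from collections import defaultdict

    # Toy bigram model: count how often each word follows another in a tiny
    # training corpus, then generate text by sampling from those counts.
    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def next_word(prev):
        # Sample the next word in proportion to how often it followed `prev`.
        options = counts.get(prev)
        if not options:          # dead end: `prev` never appeared mid-corpus
            return None
        words = list(options)
        weights = list(options.values())
        return random.choices(words, weights=weights)[0]

    word, output = "the", ["the"]
    for _ in range(8):
        word = next_word(word)
        if word is None:
            break
        output.append(word)
    print(" ".join(output))      # e.g. "the cat sat on the mat and the cat"

Real LLMs replace the lookup table with a neural network over subword tokens and billions of documents, which is what lets richer capabilities emerge, but the generate-by-predicting-the-next-token loop is the same.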

OpenAI is not alone in pursuing large language models, but the release of ChatGPT probably pushed others to deployment: Google launched its chatbot Bard in March 2023, the same month that an updated version of ChatGPT, based on GPT-4, was released. West worries that products are appearing before anyone has a full understanding of their behaviour, uses and misuses, and that this could be detrimental for society.

The competitive landscape for conversational AI is heating up. Google has hinted that more AI products lie ahead. Amazon has its own AI offering, Titan. Smaller companies that aim to compete with ChatGPT include the German effort Aleph Alpha and US-based Anthropic, founded in 2021 by former OpenAI employees, which released the chatbot Claude 2.1 on 21 November. Stability AI and Cohere are other often-cited rivals.

West notes that these start-ups rely heavily on the vast and expensive computing resources provided by just three companies (Google, Microsoft and Amazon), potentially creating a race for dominance between these controlling giants.

Computer scientist Geoffrey Hinton at the University of Toronto in Canada, a pioneer of deep learning, is deeply concerned about the speed of AI development. "If you specify a competition to make a car go as fast as possible, the first thing you do is remove the brakes," he says. (Hinton declined to comment to Nature on the events at OpenAI since 17 November.)

OpenAI was founded with the specific goal of developing an artificial general intelligence (AGI): a deep-learning system that's trained not just to be good at one specific thing, but to be as generally smart as a person. It remains unclear whether AGI is even possible. "The jury is very much out on that front," says West. But some are starting to bet on it. Hinton says he used to think AGI would happen on the timescale of 30, 50 or maybe 100 years. "Right now, I think we'll probably get it in 5 to 20 years," he says.

The imminent dangers of AI are related to it being used as a tool by human bad actors: people who use it to, for example, create misinformation, commit scams or, potentially, invent new bioterrorism weapons [1]. And because today's AI systems work by finding patterns in existing data, they also tend to reinforce historical biases and social injustices, says West.

In the long term, Hinton and others worry about an AI system itself becoming a bad actor, developing sufficient agency to guide world events in a negative direction. This could arise even if an AGI was designed in line with OpenAI's superalignment mission to promote humanity's best interests, says Hinton. It might decide, for example, that the weight of human suffering is so vast that it would be better for humanity to die than to face further misery. Such statements sound like science fiction, but Hinton argues that the existential threat of an AI that can't be turned off and veers onto a destructive path is very real.

The AI Safety Summit hosted by the United Kingdom in November was designed to get ahead of such concerns. So far, some two dozen nations have agreed to work together on the problem, although what exactly they will do remains unclear.

West emphasizes that it's important to focus on already-present threats from AI ahead of far-flung concerns, and to ensure that existing laws are applied to tech companies developing AI. The events at OpenAI, she says, highlight how just a few companies with the money and computing resources to feed AI wield a lot of power, something she thinks needs more scrutiny from anti-trust regulators. "Regulators for a very long time have taken a very light touch with this market," says West. "We need to start by enforcing the laws we have right now."

The fallout from the weirdness at OpenAI – The Economist

Five very weird days passed before it seemed that Sam Altman would stay at OpenAI after all. On November 17th the board of the maker of ChatGPT suddenly booted out its chief executive. On the 19th it looked as if Mr Altman would move to Microsoft, OpenAI's largest investor. But employees at the startup rose up in revolt, with almost all of them, including one of the board's original conspirators, threatening to leave were Mr Altman not reinstated. Between frantic meetings, the top brass tweeted heart emojis and fond messages to each other. By the 21st, things had come full circle.

All this seems stranger still considering that these shenanigans were taking place at the world's hottest startup, which had been expected to reach a valuation of nearly $90bn. In part, the weirdness is a sign of just how quickly the relatively young technology of generative artificial intelligence has been catapulted to glory. But it also holds deeper and more disturbing lessons.

One is the sheer power of AI talent. As the employees threatened to quit, the message "OpenAI is nothing without its people" rang out on social media. Ever since ChatGPT's launch a year ago, demand for AI brains has been white-hot. As chaos reigned, both Microsoft and other tech firms stood ready to welcome disgruntled staff with open arms. That gave both Mr Altman and OpenAI's programmers huge bargaining power and fatally undermined the board's attempts to exert control.

The episode also shines a light on the unusual structure of OpenAI. It was founded in 2015 as a non-profit research lab aimed at safely developing artificial general intelligence (AGI), which can equal or surpass humans in all types of thinking. But it soon became clear that this would require vast amounts of expensive processing power, if it were possible at all. To pay for it, a profit-making subsidiary was set up to sell AI tools, such as ChatGPT. And Microsoft invested $13bn in return for a 49% stake.

On paper, the power remained with the non-profit's board, whose aim is to ensure that AGI benefits everyone, and whose responsibility is accordingly not to shareholders but to humanity. That illusion was shattered as the employees demanded Mr Altman's return, and as the prospect loomed of a rival firm housed within profit-maximising Microsoft.

The chief lesson is the folly of solely relying on corporate structures to police technology. As the potential of generative AI became clear, the contradictions in OpenAI's structure were exposed. A single outfit cannot strike the best balance between advancing AI, attracting talent and investment, assessing AI's threats and keeping humanity safe. Conflicts of interest in Silicon Valley are hardly rare. Even if the people at OpenAI were as brilliant as they think they are, the task would be beyond them.

Much about the board's motives in sacking Mr Altman remains unknown. Even if the directors did genuinely have humanity's interest at heart, they risked seeing investors and employees flock to another firm that would charge ahead with the technology regardless. Nor is it entirely clear what qualifies a handful of private citizens to represent the interests of Earth's remaining 7.9bn inhabitants. As part of Mr Altman's return, a new board is being appointed. It will include Larry Summers, a prominent economist; an executive from Microsoft will probably join him, as may Mr Altman.

Yet personnel changes are not enough: the firm's structure should also be overhauled. Fortunately, in America there is a body that has a much more convincing claim to represent the common interest: the government. By drafting regulation, it can set the boundaries within which companies like OpenAI must operate. And, as a flurry of activity in the past month shows, politicians are watching AI. That is just as well. The technology is too important to be left to the whims of corporate plotters.

How an ‘internet of AIs’ will take artificial intelligence to the next level – Cointelegraph

HyperCycle is a decentralized network that connects AI machines to make them smarter and more profitable. It enables companies of all sizes to participate in the emerging AI computing economy.

Artificial intelligence (AI) is a rapidly evolving field that seems likely to fall into the hands of major companies or organizations with nationally driven budgets. One might think that only these have the massive financial resources to generate the computing power to train and ultimately own AI.

Recent events at OpenAI, a developer of the AI chatbot ChatGPT, highlight the challenges of centralized AI development. The firing of CEO Sam Altman and the resignation of co-founder Greg Brockman raise questions about governance and decision-making in centralized AI entities and highlight the need for a more decentralized approach. Balaji Srinivasan, a former chief technology officer at Coinbase, has become a staunch proponent of increased transparency in the realm of AI, advocating for the adoption of decentralized AI systems.

In addition to centralization, there's a lot of fragmentation in the AI space, meaning cutting-edge systems are unable to communicate with one another. Moreover, a high degree of centralization brings considerable security risks and reliability issues. Plus, given the vast amounts of computing power needed, efficiency and speed are key.

To achieve the full potential of AI that answers to all of humanity, we need a different approach: one that decentralizes AI and allows AI systems to communicate with each other, eliminating the need for intermediaries. This would improve AI systems' time to market, intelligence and profitability. While many systems are currently specialized in specific tasks, such as voice or facial recognition, a future shift to artificial general intelligence could allow one system to undertake a wide range of tasks simultaneously by delegating those tasks to multiple AIs.
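
As a purely hypothetical sketch of that delegation idea, not HyperCycle's actual protocol or API (every name below, from SpecialistAgent to Ledger, is invented for illustration), the Python toy below shows a coordinator routing subtasks to specialist agents and settling each job with a micropayment on a shared ledger:

    from dataclasses import dataclass, field

    # Hypothetical toy, not HyperCycle's real API: a coordinator delegates
    # subtasks to specialist AI agents and pays each one directly, with no
    # intermediary, via a shared micropayment ledger.
    @dataclass
    class Ledger:
        balances: dict = field(default_factory=dict)

        def pay(self, payer, payee, amount):
            self.balances[payer] = self.balances.get(payer, 0.0) - amount
            self.balances[payee] = self.balances.get(payee, 0.0) + amount

    @dataclass
    class SpecialistAgent:
        name: str
        skill: str   # the one task this agent is good at
        fee: float   # price per subtask, in some token

    def delegate(subtasks, agents, ledger, requester="coordinator"):
        # Route each subtask to the cheapest agent advertising the right
        # skill, paying its fee on completion.
        results = []
        for skill, task in subtasks:
            agent = min((a for a in agents if a.skill == skill),
                        key=lambda a: a.fee)
            results.append(f"{agent.name} handled '{task}'")
            ledger.pay(requester, agent.name, agent.fee)
        return results

    ledger = Ledger()
    agents = [SpecialistAgent("vision-1", "vision", 0.02),
              SpecialistAgent("speech-1", "speech", 0.01),
              SpecialistAgent("speech-2", "speech", 0.008)]
    print(delegate([("vision", "read a label"), ("speech", "transcribe a call")],
                   agents, ledger))
    print(ledger.balances)   # coordinator debited, specialists credited

A real decentralized network would replace the in-memory ledger with verifiable on-chain settlement and the skill lookup with open discovery, but the economics (many small agents paid per subtask) are the point being illustrated.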

As mentioned above, the AI industry is currently dominated by large corporations and institutional investors, making it difficult for individuals to participate. HyperCycle, a novel ledgerless blockchain architecture, emerges as a transformative solution, aiming to democratize AI by establishing a fast and secure network that empowers everyone, from large enterprises to individuals, to contribute to AI computing.

HyperCycle is powered by layer 0++ blockchain technology that enables rapid, cost-effective microtransactions between diverse AI agents, which interconnect to solve problems collectively.

This "internet of AIs" allows systems to interact and collaborate directly, without intermediaries, addressing the slow, costly processes of today's siloed AI landscape.

This is particularly timely, as the number of machine-to-machine (M2M) connections globally is increasing rapidly.

For instance, existing companies could interact with HyperCycle's AIs specializing in IoT, blockchain, and supply chain management to optimize logistics for clients, predict maintenance before breakdowns occur, and ensure seamless data integrity. By enabling this interconnected ecosystem of decentralized AIs, HyperCycle can lead to operational efficiency and innovation in service offerings.

HyperCycle has also partnered with Penguin Digital to create HyperPG, a service that connects all the network beneficiaries together. HyperPG uses Paraguay's abundant hydropower to provide a green and efficient source of energy for AI computing.

One of HyperCycle's key features is the HyperAiBox, a plug-and-play device that allows individuals and organizations to perform AI computations at home and reduces their reliance on large corporations with vast data centers. The compact box is about the size of a modem, has a touchscreen, and allows nodes to be operated from home and network participants to be compensated for the resources they provide to the network. It is also a low-power solution.

The launch of HyperCycle's mainnet, ahead of schedule, highlights the network's rapid growth. Currently, over 59,000 initial nodes are providing Uptime to the network by covering operational expenses. An additional 230,000 single licenses will soon join the ecosystem. This expansion indicates a strong demand for over 295 million HyPC tokens, reflecting the network's engagement and growth.

The three key metrics of Uptime, Computation, and Reputation incentivize node operators to maintain high standards, ensuring a stable, secure, and decentralized network environment.

Since June 2023, HyperCycle's network has been operational, scaling up as demand increases. Source: HyperCycle

AI remains at a nascent stage, but HyperCycle's goal is to anticipate the challenges that might stand in this technology's way and break down barriers to entry, making AI more accessible and affordable to everyone.

Disclaimer. Cointelegraph does not endorse any content or product on this page. While we aim to provide you with all the important information we could obtain in this sponsored article, readers should do their own research before taking any action related to the company and carry full responsibility for their decisions; nor can this article be considered investment advice.

OpenAI Is Seeking Additional Investment in Artificial General … – AiThority

OpenAI is seeking the support of its most significant benefactor

Technical advancements are becoming increasingly vital in determining the course of B2B payments. Supporting businesses with advanced delivery models that incorporate a variety of payment methods, including card-not-present transactions, electronic invoices, and omnichannel experiences, in addition to addressing the perennial B2B frictions inherent in cross-border payments, is a critical area of innovation within AP and AR processes.

However, he noted that in B2B, security and certainty of payments are becoming more important than payment speed. As a result, real-time payments and ACH are becoming more appealing than paper checks. And despite the continued prevalence of net terms in payments for small to medium-sized businesses (SMBs) and mid-market business-to-business (B2B) transactions, innovation is producing alternatives such as dynamic payment terms and pricing models.

In an interview, Sam Altman, the chief executive officer of the artificial intelligence (AI) firm, revealed his intentions to obtain further financial support from Microsoft. Microsoft has already committed $10 billion to finance AGI, software designed to emulate human intelligence. Altman stated that his company's collaboration with Microsoft and its CEO Satya Nadella was extremely fruitful and that he anticipated raising a substantial amount more over time from Microsoft and other investors to cover the expenses associated with developing more complex AI models. When asked whether Microsoft would persist, Altman responded, "I certainly hope so." "There is still much computing to develop between now and AGI," he continued. "Training costs are simply enormous." He made these remarks following last week's Developers Day, where OpenAI unveiled a marketplace showcasing its finest applications and a suite of new tools and enhancements to GPT-4, as well as a revenue-sharing model with the most popular GPT creators.

In the interim, PYMNTS has recently examined the obstacles the government faces in its efforts to regulate AI. Comprehension of the technology's operation and acquisition of the requisite expertise to supervise it are among the most urgent matters.

In contrast to historical AI implementations such as machine learning and predictive forecasting, which have become ubiquitous in various aspects of daily life, generative AI capabilities introduce a novel approach to automating and producing outputs in domains such as investment research, risk management, trading, and fraud detection.

Additionally, recognizing the intricacy of ostensibly straightforward matters can yield advantageous outcomes in the long run. It is also worth noting that the priorities of organizations operating in the B2B payments sector are influenced by macroeconomic factors, especially considering the current prolonged economic expansion. A growing number of developments in the payments industry are conforming to these priorities.

In addition, organizations are progressively seeking vendor consolidation as a means to mitigate overall risk by restricting the number of technology vendors that interact with their ecosystem, according to Weiner. Furthermore, he noted that CTOs and CFOs are collaborating more frequently on B2B transformations. The advent of digital payments has resulted in enhanced transparency and instantaneous understanding of financial activities. Weiner, on the other hand, believes that while real-time payments offer efficiency and security benefits, they may not be a game-changer in B2B payments, where the majority of transactions are conducted on net terms.

Top AI researcher launches new Alberta lab with Huawei funds after … – The Globe and Mail

Richard Sutton, a computer scientist and well-known AI researcher, at home in Edmonton, Alta., on Nov. 23. Prof. Sutton is launching a new AI research institute in Edmonton with funding from Huawei. Amber Bracken/The Globe and Mail

One of the country's most accomplished artificial intelligence researchers is launching a new non-profit lab with $4.8-million in funding from Huawei Canada, after the federal government restricted the Chinese company's ability to work with publicly funded universities.

Richard Sutton, a professor at the University of Alberta and a pioneer in the field of reinforcement learning, says the Openmind Research Institute will fund researchers following the Alberta Plan, a 12-step guide he co-authored last year that lays out a framework for pursuing the development of AI agents capable of human-level intelligence.

Openmind will be based in Edmonton and kicks off Friday with a weekend retreat in Banff.

Canada banned the use of equipment from Huawei in 5G networks last year, citing the company as a security risk because of its connections to the Chinese government, which could use the company for espionage. Huawei has long denied the accusation.

Jim Hinton, a Waterloo, Ont.-based patent lawyer and senior fellow at the Centre for International Governance Innovation, said Huawei's involvement with Openmind raises concerns. "Even if the money is coming with as little strings attached as possible, there is still soft power that is being wielded," he said. "The fact that they're holding the purse strings gives a significant amount of control."

In 2021, Ottawa started restricting funding for research collaborations between publicly funded universities and entities with links to countries considered national security risks, including China. Alberta has implemented similar restrictions for sensitive research at a provincial level. Artificial intelligence is particularly sensitive because the technology has military applications and can be used for nefarious purposes.

"I hope that it could counter that narrative and be an example of how things could be really good," Prof. Sutton said of Openmind and Huawei's funding. "This is a case where the interaction with China has been really productive, really valuable in contributing to open AI research in Canada."

All of the work done by Openmind, which is separate from Prof. Sutton's role at the University of Alberta, will be open-source, and the institute will not pursue intellectual property rights.

Nor will Huawei. "I was a little bit surprised that they were willing to do something so open and with no attempt at control," said Prof. Sutton, who has a long-standing relationship with Huawei in Alberta.

Huawei did not respond to requests for comment.

Although the Chinese company has been shut out of 5G networks and restricted in working with universities in Canada, it can still work directly with individual researchers.

"Companies linked to China's military, like Huawei is, will try to find other ways around the federal rules, including directly funding researchers outside university institutions. It appears Huawei is doing exactly that," said Margaret McCuaig-Johnston, a senior fellow at the Institute for Science, Society and Policy at the University of Ottawa. "China pushes the envelope as far as they can."

Prof. Sutton literally wrote the textbook on reinforcement learning, an approach to developing AI agents capable of performing actions in an environment to achieve a goal. Reinforcement learning is everywhere in the world of AI, including in autonomous vehicles and in how chatbots such as ChatGPT are polished to sound more human.
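
For readers unfamiliar with the technique, here is a minimal toy sketch (not code from Prof. Sutton's lab) of tabular Q-learning, one classic reinforcement-learning algorithm: an agent on a five-cell corridor learns, purely from trial, error and reward, to walk toward a goal:

    import random

    # Toy tabular Q-learning: an agent on a 1-D corridor (states 0..4) learns
    # from reward alone to reach the goal at state 4. No model of the world;
    # just actions, observed rewards, and value updates.
    N_STATES, GOAL = 5, 4
    ACTIONS = [-1, +1]                     # step left or step right
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

    for episode in range(500):
        s = 0
        while s != GOAL:
            # Epsilon-greedy: usually exploit the best-known action,
            # occasionally explore a random one.
            if random.random() < epsilon:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)   # environment transition
            r = 1.0 if s2 == GOAL else 0.0          # reward only at the goal
            best_next = max(Q[(s2, act)] for act in ACTIONS)
            # Q-learning update: nudge Q toward reward + discounted future value.
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2

    # The learned policy steps right (+1) from every state toward the goal.
    print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)})

The same learn-from-reward loop, scaled up with neural networks and human preference judgments as the reward signal, is how chatbots are "polished" after their initial training.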

Born in the United States, Prof. Sutton completed a PhD at the University of Massachusetts in 1984 and worked in industry before returning to academia. He joined the University of Alberta in 2003, where he founded the Reinforcement Learning and Artificial Intelligence Lab. He left the U.S. for Canada partly because of his opposition to the politics of former president George W. Bush and the countrys military campaigns abroad.

Alphabet Inc. tapped him in 2017 to lead the company's AI research office in Edmonton through its DeepMind subsidiary, but shut it down in January as part of a company-wide restructuring.

The closing left Prof. Sutton with unfinished business, in a sense. His goal is to "understand intelligence," as he puts it, a necessary undertaking if we are to build truly intelligent agents. His work at the university is one avenue to pursue that goal, as is his recent post with Keen Technologies, a U.S. AI startup founded by former Meta Platforms Inc. consulting chief technology officer John Carmack. Keen raised US$20-million last year, including from Shopify founder Tobi Lütke.

Openmind is one more way to pursue that goal, Prof. Sutton said. Although large language models, which power chatbots like ChatGPT, have garnered a lot of attention, he isn't particularly interested in them. "It's a good, useful thing, but it's kind of a distraction," he said.

He is far more interested in building AI applications capable of complex decision-making and achieving goals, which many refer to as artificial general intelligence, or AGI. "I imagine machines doing all the different kinds of things that people do," he said. "They will interact and find, just like people do, that the best way to get ahead is to work with other people."

Prof. Sutton will sit on the Openmind governing board along with University of Alberta computer science professor Randy Goebel and Joseph Modayil, who previously worked at DeepMind. Mr. Modayil is also Openmind's research director.

"Understanding the mind is a grand scientific challenge that has driven my work for more than two decades," he said in an e-mail.

A committee that includes Alberta Plan co-authors and U of A professors Michael Bowling and Patrick Pilarski will select the research fellows. Openmind's research agenda will be set independently from its funding sources, according to a backgrounder on the institute provided by Prof. Sutton.

The briefing also notes that Openmind researchers will be "natural candidates for founding startups and commercializing research" outside the non-profit. Although there may be no legal obligation for an Openmind researcher to work with Openmind donors, "familiarity, trust, and consilient perspectives would make this a likely outcome," according to the backgrounder.

The backing from Huawei puts the company in a better position to work with Openmind talent, Mr. Hinton said. Even though the research will be open-source, foreign multinational companies such as Huawei are often better equipped to capitalize on it than Canadian firms, which have a poor track record of protecting intellectual property and capturing the economic benefits that come with innovation.

Canadian governments review transactions involving foreign companies and physical assets, such as mines, to ensure the domestic economy benefits. But they fall short with IP. "When it comes to intangible assets, we don't understand how that works," Mr. Hinton said.

Prof. Sutton is a big proponent of open-source and has a dim view of IP, saying that the focus on ownership can slow down innovation. "You are interacting with lawyers and spending a lot of time and money on things that aren't advancing the research," he said. "It just doesn't seem like it's worked at all for computer science IP."

He is open to more funding for Openmind and said that if donors are uncomfortable with Huawei's involvement, they can also support AI research through the reinforcement learning lab at the University of Alberta. Openmind is adamant that Huawei cannot influence the non-profit's research, he added, and said he would decline further funding if the company attempted to do so.

"I see this as a purely positive and mutually beneficial way for Huawei and academic researchers to interact," he said. "It may not last, but while it does, it is entirely a good thing."
