Archive for the ‘Artificial Intelligence’ Category

Is Google’s Gemini the real start of the artificial intelligence boom? – CBS News


Read the original post:
Is Google's Gemini the real start of the artificial intelligence boom? - CBS News


3 Artificial Intelligence (AI) Stocks That Could Make You Rich – The Motley Fool


See the rest here:
3 Artificial Intelligence (AI) Stocks That Could Make You Rich - The Motley Fool


Inside OpenAI’s Crisis Over the Future of Artificial Intelligence – The New York Times

Around noon on Nov. 17, Sam Altman, the chief executive of OpenAI, logged into a video call from a luxury hotel in Las Vegas. He was in the city for its inaugural Formula 1 race, which had drawn 315,000 visitors including Rihanna and Kylie Minogue.

Mr. Altman, who had parlayed the success of OpenAI's ChatGPT chatbot into personal stardom beyond the tech world, had a meeting lined up that day with Ilya Sutskever, the chief scientist of the artificial intelligence start-up. But when the call started, Mr. Altman saw that Dr. Sutskever was not alone: he was virtually flanked by OpenAI's three independent board members.

Instantly, Mr. Altman knew something was wrong.

Unbeknownst to Mr. Altman, Dr. Sutskever and the three board members had been whispering behind his back for months. They believed Mr. Altman had been dishonest and should no longer lead a company that was driving the A.I. race. On a hush-hush 15-minute video call the previous afternoon, the board members had voted one by one to push Mr. Altman out of OpenAI.

Now they were delivering the news. Shocked that he was being fired from a start-up he had helped found, Mr. Altman widened his eyes and then asked, "How can I help?" The board members urged him to support an interim chief executive. He assured them that he would.

Within hours, Mr. Altman changed his mind and declared war on OpenAI's board.

His ouster was the culmination of years of simmering tensions at OpenAI that pitted those alarmed by A.I.'s power against others who saw the technology as a once-in-a-lifetime profit and prestige bonanza. As divisions deepened, the organization's leaders sniped and turned on one another. That led to a boardroom brawl that ultimately showed who has the upper hand in A.I.'s future development: Silicon Valley's tech elite and deep-pocketed corporate interests.

The drama embroiled Microsoft, which had committed $13 billion to OpenAI and weighed in to protect its investment. Many top Silicon Valley executives and investors, including the chief executive of Airbnb, also mobilized to support Mr. Altman.

Some fought back from Mr. Altman's $27 million mansion in San Francisco's Russian Hill neighborhood, lobbying through social media and voicing their displeasure in private text threads, according to interviews with more than 25 people with knowledge of the events. Many of their conversations and the details of their confrontations have not been previously reported.

At the center of the storm was Mr. Altman, a 38-year-old multimillionaire. A vegetarian who raises cattle and a tech leader with little engineering training, he is driven by a hunger for power more than by money, a longtime mentor said. And even as Mr. Altman became A.I.'s public face, charming heads of state with predictions of the technology's positive effects, he privately angered those who believed he ignored its potential dangers.

OpenAI's chaos has raised new questions about the people and companies behind the A.I. revolution. If the world's premier A.I. start-up can so easily plunge into crisis over backbiting behavior and slippery ideas of wrongdoing, can it be trusted to advance a technology that may have untold effects on billions of people?

"OpenAI's aura of invulnerability has been shaken," said Andrew Ng, a Stanford professor who helped found the A.I. labs at Google and the Chinese tech giant Baidu.

From the moment it was created in 2015, OpenAI was primed to combust.

The San Francisco lab was founded by Elon Musk, Mr. Altman, Dr. Sutskever and nine others. Its goal was to build A.I. systems to benefit all of humanity. Unlike most tech start-ups, it was established as a nonprofit with a board that was responsible for making sure it fulfilled that mission.

The board was stacked with people who had competing A.I. philosophies. On one side were those who worried about A.I.'s dangers, like Mr. Musk, who left OpenAI in a huff in 2018. On the other were Mr. Altman and those focused more on the technology's potential benefits.

In 2019, Mr. Altman, who had extensive contacts in Silicon Valley as president of the start-up incubator Y Combinator, became OpenAI's chief executive. He would own just a tiny stake in the start-up.

"Why is he working on something that won't make him richer? One answer is that lots of people do that once they have enough money, which Sam probably does," said Paul Graham, a founder of Y Combinator and Mr. Altman's mentor. "The other is that he likes power."

Mr. Altman quickly changed OpenAI's direction by creating a for-profit subsidiary and raising $1 billion from Microsoft, spurring questions about how that would work with the board's mission of safe A.I.

Earlier this year, departures shrank OpenAI's board to six people from nine. Three of them (Mr. Altman, Dr. Sutskever and Greg Brockman, OpenAI's president) were founders of the lab. The others were independent members.

Helen Toner, a director of strategy at Georgetown University's Center for Security and Emerging Technology, was part of the "effective altruist" community that believes A.I. could one day destroy humanity. Adam D'Angelo had long worked with A.I. as the chief executive of the question-and-answer website Quora. Tasha McCauley, an adjunct scientist at the RAND Corporation, had worked on tech and A.I. policy and governance issues and taught at Singularity University, which was named for the moment when machines can no longer be controlled by their creators.

They were united by a concern that A.I. could become more intelligent than humans.

After OpenAI introduced ChatGPT last year, the board became jumpier.

As millions of people used the chatbot to write love letters and brainstorm college essays, Mr. Altman embraced the spotlight. He appeared with Satya Nadella, Microsoft's chief executive, at tech events. He met President Biden and embarked on a 21-city global tour, hobnobbing with leaders like Prime Minister Narendra Modi of India.

Yet as Mr. Altman raised OpenAI's profile, some board members worried that ChatGPT's success was antithetical to creating safe A.I., two people familiar with their thinking said.

Their concerns were compounded when they clashed with Mr. Altman in recent months over who should fill the board's three open seats.

In September, Mr. Altman met investors in the Middle East to discuss an A.I. chip project. The board was concerned that he wasn't sharing all his plans with it, three people familiar with the matter said.

Dr. Sutskever, 37, who helped pioneer modern A.I., was especially disgruntled. He had become fearful that the technology could wipe out humanity. He also believed that Mr. Altman was bad-mouthing the board to OpenAI executives, two people with knowledge of the situation said. Other employees have also complained to the board about Mr. Altman's behavior.

In October, Mr. Altman promoted another OpenAI researcher to the same level as Dr. Sutskever, who saw it as a slight. Dr. Sutskever told several board members that he might quit, two people with knowledge of the matter said. The board interpreted the move as an ultimatum to choose between him and Mr. Altman, the people said.

Dr. Sutskever's lawyer said it was "categorically false" that he had threatened to quit.

Another conflict erupted in October when Ms. Toner published a paper, "Decoding Intentions: Artificial Intelligence and Costly Signals," at her Georgetown think tank. In it, she and her co-authors praised Anthropic, an OpenAI rival, for delaying a product release and avoiding the "frantic corner-cutting" that the release of ChatGPT appeared to spur.

Mr. Altman was displeased, especially since the Federal Trade Commission had begun investigating OpenAI's data collection. He called Ms. Toner, saying her paper could cause problems.

The paper was merely academic, Ms. Toner said, offering to write an apology to OpenAI's board. Mr. Altman accepted. He later emailed OpenAI's executives, telling them that he had reprimanded Ms. Toner.

"I did not feel we're on the same page on the damage of all this," he wrote.

Mr. Altman called other board members and said Ms. McCauley wanted Ms. Toner removed from the board, people with knowledge of the conversations said. When board members later asked Ms. McCauley if that was true, she said that was "absolutely false."

"This significantly differs from Sam's recollection of these conversations," an OpenAI spokeswoman said, adding that the company was looking forward to an independent review of what transpired.

Some board members believed that Mr. Altman was trying to pit them against each other. Last month, they decided to act.

Dialing in from Washington, Los Angeles and the San Francisco Bay Area, they voted on Nov. 16 to dismiss Mr. Altman. OpenAI's outside lawyer advised them to limit what they said publicly about the removal.

Fearing that if Mr. Altman got wind of their plan he would marshal his network against them, they acted quickly and secretly.

When news broke of Mr. Altman's firing on Nov. 17, a text landed in a private WhatsApp group of more than 100 chief executives of Silicon Valley companies, including Meta's Mark Zuckerberg and Dropbox's Drew Houston.

"Sam is out," the text said.

The thread immediately blew up with questions: "What did Sam do?"

That same query was being asked at Microsoft, OpenAI's biggest investor. As Mr. Altman was being fired, Kevin Scott, Microsoft's chief technology officer, got a call from Mira Murati, OpenAI's chief technology officer. She told him that in a matter of minutes, OpenAI's board would announce that it had canned Mr. Altman and that she was the interim chief.

Mr. Scott immediately asked someone at Microsoft's headquarters in Redmond, Wash., to pull Mr. Nadella, the chief executive, out of a meeting he was having with top lieutenants. Shocked, Mr. Nadella called Ms. Murati to ask about the OpenAI board's reasoning, three people with knowledge of the call said. In a statement, OpenAI's board had said only that Mr. Altman was "not consistently candid in his communications with the board." Ms. Murati didn't have answers.

Mr. Nadella then phoned Mr. D'Angelo, OpenAI's lead independent director. What could Mr. Altman have done, Mr. Nadella asked, to cause the board to act so abruptly? Was there anything nefarious?

No, Mr. D'Angelo replied, speaking in generalities. Mr. Nadella remained confused.

Shortly after Mr. Altman's removal from OpenAI, a friend reached out to him. It was Brian Chesky, Airbnb's chief executive.

Mr. Chesky asked Mr. Altman what he could do to help. Mr. Altman, who was still in Las Vegas, said he wanted to talk.

The two men had met in 2009 at Y Combinator. When they spoke on Nov. 17, Mr. Chesky peppered Mr. Altman with questions about why OpenAI's board had terminated him. Mr. Altman said he was as uncertain as everyone else.

At the same time, OpenAI's employees were demanding details. The board dialed into a call that afternoon to talk to about 15 OpenAI executives, who crowded into a conference room at the company's offices in a former mayonnaise factory in San Francisco's Mission neighborhood.

The board members said that Mr. Altman had lied to the board, but that they couldn't elaborate for legal reasons.

"This is a coup," one employee shouted.

Jason Kwon, OpenAI's chief strategy officer, accused the board of violating its fiduciary responsibilities. "It cannot be your duty to allow the company to die," he said, according to two people with knowledge of the meeting.

Ms. Toner replied, "The destruction of the company could be consistent with the board's mission."

OpenAI's executives insisted that the board resign that night or they would all leave. Mr. Brockman, 35, OpenAI's president, had already quit.

The support gave Mr. Altman ammunition. He flirted with creating a new start-up, but Mr. Chesky and Ron Conway, a Silicon Valley investor and friend, urged Mr. Altman to reconsider.

"You should be willing to fight back at least a little more," Mr. Chesky told him.

Mr. Altman decided to take back what he felt was his.

After flying back from Las Vegas, Mr. Altman awoke on Nov. 18 in his San Francisco home, with sweeping views of Alcatraz Island. Just before 8 a.m., his phone rang. It was Mr. D'Angelo and Ms. McCauley.

The board members were rattled by the meeting with OpenAI executives the day before. Customers were considering shifting to rival platforms. Google was already trying to poach top talent, two people with knowledge of the efforts said.

Mr. D'Angelo and Ms. McCauley asked Mr. Altman to help stabilize the company.

That day, more than two dozen supporters showed up at Mr. Altman's house to lobby OpenAI's board to reinstate him. They set up laptops on his kitchen's white marble countertops and spread out across his living room. Ms. Murati joined them and told the board that she could no longer be interim chief executive.

To capitalize on the board's vulnerability, Mr. Altman posted on X: "i love openai employees so much." Ms. Murati and dozens of employees replied with emojis of colored hearts.

Yet even as the board considered bringing Mr. Altman back, it wanted concessions. That included bringing on new members who could control Mr. Altman. The board encouraged the addition of Bret Taylor, Twitter's former chairman, who quickly won everyone's approval and agreed to help the parties negotiate. As insurance, the board also sought another interim chief executive in case talks with Mr. Altman broke down.

By then, Mr. Altman had gathered more allies. Mr. Nadella, now confident that Mr. Altman was not guilty of malfeasance, threw Microsofts weight behind him.

In a call with Mr. Altman that day, Mr. Nadella proposed another idea. What if Mr. Altman joined Microsoft? The $2.8 trillion company had the computing power for anything that he wanted to build.

Mr. Altman now had two options: negotiating a return to OpenAI on his terms or taking OpenAIs talent with him to Microsoft.

By Nov. 19, Mr. Altman was so confident that he would be reappointed chief executive that he and his allies gave the board a deadline: Resign by 10 a.m. or everyone would leave.

Mr. Altman went to OpenAI's office so he could be there when his return was announced. Mr. Brockman also showed up with his wife, Anna. (The couple had married at OpenAI's office in a 2019 ceremony officiated by Dr. Sutskever. The ring bearer was a robotic hand.)

To reach a deal, Ms. Toner, Ms. McCauley and Mr. D'Angelo logged into a day of meetings from their homes. They said they were open to Mr. Altman's return if they could agree on new board members.

Mr. Altman and his camp suggested Penny Pritzker, a secretary of commerce under President Barack Obama; Diane Greene, who founded the software company VMware; and others. But Mr. Altman and the board could not agree, and they bickered over whether he should rejoin OpenAI's board and whether a law firm should conduct a review of his leadership.

With no compromise in sight, board members told Ms. Murati that evening that they were naming Emmett Shear, a founder of Twitch, a video-streaming service owned by Amazon, as interim chief executive. Mr. Shear was outspoken about developing A.I. slowly and safely.

Mr. Altman left OpenAI's office in disbelief. "I'm going to Microsoft," he told Mr. Chesky and others.

That night, Mr. Shear visited OpenAI's offices and convened an employee meeting. The company's Slack channel lit up with emojis of a middle finger.

Only about a dozen workers showed up, including Dr. Sutskever. In the lobby, Anna Brockman approached him in tears. She tugged his arm and urged him to reconsider Mr. Altman's removal. He stood stone-faced.

At 4:30 a.m. on Nov. 20, Mr. D'Angelo was awakened by a phone call from a frightened OpenAI employee. If Mr. D'Angelo didn't step down from the board in the next 30 minutes, the employee said, the company would collapse.

Mr. D'Angelo hung up. Over the past few hours, he realized, things had worsened.

Just before midnight, Mr. Nadella had posted on X that he was hiring Mr. Altman and Mr. Brockman to lead a lab at Microsoft. He had invited other OpenAI employees to join.

That morning, more than 700 of OpenAI's 770 employees had also signed a letter saying they might follow Mr. Altman to Microsoft unless the board resigned.

One name on the letter stood out: Dr. Sutskever, who had changed sides. "I deeply regret my participation in the board's actions," he wrote on X that morning.

OpenAI's viability was in question. The board members had little choice but to negotiate.

To break the impasse, Mr. D'Angelo and Mr. Altman talked the next day. Mr. D'Angelo suggested former Treasury Secretary Lawrence H. Summers, a professor at Harvard, for the board. Mr. Altman liked the idea.

Mr. Summers, from his Boston-area home, spoke with Mr. D'Angelo, Mr. Altman, Mr. Nadella and others. Each probed him for his views on A.I. and management, while he asked about OpenAI's tumult. He said he wanted to be sure that he could play the role of a broker.

Mr. Summers's addition pushed Mr. Altman to abandon his demand for a board seat and agree to an independent investigation of his leadership and dismissal.

By late Nov. 21, they had a deal. Mr. Altman would return as chief executive, but not to the board. Mr. Summers, Mr. DAngelo and Mr. Taylor would be board members, with Microsoft eventually joining as a nonvoting observer. Ms. Toner, Ms. McCauley and Dr. Sutskever would leave the board.

This week, Mr. Altman and some of his advisers were still fuming. They wanted his name cleared.

"Do u have a plan B to stop the postulation about u being fired its not healthy and its not true!!!" Mr. Conway texted Mr. Altman.

Mr. Altman said he was working with OpenAI's board: "They really want silence but i think important to address soon."

Nico Grant contributed reporting from San Francisco. Susan Beachy contributed research.

Here is the original post:
Inside OpenAI's Crisis Over the Future of Artificial Intelligence - The New York Times


What is artificial general intelligence (AGI)? – Android Authority

The idea of an artificial intelligence system that can think and perform tasks like a human has existed for decades, but it hasn't completely come to fruition yet. While ChatGPT and similar chatbots can output text that's consistent with human thought, they're still limited to recognizing and repeating patterns. They don't have any autonomy or ability to self-learn, improve, and solve never-seen-before problems. But some believe that we're steadily marching towards AGI, artificial general intelligence, a hypothetical future where computers possess abilities that rival our own.

Language models like GPT-4 and Google Gemini can already talk, draw, and recognize images like a human, though, so what sets AGI apart from them? Let's break it down in this article on artificial general intelligence, or AGI.

AGI, or artificial general intelligence, is a hypothetical concept that describes a machine capable of human-like understanding and reasoning. You see, today's AI systems are highly reliant on their training data and typically fall flat when presented with brand-new scenarios outside of their limited expertise. For example, even the best language models like GPT-4 often make errors while solving college-level math and physics problems.
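To make that "recognizing and repeating patterns" limitation concrete, here is a toy sketch (illustrative only; real chatbots use neural networks trained on vast corpora, not lookup tables): a tiny bigram model that can only recombine word sequences it has already seen, and stalls the moment it encounters a word outside its training data.

```python
import random
from collections import defaultdict

# A toy bigram "language model": it learns only which word follows
# which in its training text, nothing more.
TRAINING_TEXT = "the cat sat on the mat the dog sat on the rug"
words = TRAINING_TEXT.split()

next_words = defaultdict(list)
for current, following in zip(words, words[1:]):
    next_words[current].append(following)

def generate(start: str, max_len: int = 8) -> str:
    """Emit words by repeating patterns observed in the training text."""
    out = [start]
    for _ in range(max_len):
        options = next_words.get(out[-1])
        if not options:  # a word the model never saw: it has no answer
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))    # plausible recombination, e.g. "the cat sat on the rug"
print(generate("robot"))  # unseen input -> output stops at "robot"
```

Real language models generalize far better than this lookup table, but the underlying objective, predicting the next token from previously seen patterns, is why they can falter on genuinely novel problems.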

By contrast, an AGI would not be similarly bound to a single skill or knowledge set. Furthermore, it would use logical reasoning to overcome problems that it never encountered before. Put simply, we're talking about a machine so sophisticated that it's smarter than even the best human experts. Such an AI system could perhaps even train itself to become better over time.

We're still a ways off from realizing most AI researchers' vision of AGI. However, we have seen efforts accelerate over the past couple of years. Within this short span of time, companies like OpenAI and Google have unveiled AI systems that can talk like a human, draw images, recognize objects, or do a combination of all three. These abilities form the foundation of AGI, but we're not quite there yet.


Here's a quick comparison of AI and AGI. Keep in mind that AGI is a theoretical concept and not a definition set in stone, whereas AI systems already exist.

Intelligence level: AI is less intelligent than humans; AGI would be as good as or better than a human.
Ability: AI is single-purpose; AGI would be multi-purpose, able to handle a variety of scenarios.
Training: AI is pre-trained, with the option of fine-tuning; AGI would be capable of continuously improving or training itself.
Availability: AI already exists; AGI doesn't exist yet.
Examples: AI includes ChatGPT, Bing Chat, and Google Bard; AGI is still in development.

You may also hear conventional artificial intelligence systems referred to as "narrow AI." Likewise, AGI is sometimes called "general AI" or "strong AI."

It's difficult to predict whether AGI is possible or not. According to some definitions of AGI, computers that can surpass our intelligence would be able to solve long-standing problems that humans haven't found a way to overcome yet. In such a scenario, AGI would upend fields like medicine, biotechnology, and engineering practically overnight. That's difficult to imagine, even for an optimist about AI's potential.

Plenty of researchers have raised moral and safety concerns over the development of AGI as well. Even if AGI only matches our intelligence, it could pose a threat to humanity's existence. While the situation may not turn out as bleak as some Hollywood doomsday depictions, we've already seen how current AI systems can deceive and mislead people. For example, in early 2023, Microsoft's Bing Chat feigned convincing emotional connections with many users.


According to many AI researchers, we're not far from AGI, with predictions ranging between 2030 and 2050. Some even believe that we're already at the halfway point. For example, a team of Microsoft researchers proclaimed that GPT-4 exhibited "sparks" of artificial general intelligence. They reasoned:

"GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4's performance is strikingly close to human-level performance... We believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system."

In late 2023, rumors of an AGI-related breakthrough at OpenAI, dubbed Q*, began circulating. Reuters reported that the company's top researchers had raised concerns about an AI-related discovery that could threaten humanity. While these claims could not be verified, OpenAI did not dispute them either.

Finally, there's also no shortage of naysayers who believe it's simply not possible for a machine to match, let alone surpass, human cognition. Unfortunately, we don't have enough evidence to declare either party correct. But as AI systems continue to get better with each passing month, the distinction between humans and machines will almost certainly soon become blurred.

See more here:
What is artificial general intelligence (AGI)? - Android Authority


E.U. Agrees on AI Act, Landmark Regulation for Artificial Intelligence – The New York Times

European Union policymakers agreed on Friday to a sweeping new law to regulate artificial intelligence, one of the world's first comprehensive attempts to limit the use of a rapidly evolving technology that has wide-ranging societal and economic implications.

The law, called the A.I. Act, sets a new global benchmark for countries seeking to harness the potential benefits of the technology, while trying to protect against its possible risks, like automating jobs, spreading misinformation online and endangering national security. The law still needs to go through a few final steps for approval, but the political agreement means its key outlines have been set.

European policymakers focused on A.I.'s riskiest uses by companies and governments, including those for law enforcement and the operation of crucial services like water and energy. Makers of the largest general-purpose A.I. systems, like those powering the ChatGPT chatbot, would face new transparency requirements. Chatbots and software that creates manipulated images such as "deepfakes" would have to make clear that what people were seeing was generated by A.I., according to E.U. officials and earlier drafts of the law.

Use of facial recognition software by police and governments would be restricted outside of certain safety and national security exemptions. Companies that violated the regulations could face fines of up to 7 percent of global sales.

"Europe has positioned itself as a pioneer, understanding the importance of its role as global standard setter," Thierry Breton, the European commissioner who helped negotiate the deal, said in a statement.

Yet even as the law was hailed as a regulatory breakthrough, questions remained about how effective it would be. Many aspects of the policy were not expected to take effect for 12 to 24 months, a considerable length of time for A.I. development. And up until the last minute of negotiations, policymakers and countries were fighting over its language and how to balance the fostering of innovation with the need to safeguard against possible harm.

The deal reached in Brussels took three days of negotiations, including an initial 22-hour session that began Wednesday afternoon and dragged into Thursday. The final agreement was not immediately public as talks were expected to continue behind the scenes to complete technical details, which could delay final passage. Votes must be held in Parliament and the European Council, which comprises representatives from the 27 countries in the union.

Regulating A.I. gained urgency after last year's release of ChatGPT, which became a worldwide sensation by demonstrating A.I.'s advancing abilities. In the United States, the Biden administration recently issued an executive order focused in part on A.I.'s national security effects. Britain, Japan and other nations have taken a more hands-off approach, while China has imposed some restrictions on data use and recommendation algorithms.

At stake are trillions of dollars in estimated value as A.I. is predicted to reshape the global economy. "Technological dominance precedes economic dominance and political dominance," Jean-Noël Barrot, France's digital minister, said this week.

Europe has been one of the regions furthest ahead in regulating A.I., having started working on what would become the A.I. Act in 2018. In recent years, E.U. leaders have tried to bring a new level of oversight to tech, akin to regulation of the health care or banking industries. The bloc has already enacted far-reaching laws related to data privacy, competition and content moderation.

A first draft of the A.I. Act was released in 2021. But policymakers found themselves rewriting the law as technological breakthroughs emerged. The initial version made no mention of general-purpose A.I. models like those that power ChatGPT.

Policymakers agreed to what they called a "risk-based approach" to regulating A.I., where a defined set of applications face the most oversight and restrictions. Companies that make A.I. tools that pose the most potential harm to individuals and society, such as in hiring and education, would need to provide regulators with proof of risk assessments, breakdowns of what data was used to train the systems and assurances that the software did not cause harm like perpetuating racial biases. Human oversight would also be required in creating and deploying the systems.

Some practices, such as the indiscriminate scraping of images from the internet to create a facial recognition database, would be banned outright.
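As a rough illustration of that tiered structure, here is a minimal sketch of how a compliance team might model the obligations described above. The tier names and mappings are a simplification for illustration, not the Act's legal text:

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified tiers inferred from the obligations described above."""
    PROHIBITED = "prohibited"      # e.g. indiscriminate facial-recognition scraping
    HIGH = "high"                  # e.g. A.I. used in hiring or education
    TRANSPARENCY = "transparency"  # e.g. chatbots, deepfake generators
    MINIMAL = "minimal"            # everything else

# Obligations per tier, paraphrased from the article (illustrative only).
OBLIGATIONS = {
    RiskTier.PROHIBITED: ["may not be deployed at all"],
    RiskTier.HIGH: [
        "provide regulators with proof of risk assessments",
        "document what data was used to train the system",
        "show the software does not perpetuate biases",
        "ensure human oversight in creation and deployment",
    ],
    RiskTier.TRANSPARENCY: ["disclose that content is A.I.-generated"],
    RiskTier.MINIMAL: [],
}

def compliance_checklist(tier: RiskTier) -> list:
    """Return the paraphrased obligations for a given risk tier."""
    return OBLIGATIONS[tier]

print(compliance_checklist(RiskTier.HIGH))
```

A real obligations register would follow the final legal text and guidance from national regulators; this only shows the shape of a risk-based classification.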

The European Union debate was contentious, a sign of how A.I. has befuddled lawmakers. E.U. officials were divided over how deeply to regulate the newer A.I. systems for fear of handicapping European start-ups trying to catch up to American companies like Google and OpenAI.

The law added requirements for makers of the largest A.I. models to disclose information about how their systems work and evaluate for systemic risk, Mr. Breton said.

The new regulations will be closely watched globally. They will affect not only major A.I. developers like Google, Meta, Microsoft and OpenAI, but other businesses that are expected to use the technology in areas such as education, health care and banking. Governments are also turning more to A.I. in criminal justice and the allocation of public benefits.

Enforcement remains unclear. The A.I. Act will involve regulators across 27 nations and require hiring new experts at a time when government budgets are tight. Legal challenges are likely as companies test the novel rules in court. Previous E.U. legislation, including the landmark digital privacy law known as the General Data Protection Regulation, has been criticized for being unevenly enforced.

"The E.U.'s regulatory prowess is under question," said Kris Shrishak, a senior fellow at the Irish Council for Civil Liberties, who has advised European lawmakers on the A.I. Act. "Without strong enforcement, this deal will have no meaning."

Read this article:
E.U. Agrees on AI Act, Landmark Regulation for Artificial Intelligence - The New York Times
