Archive for the ‘Artificial General Intelligence’ Category

Apple's big AI announcements were all about AI 'for the rest of us' – Google, Meta, Amazon and, yes, OpenAI should … – Fortune

In the end, Apple's highly anticipated AI announcements were very, well, Apple-y. You could practically feel that bite in the tech giant's fruit logo as the company finally announced Apple Intelligence (how deliciously on-brand to take advantage of the technology's initials), which Apple's Tim Cook touted will be "personal, powerful, and private" and integrated across Apple's app and hardware ecosystem.

Apple, of course, has always been all about being a protective walled garden that provides comprehensive security measures but also plenty of restrictions for users, and Apple Intelligence will be no different. But it is that very personal context of the user within the Apple landscape, combined with the power of generative AI, that makes Apple Intelligence something perhaps only Apple could really do.

Apple has not been first, or anywhere near the cutting edge of generative AI, but it is betting on something else: an AI "for the rest of us," for the billions of users who don't care about models or APIs or datasets or GPUs or devices or the potential for artificial general intelligence (AGI). That is, the "normies," as those in the tech industry like to call them, who simply want AI that is easy, useful, protective of privacy, and just works.

The laundry list of features Apple executives promised to roll out across iPhone, iPad, and Mac OS devices was long. Siri is getting an upgrade that makes the assistant more natural, more contextually relevant, and more personal. If Siri can't answer a question itself, it will ask the user if it's okay to tap into ChatGPT, thanks to a new deal between Apple and OpenAI, and it will have on-screen awareness that will eventually allow Siri to take more agent-like action on user content across apps.

There will be new systemwide writing tools in iOS 18, iPadOS 18, and macOS Sequoia, as well as new ways for AI to help prioritize everything from messages to notifications. The fun factor is well-represented as well, with on-device AI image creation and the fittingly named Genmojis, which let users create custom emojis on the fly (think a smiley face with cucumbers on the eyes to indicate you're at the spa).

But unlike Google and Meta's throw-everything-at-the-wall approach to integrating generative AI into their products, Apple is taking a different tack, putting a carefully designed layer of gen AI on top of its operating system. None of it, at least in Monday's demo, seems bolted on as an afterthought (like Meta's AI-is-everywhere search bar in Instagram, Facebook, and WhatsApp, for example). And none of it, in fact, really uses the word "AI," as in artificial intelligence.

The rebranding of AI as Apple Intelligence takes a technology consumers have heard and read about for more than a year (and which has often sounded frightening, futuristic, and kind of freaky), and serves it up as something that's soothingly safe and secure. It's the tech equivalent of a mild soap for sensitive skin, offering consumers a freshly scrubbed face with no hard-to-pronounce and potentially irritating ingredients.

Of course, Big Tech demos are notorious for big announcements that don't always deliver. And there were few details about important issues like the provenance of the data powering Apple Intelligence features, the terms of Apple's deal with OpenAI for access to ChatGPT, and how Apple plans to deal with the inevitable hallucinations that will result from its AI output. After all, "safe and secure" does not necessarily mean accurate. When Apple Intelligence is released in the wild, so to speak, things are sure to get interesting, and messier.

The tech world is in a fierce battle to see which company will be able to take AI and turn it into the industry's next game-changer. Whether that is Apple or not remains to be seen, but the elegant simplicity of the Apple Intelligence announcements certainly puts Google, Meta, Amazon, and, yes, OpenAI on notice: AI may be complicated, but as Steve Jobs said, "Simple can be harder than complex: You have to work hard to get your thinking clean to make it simple." Perhaps AI companies will finally figure out how to keep it simple and, as Jobs said, move mountains.

View post:

Apple's big AI announcements were all about AI 'for the rest of us' – Google, Meta, Amazon and, yes, OpenAI should ... - Fortune

Elon Musk Withdraws His Lawsuit Against OpenAI and Sam Altman – The New York Times

Elon Musk withdrew his lawsuit on Tuesday against OpenAI, the maker of the online chatbot ChatGPT, a day before a state judge in San Francisco was set to consider whether it should be dismissed.

The suit, filed in February, had accused the artificial intelligence start-up and two of its founders, Sam Altman and Greg Brockman, of breaching OpenAI's founding contract by prioritizing commercial interests over the public good.

A multibillion-dollar partnership that OpenAI signed with Microsoft, Mr. Musk's suit claimed, represented an abandonment of the company's pledge to carefully develop A.I. and make the technology publicly available.

Mr. Musk had argued that the founding contract said that the organization should instead be focused on building artificial general intelligence, or A.G.I., a machine that can do anything the brain can do, for the benefit of humanity.

OpenAI, based in San Francisco, had called for a dismissal days after Mr. Musk filed the suit. He could still refile the suit in California or another state.

Mr. Musk did not immediately respond to a request for comment, and OpenAI declined to comment.

Mr. Musk helped found OpenAI in 2015 along with Mr. Altman, Mr. Brockman and several young A.I. researchers. He saw the research lab as a response to A.I. work being done at the time by Google. Mr. Musk believed Google and its co-founder, Larry Page, were not appropriately concerned with the risks that A.I. presented to humanity.

Mr. Musk parted ways with OpenAI after a power struggle in 2018. The company later became an A.I. technology leader, creating ChatGPT, a chatbot that can generate text and answer questions in humanlike prose.

Mr. Musk founded his own A.I. company last year called xAI, while repeatedly claiming that OpenAI was not focused enough on the dangers of the technology.

He filed his lawsuit months after members of the OpenAI board unexpectedly fired Mr. Altman, saying he could no longer be trusted with the company's mission to build A.I. for the good of humanity. Mr. Altman was reinstated after five days of negotiations with the board, and soon cemented his control over the company, reclaiming a seat on the board.

Late last month, OpenAI announced that it had started working on a new artificial intelligence model that would succeed the GPT-4 technology that drives ChatGPT. The company said that it expected the new model to bring "the next level of capabilities" as it strove to build A.G.I.

The company also said it was creating a new Safety and Security Committee to explore how it should handle the risks posed by the new model and future technologies.

Read more:

Elon Musk Withdraws His Lawsuit Against OpenAI and Sam Altman - The New York Times

Staying Ahead of the AI Train – ATD

EXL: 2024 BEST Award Winner, #19


If you want to envision life in the fastest of lanes, imagine sitting in the driver's seat of Sanjay Dutt, global head of learning and capability development for EXL. You may soon be following his lead.

A major provider of data and artificial-intelligence-led digital operations, solutions, and analytics services, EXL is modernizing its portfolio to embrace the latest advances in data, domain excellence, and AI solutions. In the process, it has gained global expertise in technology's most dynamic trends, including generative AI, cloud technology, data, and analytics.

It is Dutt's job to make the necessary talent transformation succeed by upskilling the company's workforce and radically changing the culture of more than 54,000 employees, all while staying ahead of technological whirlwinds. From his base in Dublin, Ireland, Dutt heads a team of 85 capability development and HR professionals who are engaged in EXL offices around the world.

He says success in his role requires an unwavering commitment to building a workforce that is not only well prepared for AI and the digital age, but one that can also drive innovation, customer experience, and productivity and efficiency gains.

The initiative includes training programs on cutting-edge technologies to align employee capabilities with emerging industry demands. Through a comprehensive upskilling and reskilling initiative, Dutt's team has created a culture of continuous learning.

Achievements to date include higher value delivery to clients complemented by high levels of client satisfaction. "Our strategic talent development efforts not only addressed skills gaps but transformed it into a catalyst for growth and excellence," Dutt shares.

The campaign equipped more than 7,000 digital practitioners with skills and knowledge needed to excel within the burgeoning landscape. In specific domains, such as AI, cloud, data management, machine learning, and computer vision, the learning team developed more than 650 digital experts who are now industry leaders in their respective areas, Dutt says.

EXL rolled out its Future Ready Talent Strategy in 2021, as AI and generative AI began making waves in the marketplace. The company evaluated the current capabilities of the EXL workforce, focusing on where their professions may end up within four to five years. The learning team consulted the field's leading experts and top business school experts for insight. The resulting feedback, Dutt states, "created huge excitement within the company."

Within an accelerated span of time, the team upskilled EXL to be ready for the generative AI craze, a considerable feat, says Dutt, for an enterprise of its size.

A critical focus that emerged was to drive practitioners' capabilities around identifying specific use cases for enterprise transformation and orchestrating it end to end from strategy and design to deployment, change management, and results. In response, Dutt's team approached leading data and AI experts and business schools for their insights regarding AI's future impacts on client business.

Senior leaders are the driving force behind the employees' empowerment, Dutt notes. They furnish essential resources, from cutting-edge technologies to a rich array of learning channels, including online courses and interactive workshops, ensuring that individuals acquire new capabilities effectively.

"Talent development is front and center of any strategic conversation among our company's leaders," Dutt states.

"We rely on people who can listen, who are comfortable with AI, who know how to use data, and are not just managing services," he says. "When that happens, you change the culture of the company."

Dutt's advice for TD professionals everywhere is to prepare for the emerging trends within their own organizations. "Since technology keeps changing, one of the biggest challenges faced by companies is the difficulty of upskilling their people at scale without fully employing learning technologies to their fullest," he warns.

Dutt also urges TD professionals to climb aboard the AI train if they haven't already done so. "Gen AI and [large language models] are making AI accessible for widespread adoption," he stresses. "AI will soon start shaping strategy, operations, and people's lives, a profound change." Within EXL, Dutt and other senior leaders lead by example by conducting their own research and meeting personally with AI pioneers.

The business world has not fully realized just how profound that change will be. "There's a lot in this emerging field that's not generally known, from the intricacies of 'narrow AI' to the complexities of general intelligence," he states. At a minimum, he cautions that TD practitioners will be preparing employees to transition into higher-value-added roles as AI assumes the mantle of repetitive tasks.

View the entire list of 2024 BEST Award winners.

Read the original:

Staying Ahead of the AI Train - ATD

BEYOND LOCAL: ‘Noise’ in the machine: Human differences in judgment lead to problems for AI – The Longmont Leader

Many people understand the concept of bias at some intuitive level. In society, and in artificial intelligence systems, racial and gender biases are well documented.

If society could somehow remove bias, would all problems go away? The late Nobel laureate Daniel Kahneman, who was a key figure in the field of behavioral economics, argued in his last book that bias is just one side of the coin. Errors in judgments can be attributed to two sources: bias and noise.

Bias and noise both play important roles in fields such as law, medicine and financial forecasting, where human judgments are central. In our work as computer and information scientists, my colleagues and I have found that noise also plays a role in AI.

Statistical noise

Noise in this context means variation in how people make judgments of the same problem or situation. The problem of noise is more pervasive than initially meets the eye. A seminal work, dating back all the way to the Great Depression, has found that different judges gave different sentences for similar cases.

Worryingly, sentencing in court cases can depend on things such as the temperature and whether the local football team won. Such factors, at least in part, contribute to the perception that the justice system is not just biased but also arbitrary at times.

Other examples: Insurance adjusters might give different estimates for similar claims, reflecting noise in their judgments. Noise is likely present in all manner of contests, ranging from wine tastings to local beauty pageants to college admissions.

Noise in the data

On the surface, it doesn't seem likely that noise could affect the performance of AI systems. After all, machines aren't affected by weather or football teams, so why would they make judgments that vary with circumstance? On the other hand, researchers know that bias affects AI, because it is reflected in the data that the AI is trained on.

For the new spate of AI models like ChatGPT, the gold standard is human performance on general intelligence problems such as common sense. ChatGPT and its peers are measured against human-labeled commonsense datasets.

Put simply, researchers and developers can ask the machine a commonsense question and compare it with human answers: "If I place a heavy rock on a paper table, will it collapse? Yes or No." If there is high agreement between the two (in the best case, perfect agreement), the machine is approaching human-level common sense, according to the test.
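To make that scoring procedure concrete, here is a minimal sketch (hypothetical data and field names, not any benchmark's actual code): each question carries a single human-provided gold label, and the model's score is simply its rate of agreement with those labels.

```python
# Hypothetical mini-benchmark: one human "gold" label per question,
# plus the answer a model gave.
dataset = [
    {"question": "If I place a heavy rock on a paper table, will it collapse?",
     "human_label": "yes", "model_answer": "yes"},
    {"question": "Is the sentence 'My dog plays volleyball' plausible?",
     "human_label": "no", "model_answer": "yes"},
]

def agreement_score(items):
    """Fraction of questions on which the model matches the human label."""
    matches = sum(item["model_answer"] == item["human_label"] for item in items)
    return matches / len(items)

print(f"Model-human agreement: {agreement_score(dataset):.0%}")  # 50% on this toy set
```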

So where would noise come in? The commonsense question above seems simple, and most humans would likely agree on its answer, but there are many questions where there is more disagreement or uncertainty: "Is the following sentence plausible or implausible? My dog plays volleyball." In other words, there is potential for noise. It is not surprising that interesting commonsense questions would have some noise.

But the issue is that most AI tests don't account for this noise in experiments. Intuitively, questions generating human answers that tend to agree with one another should be weighted higher than those where the answers diverge, in other words, where there is noise. Researchers still don't know whether or how to weigh AI's answers in that situation, but a first step is acknowledging that the problem exists.
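Researchers have not settled on a weighting scheme, as noted above; purely as an illustration, here is one naive possibility (hypothetical data), in which each question counts in proportion to how strongly the human annotators agreed on its label.

```python
from collections import Counter

# Hypothetical items: several independent human labels per question,
# plus the answer a model gave.
items = [
    {"human_labels": ["yes", "yes", "yes", "yes"], "model_answer": "yes"},  # clean item
    {"human_labels": ["yes", "no", "no", "yes"], "model_answer": "no"},     # noisy item
]

def agreement_weighted_score(items):
    """Score the model against each item's majority label, weighting each
    item by the fraction of annotators who agreed with that majority."""
    total_weight = 0.0
    weighted_correct = 0.0
    for item in items:
        counts = Counter(item["human_labels"])
        majority_label, majority_count = counts.most_common(1)[0]
        weight = majority_count / len(item["human_labels"])  # human agreement level
        total_weight += weight
        if item["model_answer"] == majority_label:
            weighted_correct += weight
    return weighted_correct / total_weight

print(f"Agreement-weighted score: {agreement_weighted_score(items):.0%}")
```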

Tracking down noise in the machine

Theory aside, the question still remains whether all of the above is hypothetical or whether noise shows up in real tests of common sense. The best way to prove or disprove the presence of noise is to take an existing test, remove the answers, and get multiple people to independently label them, that is, provide their own answers. By measuring disagreement among humans, researchers can know just how much noise is in the test.
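As a rough illustration of that idea (hypothetical labels, not the study's actual data or statistics), one simple per-question noise measure is the fraction of annotators whose answer differs from the majority answer:

```python
from collections import Counter

# Hypothetical answers from five independent annotators per question.
labels_per_question = [
    ["yes", "yes", "yes", "yes", "yes"],  # near-universal agreement: little noise
    ["yes", "no", "yes", "no", "no"],     # split opinions: substantial noise
]

def disagreement_rate(labels):
    """Fraction of annotators who differ from the majority label."""
    majority_count = Counter(labels).most_common(1)[0][1]
    return 1 - majority_count / len(labels)

for labels in labels_per_question:
    print(labels, f"-> disagreement = {disagreement_rate(labels):.0%}")
```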

The details behind measuring this disagreement are complex, involving significant statistics and math. Besides, who is to say how common sense should be defined? How do you know the human judges are motivated enough to think through the question? These issues lie at the intersection of good experimental design and statistics. Robustness is key: One result, test or set of human labelers is unlikely to convince anyone. As a pragmatic matter, human labor is expensive. Perhaps for this reason, there haven't been any studies of possible noise in AI tests.

To address this gap, my colleagues and I designed such a study and published our findings in Nature Scientific Reports, showing that even in the domain of common sense, noise is inevitable. Because the setting in which judgments are elicited can matter, we did two kinds of studies. One type of study involved paid workers from Amazon Mechanical Turk, while the other study involved a smaller-scale labeling exercise in two labs at the University of Southern California and the Rensselaer Polytechnic Institute.

You can think of the former as a more realistic online setting, mirroring how many AI tests are actually labeled before being released for training and evaluation. The latter is more of an extreme, guaranteeing high quality but at much smaller scales. The question we set out to answer was how inevitable noise is, and whether it is just a matter of quality control.

The results were sobering. In both settings, even on commonsense questions that might have been expected to elicit high, even universal, agreement, we found a nontrivial degree of noise. The noise was high enough that we inferred that between 4% and 10% of a system's performance could be attributed to noise.

To emphasize what this means, suppose I built an AI system that achieved 85% on a test, and you built an AI system that achieved 91%. Your system would seem to be a lot better than mine. But if there is noise in the human labels that were used to score the answers, then we're not sure anymore that the 6% improvement means much. For all we know, there may be no real improvement.
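As a rough back-of-the-envelope sketch (the 7% noise rate below is an assumption chosen for the example, not a figure from the paper): if some fraction of the gold labels is effectively arbitrary, an observed score only pins down the underlying accuracy to an interval, and the intervals for 85% and 91% overlap.

```python
# Assumed for illustration: roughly 7% of gold labels are effectively arbitrary.
NOISE_RATE = 0.07

def plausible_true_accuracy(observed_score):
    """Crude interval of underlying accuracy consistent with an observed score,
    if up to NOISE_RATE of the scored items carry unreliable labels."""
    low = max(0.0, observed_score - NOISE_RATE)
    high = min(1.0, observed_score + NOISE_RATE)
    return low, high

for observed in (0.85, 0.91):
    low, high = plausible_true_accuracy(observed)
    print(f"observed {observed:.0%} -> underlying accuracy roughly {low:.0%}-{high:.0%}")
# The two ranges overlap, so the apparent 6-point gap may not reflect a real difference.
```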

On AI leaderboards, where large language models like the one that powers ChatGPT are compared, performance differences between rival systems are far narrower, typically less than 1%. As we show in the paper, ordinary statistics do not really come to the rescue for disentangling the effects of noise from those of true performance improvements.

Noise audits

What is the way forward? Returning to Kahneman's book, he proposed the concept of a "noise audit" for quantifying and ultimately mitigating noise as much as possible. At the very least, AI researchers need to estimate what influence noise might be having.

Auditing AI systems for bias is somewhat commonplace, so we believe that the concept of a noise audit should naturally follow. We hope that this study, as well as others like it, leads to their adoption.

Mayank Kejriwal, Research Assistant Professor of Industrial & Systems Engineering, University of Southern California

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Follow this link:

BEYOND LOCAL: 'Noise' in the machine: Human differences in judgment lead to problems for AI - The Longmont Leader

OpenAI disbands its AI risk mitigation team –

OpenAI on Friday said that it has disbanded a team devoted to mitigating the long-term dangers of super-smart artificial intelligence (AI).

It began dissolving the so-called superalignment group weeks ago, integrating members into other projects and research, the San Francisco-based firm said.

OpenAI co-founder Ilya Sutskever and team co-leader Jan Leike announced their departures from the company during the week.

The dismantling of a team focused on keeping sophisticated AI under control comes as such technology faces increased scrutiny from regulators and fears mount regarding its dangers.

"OpenAI must become a safety-first AGI [artificial general intelligence] company," Leike wrote on X on Friday.

Leike called on all OpenAI employees to act with the gravitas warranted by what they are building.

OpenAI CEO Sam Altman responded to Leike's post with one of his own.

Altman thanked Leike for his work at the company and said he was sad to see him leave.

"He's right, we have a lot more to do," Altman said. "We are committed to doing it."

Altman promised more on the topic in the coming days.

Sutskever said on X that he was leaving after almost a decade at OpenAI, the trajectory of which has been "nothing short of miraculous."

"I'm confident that OpenAI will build AGI that is both safe and beneficial," he added, referring to computer technology that seeks to perform as well as or better than human cognition.

Sutskever, who is also OpenAI's chief scientist, sat on the board that voted to remove Altman in November last year.

The ousting threw the company into a tumult, as staff and investors rebelled.

The OpenAI board ended up hiring Altman back a few days later.

OpenAI earlier last week released a higher-performing and even more human-like version of the AI technology that underpins ChatGPT, which was made free to all users.

"It feels like AI from the movies," Altman said in a blog post.

Altman has previously pointed to Scarlett Johansson's character in the movie "Her," where she voices an AI-based virtual assistant dating a man, as an inspiration for where he would like AI interactions to go.

The day would come when digital brains would become as good as and even better than our own, Sutskever said at a talk during a TED AI summit in San Francisco late last year.

"AGI will have a dramatic impact on every area of life," Sutskever added.


View original post here:

OpenAI disbands its AI risk mitigation team -