Archive for the ‘Ai’ Category

We have to stop ignoring AI’s hallucination problem – The Verge

Google I/O introduced an AI assistant that can see and hear the world, while OpenAI put its version of a Her-like chatbot into an iPhone. Next week, Microsoft will be hosting Build, where it's sure to have some version of Copilot or Cortana that understands pivot tables. Then, a few weeks after that, Apple will host its own developer conference, and if the buzz is anything to go by, it'll be talking about artificial intelligence, too. (Unclear if Siri will be mentioned.)

AI is here! It's no longer conceptual. It's taking jobs, making a few new ones, and helping millions of students avoid doing their homework. According to most of the major tech companies investing in AI, we appear to be at the start of one of those rare monumental shifts in technology. Think the Industrial Revolution or the creation of the internet or the personal computer. All of Silicon Valley, all of Big Tech, is focused on taking large language models and other forms of artificial intelligence and moving them from the laptops of researchers into the phones and computers of average people. Ideally, they will make a lot of money in the process.

But I can't really care about that because Meta AI thinks I have a beard.

I want to be very clear: I am a cis woman and do not have a beard. But if I type "show me a picture of Alex Cranz" into the prompt window, Meta AI inevitably returns images of very pretty dark-haired men with beards. I am only some of those things!

Meta AI isn't the only one to struggle with the minutiae of The Verge's masthead. ChatGPT told me yesterday I don't work at The Verge. Google's Gemini didn't know who I was (fair), but after telling me Nilay Patel was a founder of The Verge, it then apologized and corrected itself, saying he was not. (I assure you he was.)

The AI keeps screwing up because these computers are stupid. Extraordinary in their abilities and astonishing in their dimwittedness. I cannot get excited about the next turn in the AI revolution because that turn is into a place where computers cannot consistently maintain accuracy about even minor things.

I mean, they even screwed up during Google's big AI keynote at I/O. In a commercial for Google's new AI-ified search engine, someone asked how to fix a jammed film camera, and it suggested they open the back door and gently remove the film. That is the easiest way to destroy any photos you've already taken.

An AI's difficult relationship with the truth is called "hallucinating." In extremely simple terms: these machines are great at discovering patterns of information, but in their attempt to extrapolate and create, they occasionally get it wrong. They effectively hallucinate a new reality, and that new reality is often wrong. It's a tricky problem, and every single person working on AI right now is aware of it.

One Google ex-researcher claimed it could be fixed within the next year (though he lamented that outcome), and Microsoft has a tool for some of its users that's supposed to help detect them. Google's head of Search, Liz Reid, told The Verge it's aware of the challenge, too. "There's a balance between creativity and factuality with any language model," she told my colleague David Pierce. "We're really going to skew it toward the factuality side."

But notice how Reid said there was a balance? That's because a lot of AI researchers don't actually think hallucinations can be solved. A study out of the National University of Singapore suggested that hallucinations are an inevitable outcome of all large language models. Just as no person is 100 percent right all the time, neither are these computers.

And that's probably why most of the major players in this field, the ones with real resources and a financial incentive to make us all embrace AI, think you shouldn't worry about it. During Google's I/O keynote, it added, in tiny gray font, the phrase "check responses for accuracy" to the screen below nearly every new AI tool it showed off, a helpful reminder that its tools can't be trusted, but it also doesn't think it's a problem. ChatGPT operates similarly. In tiny font just below the prompt window, it says, "ChatGPT can make mistakes. Check important info."

That's not a disclaimer you want to see from tools that are supposed to change our whole lives in the very near future! And the people making these tools do not seem to care too much about fixing the problem beyond a small warning.

Sam Altman, the CEO of OpenAI who was briefly ousted for prioritizing profit over safety, went a step further and said anyone who had an issue with AI's accuracy was naive. "If you just do the naive thing and say, 'Never say anything that you're not 100 percent sure about,' you can get them all to do that. But it won't have the magic that people like so much," he told a crowd at Salesforce's Dreamforce conference last year.

This idea that there's a kind of unquantifiable magic sauce in AI that will allow us to forgive its tenuous relationship with reality is brought up a lot by the people eager to hand-wave away accuracy concerns. Google, OpenAI, Microsoft, and plenty of other AI developers and researchers have dismissed hallucination as a small annoyance that should be forgiven because they're on the path to making digital beings that might make our own lives easier.

But apologies to Sam and everyone else financially incentivized to get me excited about AI. I don't come to computers for the inaccurate magic of human consciousness. I come to them because they are very accurate when humans are not. I don't need my computer to be my friend; I need it to get my gender right when I ask and help me not accidentally expose film when fixing a busted camera. Lawyers, I assume, would like it to get the case law right.

I understand where Sam Altman and other AI evangelists are coming from. There is a possibility, in some far future, of creating a real digital consciousness from ones and zeroes. Right now, the development of artificial intelligence is moving at an astounding speed that puts many previous technological revolutions to shame. There is genuine magic at work in Silicon Valley right now.

But the AI thinks I have a beard. It can't consistently figure out the simplest tasks, and yet, it's being foisted upon us with the expectation that we celebrate the incredible mediocrity of the services these AIs provide. While I can certainly marvel at the technological innovations happening, I would like my computers not to sacrifice accuracy just so I have a digital avatar to talk to. That is not a fair exchange; it's only an interesting one.

Follow this link:

We have to stop ignoring AI's hallucination problem - The Verge

Bye Bye, AI: How to turn off Google’s annoying AI overviews and just get search results – Tom’s Hardware

Google's "AI Overviews" feature, also known as SGE (Search Generative Experience), is a raging trash fire that threatens to choke the open web with its stench. Instead of directing you to expert insights from reputable sources, Google is now putting plagiarized and often incorrect AI summaries above its search results. So when you search for medical advice, for example, the AI may tell you to drink urine to get rid of kidney stones, and you'll have to scroll past that "advice" to find links to articles from human doctors.

Unfortunately, Google does not provide a way to turn off AI Overviews in its settings, but there are a few ways to avoid these atrocities and go straight to search results. Perhaps in a tacit admission that its default results page is now a junkyard, the search giant has added a "web" tab to the site so that, just like you can narrow your search to "images" or "videos" or "news," you can now get a plain old list of web pages without AI, answer boxes, or other cruft.

Below, I'll show you how to filter AI overviews out of the results page using a Chrome extension that I wrote. Or you can send your searches directly to the web tab from Chrome's address bar, avoiding the need to turn anything off. Unfortunately, at the moment, neither of these methods works for Chrome on Android or iOS. However, you can use a different mobile browser, such as Firefox.

The Google AI Overview, like all parts of an HTML page, can be altered using JavaScript. There are a few extensions in the Chrome web store that are programmed to locate the AI Overview block and set its CSS display value to "none."
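To give you a sense of how simple the core trick is, here's a minimal sketch of such a content script. The #ai-overview selector below is just a stand-in, since Google's actual markup is undocumented and changes often, but the display-hiding logic is the same idea these extensions use:

```javascript
// content-script.js -- minimal sketch of an AI Overview blocker.
// In manifest.json this would run on https://www.google.com/search*.
// NOTE: '#ai-overview' is a placeholder; inspect the live results
// page to find the element that actually wraps the AI Overview.
const AI_OVERVIEW_SELECTOR = '#ai-overview';

function hideAIOverview() {
  document.querySelectorAll(AI_OVERVIEW_SELECTOR).forEach((block) => {
    // Setting display to "none" removes the block from view without
    // touching the rest of the page.
    block.style.display = 'none';
  });
}

// Google fills in parts of the results page after it loads, so re-run
// the check on every DOM change rather than only once.
new MutationObserver(hideAIOverview).observe(document.documentElement, {
  childList: true,
  subtree: true,
});

hideAIOverview();
```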

After seeing some of the other extensions in the market, including the appropriately named Hide Google AI Overview, I decided to write my own AI Overview blocking extension called Bye Bye, Google AI. Like all Chrome extensions, it works in both Chrome and Microsoft Edge browsers.

Bye Bye, Google AI also has the option to hide, and effectively turn off, discussions blocks, shopping blocks, featured snippets, video blocks, and sponsored links on the Google results page. You can choose which ones you want to filter out by going to the options menu (right-click the toolbar icon and select Options).

The problem with my extension or any of the others is that Google can easily block them or break them. If Google makes small changes in the code on its results pages, the JavaScript in the extension may no longer be able to locate the AI Overview blocks (or other block types) to turn them off.


A potentially more reliable solution in the long term for turning off AI overviews is to configure your browser so that, when you search from the address bar, it sends the queries straight to the web tab. The Bye Bye, Google AI extension will search the web tab if you hit w + spacebar and then your query.

However, below, we'll see how to configure the Chrome browser so that it sends all queries from the address bar directly to the web tab, no extension or hitting w + spacebar required. The disadvantage of sending your searches to the web tab is that it doesn't show other kinds of results, such as videos, discussions, featured snippets, images, or shopping blocks, and you might want to see some or all of those.

If, like me, you initiate most of your web searches from the Chrome browser's address bar, you can make a simple change that will direct all of your queries to Google's web search tab, no extension required.

1. Navigate to chrome://settings/searchEngines in Chrome or click Settings->Search Engine->Manage search engines and site search.

2. Click the Add button next to Site search.

A dialog box appears, allowing you to create a new "site search" entry.

3. Fill in the fields in the dialog box, then click Add: name the engine Google (Web), give it any shortcut you like (such as gw), and enter https://www.google.com/search?udm=14&q=%s as the URL, where %s is the placeholder Chrome replaces with your query.

4. Select "Make default" from the three-dot menu next to your new entry.

The Google (Web) engine will now appear on the Search engines list. When you enter a query in the address bar, it will direct you straight to the Web tab on Google. The real secret is that the search engine we created adds the parameter ?udm=14 to the search query.
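So, for example, a query typed into the address bar now produces a URL like the second one below rather than the first:

```
https://www.google.com/search?q=jammed+film+camera           (default results, AI Overview and all)
https://www.google.com/search?udm=14&q=jammed+film+camera    (plain web results)
```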

While Google Chrome for the desktop makes it easy to change your address bar search or install extensions, Chrome for the phone is a different story. On Chrome for Android and iOS, you can't use extensions at all, and you can only choose from a limited group of search engines. Yes, you can select a custom search engine, but it has to be an existing engine from a site you've already visited; you can't manually type in a search URL and, therefore, can't add the all-important ?udm=14 to the query string.

Unfortunately, neither mobile Safari nor mobile Edge allows you to manually add a search engine. However, mobile Firefox, available for iOS and Android, does have this capability. Here's how to use it.

1. Install Firefox on your phone if you don't have it already.

2. Navigate to Settings.

3. Tap Search.

4. Tap Default Search Engine.

5. Tap Add search engine.

6. Fill out the fields and then click Save: name the engine Google (Web) and enter https://www.google.com/search?udm=14&q=%s as the search string (again, %s stands in for your query).

7. Select Google (Web) from the menu.

Now, when you search from Firefox's address bar, you'll get the Google web tab.

Read more here:

Bye Bye, AI: How to turn off Google's annoying AI overviews and just get search results - Tom's Hardware

Hollywood agency CAA aims to help stars manage their own AI likenesses – TechCrunch

Creative Artists Agency (CAA), one of the top entertainment and sports talent agencies, is hoping to be at the forefront of AI protection services for celebrities in Hollywood.

With many stars having their digital likenesses used without permission, CAA has built a virtual media storage system for A-list talent (actors, athletes, comedians, directors, musicians, and more) to store their digital assets, such as their names, images, digital scans, voice recordings, and so on. The new development is a part of the CAA Vault, the company's studio where actors record their bodies, faces, movements, and voices using scanning technology to create AI clones.

CAA teamed up with AI tech company Veritone to provide its digital asset management solution, the company announced earlier this week.

The announcement arrives amid a wave of AI deepfakes of celebrities, which are often created without their consent. Tom Hanks, a famous actor and client on CAA's roster, fell victim to an AI scam seven months ago. He claimed that a company used an AI-generated video of him to promote a dental plan without his permission.

"Over the last couple of years or so, there has been a vast misuse of our clients' names, images, likenesses, and voices without consent, without credit, without proper compensation. It's very clear that the law is not currently set up to be able to protect them, and so we see many open lawsuits out there right now," said Alexandra Shannon, CAA's head of strategic development.

A significant amount of personal data is necessary to create digital clones, which raises numerous privacy concerns due to the risk of compromising or misusing sensitive information. CAA clients can now store their AI digital doubles and other assets within a secure personal hub in the CAA Vault, which can only be accessed by authorized users, allowing them to share and monetize their content as they see fit.

"This is giving the ability to start setting precedents for what consent-based use of AI looks like," Shannon told TechCrunch. "Frankly, our view has been that the law is going to take time to catch up, and so by the talent creating and owning their digital likeness with [the CAA Vault], there is now a legitimate way for companies to work with one of our clients. If a third party chooses not to work with them in the right way, it's much easier for legal cases to show there was an infringement of their rights and help protect clients over time."

Notably, the vault also ensures actors and other talent are rightfully compensated when companies use their digital likenesses.

"All these assets are owned by the individual client, so it is largely up to them if they want to grant access to anybody else... It is also completely up to the talents to decide the right business model for opportunities. This is a new space, and it is very much forming. We believe these assets will increase in value and opportunity over time. This shouldn't be a cheaper way to work with somebody... We view [AI clones] as an enhancement rather than being for cost savings," Shannon added.

CAA also represents Ariana Grande, Beyoncé, Reese Witherspoon, Steven Spielberg, and Zendaya, among others.

The use of AI cloning has sparked many debates in Hollywood, with some believing it could lead to fewer job opportunities, as studios might choose digital clones over real actors. This was a major point of contention during the 2023 SAG-AFTRA strikes, which ended in November after members approved a new agreement with AMPTP (Alliance of Motion Picture and Television Producers) that recognized the importance of human performers and included guidelines on how digital replicas should be used.

There are also concerns surrounding the unauthorized use of AI clones of deceased celebrities, which can be disturbing to family members. For instance, Robin Williams' daughter expressed her disdain for an AI-generated voice recording of the star. However, some argue that, when done ethically, it can be a sentimental way to preserve an iconic actor and recreate their performances in future projects for all generations to enjoy.

"AI clones are an effective tool that enables legacies to live on into future generations. CAA takes a consent- and permission-based approach to all AI applications and would only work with estates that own and have permissions for the use of these likeness assets. It is up to the artists as to whom they wish to grant ownership of and permission for use after their passing," Shannon noted.

Shannon declined to share which of CAA's clients are currently storing their AI clones in the vault; however, she said it was only a select few at the moment. CAA also charges a fee for clients to participate in the vault but didn't say exactly how much it costs.

"The ultimate goal will be to make this available to all our clients and anyone in the industry. It is not inexpensive, but over time, the costs will continue to come down," she added.

Read this article:

Hollywood agency CAA aims to help stars manage their own AI likenesses - TechCrunch

‘Copper is the new oil,’ and prices could soar 50% as AI, green energy, and military spending boost demand, top … – Fortune

Copper is emerging as the next indispensable industrial commodity, mirroring oil's rise in earlier decades, a top commodities analyst said.

This time around, new forces in the economy, namely the advent of artificial intelligence, explosion of data centers, and the green energy revolution, are boosting demand for copper, while the development of new weapons is adding to it as well, according to Jeff Currie, chief strategy officer of Energy Pathways at Carlyle.

"Copper is the new oil," he told Bloomberg TV on Tuesday, noting that his conversations with traders also reinforce his bullishness. "It is the highest-conviction trade I've ever seen."

Copper has long been a key industrial bellwether as its uses range widely from manufacturing and construction to electronics and other high-tech products.

But billions of dollars pouring into artificial intelligence and renewable energy are a relatively new part of copper's outlook, Currie noted, acknowledging that he made a similar prediction in 2021 when he was an analyst at Goldman Sachs.

"I'm confident that this time is lift-off, and I think we're going to see more momentum behind it," he said. What's different this time, he argued, is that there are now three sources of demand (AI, green energy, and the military) instead of just green energy three years ago.

And while demand is high, supply remains tight as bringing new copper mines online can take 12 to 26 years, Currie pointed out.

That should eventually send prices soaring to $15,000 per ton, he predicted, roughly 50% above today's level. Copper prices are already at record highs, with benchmark prices in London at about $10,000 per ton, more than double their pandemic-era lows of early 2020.

At some point, the price will get so high that it will create "demand destruction," meaning buyers balk at paying so much. But Currie doesn't know what that level is.

"But I go back to the 2000s, I was bullish on oil then as I am on copper today," he added, recalling that crude shot up from $20 to $140 per barrel at the time. "So the upside on copper here is very significant."

Copper was also a key catalyst in BHP's proposed takeover of Anglo American, a $40 billion deal that would create the world's top copper producer. But Anglo has rejected the offer and recently announced plans to restructure the group, including selling its diamond business, De Beers.

Go here to see the original:

'Copper is the new oil,' and prices could soar 50% as AI, green energy, and military spending boost demand, top ... - Fortune

Business school teaching case study: risks of the AI arms race – Financial Times


Prabhakar Raghavan, Google's search chief, was preparing for the Paris launch of its much-anticipated artificial intelligence chatbot in February last year when he received some unpleasant news.

Two days earlier, his chief executive, Sundar Pichai, had boasted that the chatbot, Bard, "draws on information from the web to provide fresh, high-quality responses." But, within hours of Google posting a short GIF video on Twitter demonstrating Bard in action, observers spotted that the bot had given a wrong answer.

Bard's response to "What new discoveries from the James Webb Space Telescope (JWST) can I tell my 9-year-old about?" was that the telescope had taken the very first pictures of a planet outside the Earth's solar system. In fact, those images were captured by the European Southern Observatory's Very Large Telescope nearly two decades before. It was an error that harmed Bard's credibility and wiped $100bn off the market value of Google's parent company, Alphabet.

The incident highlighted the dangers of the high-pressure arms race around AI. The technology has the potential to improve accuracy, efficiency, and decision-making. However, while developers are expected to set clear boundaries for what they will do and to act responsibly when bringing technology to market, the temptation is to prioritise profit over reliability.

The genesis of the AI arms race can be traced back to 2019, when Microsoft chief executive Satya Nadella realised that the AI-powered auto-complete function in Google's Gmail was becoming so effective that his own company was at risk of being left behind in AI development.

This article is part of a collection of instant teaching case studies exploring business challenges. Read the piece, then consider the questions at the end.

About the author: David De Cremer is the Dunton Family Dean and a professor of management and technology at the D'Amore-McKim School of Business at Northeastern University in Boston. He is the author of The AI-Savvy Leader: 9 ways to take back control and make AI work (Harvard Business Review Press, 2024).

Technology start-up OpenAI, which needed external capital to secure additional computing resources, provided an opportunity. Nadella quietly made an initial $1bn investment. He believed that a collaboration between the two companies would allow Microsoft to commercialise OpenAI's future discoveries, making Google dance and eating into its dominant market share. He was soon proved right.

Microsoft's swift integration of OpenAI's ChatGPT into Bing marked a strategic coup, projecting an image of technological ascendancy over Google. In an effort not to be left behind, Google rushed to release its own chatbot, even though the company knew that Bard was not ready to compete with ChatGPT. Its haste-driven error cost Alphabet $100bn in market capitalisation.

Nowadays, it seems the prevailing modus operandi in the tech industry is a myopic fixation on pioneering ever-more-sophisticated AI software. Fear of missing out compels companies to rush unfinished products to market, disregarding inherent risks and costs. Meta, for example, recently confirmed its intention to double down in the AI arms race, despite rising costs and a nearly 12 per cent drop in its share price.

There appears to be a conspicuous absence of purpose-driven initiatives, with a focus on profit eclipsing societal welfare considerations. Tesla rushed to launch its AI-based Full Self-Driving (FSD) features, for example, with technology nowhere near the maturity needed for safe deployment on roads. FSD, combined with driver inattention, has been linked to hundreds of crashes and dozens of deaths.


As a result, Tesla has had to recall more than 2mn vehicles because of FSD/Autopilot issues. Despite identifying concerns about drivers' ability to reverse necessary software updates, regulators argue that Tesla did not make those suggested changes part of the recall.

Compounding the issue is the proliferation of sub-par, "so-so" technologies. For example, two new GenAI-based portable gadgets, the Rabbit R1 and the Humane AI Pin, triggered a backlash, accused of being unusable, overpriced, and not solving any meaningful problem.

Unfortunately, this trend will not slow: driven by a desire to capitalise as quickly as possible on incremental improvements to ChatGPT, some start-ups are rushing to launch so-so GenAI-based hardware devices. They appear to show little interest in whether a market exists; the goal seems to be winning any possible AI race, regardless of whether it adds value for end users. OpenAI has warned start-ups to stop pursuing an opportunistic, short-term strategy of purposeless innovation and noted that more powerful versions of ChatGPT are coming that can easily replicate any GPT-based apps the start-ups are launching.

In response, governments are preparing regulations to govern AI development and deployment. Some tech companies are taking on greater responsibility. A recent open letter signed by industry leaders endorsed the idea that: "It is our collective responsibility to make choices that maximise AI's benefits and mitigate the risks, for today and for the future generations."

As the tech industry grapples with the ethical and societal implications of AI proliferation, some consultants, customers and external groups are making the case for purpose-driven innovation. While regulators offer a semblance of oversight, progress will require industry stakeholders to take responsibility for fostering an ecosystem that gives greater priority to societal welfare.

Do tech companies bear responsibility for how businesses deploy artificial intelligence in possibly wrong and unethical ways?

What strategies can tech companies follow to keep purpose centre stage and see profit as an outcome of purpose?

Should bringing AI to market be more regulated? And if so, how?

How do you predict that the tendency to race to the bottom will play out in the next five to 10 years in businesses working with AI? Which factors are most important?

What risks for companies are associated with not joining the race to the bottom in AI development? How can these risks be managed by adopting a more purpose-driven strategy? What factors are important in that scenario?

See the article here:

Business school teaching case study: risks of the AI arms race - Financial Times