Archive for the ‘AI’ Category

Shell to use new AI technology in deep sea oil exploration – Reuters

NEW YORK, May 17 (Reuters) - Shell Plc (SHEL.L) will use AI-based technology from big-data analytics firm SparkCognition in its deep sea exploration and production to boost offshore oil output, the companies said on Wednesday.

SparkCognition's AI algorithms will process and analyze large amounts of seismic data in the hunt for new oil reservoirs by Shell, the largest oil producer in the U.S. Gulf of Mexico.

"We are committed to finding new and innovative ways to reinvent our exploration ways of working," Gabriel Guerra, Shell's vice president of innovation and performance, said in a statement.

The goal is to improve operational efficiency and speed, and increase production and success in exploration. The new process can shorten explorations to less than nine days from nine months, the companies said.

"Generative AI for seismic imaging can positively disrupt the exploration process and has broad and far-reaching implications," said Bruce Porter, chief science officer for Austin, Texas-based SparkCognition.

The technology would generate subsurface images using fewer seismic data scans than usual, helping with deep sea preservation, the companies said. Seismic technology sends sound waves to explore subsurface areas.

Fewer seismic surveys would accelerate the exploration workflow and save costs in high-performance computing, they added.

Reporting by Stephanie Kelly; Editing by Richard Chang


Read more from the original source:

Shell to use new AI technology in deep sea oil exploration - Reuters

AI presents political peril for 2024 with threat to mislead voters – The Associated Press

WASHINGTON (AP) - Computer engineers and tech-inclined political scientists have warned for years that cheap, powerful artificial intelligence tools would soon allow anyone to create fake images, video and audio realistic enough to fool voters and perhaps sway an election.

The synthetic images that emerged were often crude, unconvincing and costly to produce, especially when other kinds of misinformation were so inexpensive and easy to spread on social media. The threat posed by AI and so-called deepfakes always seemed a year or two away.

No more.

Sophisticated generative AI tools can now create cloned human voices and hyper-realistic images, videos and audio in seconds, at minimal cost. When strapped to powerful social media algorithms, this fake and digitally created content can spread far and fast and target highly specific audiences, potentially taking campaign dirty tricks to a new low.

The implications for the 2024 campaigns and elections are as large as they are troubling: Generative AI can not only rapidly produce targeted campaign emails, texts or videos, it also could be used to mislead voters, impersonate candidates and undermine elections on a scale and at a speed not yet seen.

"We're not prepared for this," warned A.J. Nash, vice president of intelligence at the cybersecurity firm ZeroFox. "To me, the big leap forward is the audio and video capabilities that have emerged. When you can do that on a large scale, and distribute it on social platforms, well, it's going to have a major impact."

AI experts can quickly rattle off a number of alarming scenarios in which generative AI is used to create synthetic media for the purposes of confusing voters, slandering a candidate or even inciting violence.

Here are a few: automated robocall messages, in a candidate's voice, instructing voters to cast ballots on the wrong date; audio recordings of a candidate supposedly confessing to a crime or expressing racist views; video footage showing someone giving a speech or interview they never gave; fake images designed to look like local news reports, falsely claiming a candidate dropped out of the race.

"What if Elon Musk personally calls you and tells you to vote for a certain candidate?" said Oren Etzioni, the founding CEO of the Allen Institute for AI, who stepped down last year to start the nonprofit AI2. "A lot of people would listen. But it's not him."

Former President Donald Trump, who is running in 2024, has shared AI-generated content with his followers on social media. A manipulated video of CNN host Anderson Cooper that Trump shared on his Truth Social platform on Friday, which distorted Cooper's reaction to the CNN town hall this past week with Trump, was created using an AI voice-cloning tool.

A dystopian campaign ad released last month by the Republican National Committee offers another glimpse of this digitally manipulated future. The online ad, which came after President Joe Biden announced his reelection campaign, starts with a strange, slightly warped image of Biden and the text "What if the weakest president we've ever had was re-elected?"

A series of AI-generated images follows: Taiwan under attack; boarded-up storefronts in the United States as the economy crumbles; soldiers and armored military vehicles patrolling local streets as tattooed criminals and waves of immigrants create panic.

"An AI-generated look into the country's possible future if Joe Biden is re-elected in 2024," reads the ad's description from the RNC.

The RNC acknowledged its use of AI, but others, including nefarious political campaigns and foreign adversaries, will not, said Petko Stoyanov, global chief technology officer at Forcepoint, a cybersecurity company based in Austin, Texas. Stoyanov predicted that groups looking to meddle with U.S. democracy will employ AI and synthetic media as a way to erode trust.

"What happens if an international entity - a cybercriminal or a nation state - impersonates someone? What is the impact? Do we have any recourse?" Stoyanov said. "We're going to see a lot more misinformation from international sources."

AI-generated political disinformation already has gone viral online ahead of the 2024 election, from a doctored video of Biden appearing to give a speech attacking transgender people to AI-generated images of children supposedly learning satanism in libraries.

AI images appearing to show Trump's mug shot also fooled some social media users, even though the former president didn't take one when he was booked and arraigned in a Manhattan criminal court for falsifying business records. Other AI-generated images showed Trump resisting arrest, though their creator was quick to acknowledge their origin.

Legislation that would require candidates to label campaign advertisements created with AI has been introduced in the House by Rep. Yvette Clarke, D-N.Y., who has also sponsored legislation that would require anyone creating synthetic images to add a watermark indicating the fact.

Some states have offered their own proposals for addressing concerns about deepfakes.

Clarke said her greatest fear is that generative AI could be used before the 2024 election to create a video or audio that incites violence and turns Americans against each other.

"It's important that we keep up with the technology," Clarke told The Associated Press. "We've got to set up some guardrails. People can be deceived, and it only takes a split second. People are busy with their lives and they don't have the time to check every piece of information. AI being weaponized, in a political season, it could be extremely disruptive."

Earlier this month, a trade association for political consultants in Washington condemned the use of deepfakes in political advertising, calling them "a deception" with "no place in legitimate, ethical campaigns."

Other forms of artificial intelligence have for years been a feature of political campaigning, using data and algorithms to automate tasks such as targeting voters on social media or tracking down donors. Campaign strategists and tech entrepreneurs hope the most recent innovations will offer some positives in 2024, too.

Mike Nellis, CEO of the progressive digital agency Authentic, said he uses ChatGPT every single day and encourages his staff to use it, too, as long as any content drafted with the tool is reviewed by human eyes afterward.

Nellis' newest project, in partnership with Higher Ground Labs, is an AI tool called Quiller. It will write, send and evaluate the effectiveness of fundraising emails - all typically tedious tasks on campaigns.

"The idea is every Democratic strategist, every Democratic candidate will have a copilot in their pocket," he said.

___

Swenson reported from New York.

___

The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP's democracy initiative here. The AP is solely responsible for all content.

___

Follow the AP's coverage of misinformation at https://apnews.com/hub/misinformation and coverage of artificial intelligence at https://apnews.com/hub/artificial-intelligence

Read the original:

AI presents political peril for 2024 with threat to mislead voters - The Associated Press

Goldman Sachs says A.I. could push S&P 500 profits up by 30% in the next decade – CNBC


Goldman Sachs is bullish about artificial intelligence and believes the technology could help drive S&P 500 profits in the next 10 years.

"Over the next 10 years, AI could increase productivity by 1.5% per year. And that could increase S&P500 profits by 30% or more over the next decade," Goldman's senior strategist Ben Snider told CNBC Thursday.

The emergence of ChatGPT, the chatbot developed by OpenAI, has spurred a firestorm of interest in AI and the possible disruptions to the daily lives of many. It has also injected fresh excitement among investors eager for a fresh driver of profit growth at a time when rising borrowing costs and supply chain problems have tempered optimism.

"A lot of the favorable factors that led to that expansion (of S&P 500 earnings) seem to be reversing," Snider told CNBC on "Asia Squawk Box."

"But the real source of optimism now is productivity enhancements through artificial intelligence."

"It's clear to most investors that the immediate winners are in the technology sector," Snider added. "The real question for investors is who are going to be winners down the road."

He pointed out that "in 1999 or 2000 during the tech bubble, it would be very hard to envision Facebook or Uber changing the way we live our lives."

Snider recommended that investors spread their U.S. equity investments across cyclical and defensive sectors, touting the energy and health-care sectors for their attractive valuations.

In the shorter term, he said he expects the U.S. Federal Reserve has completed most of its monetary policy tightening.

"The question is: In which ways will that continue to affect the economy moving forward?" Snider said. "One sign of concern in the recent earnings season is that S&P 500 companies are starting to pull back a bit on corporate spending."

Elevated interest rates could be one reason, he said.

"If interest rates are high, as a company, you might be a little more averse to issuing debt and therefore you might pull back on your spending. And indeed if we look at S&P 500 buybacks, they were down 20% year-over-year in the first quarter of this year - that is one sign perhaps we haven't seen all the effects of this tightening cycle."

Here is the original post:

Goldman Sachs says A.I. could push S&P 500 profits up by 30% in the next decade - CNBC

AI creator on the risks, opportunities and how it may make humans ‘boring’ – BBC

13 May 2023


AI boss: Worst case scenario it could control humanity

"Humans are a bit boring - it will be like, goodbye!" That's the personal prediction - that artificial intelligence (AI) will supplant humans in many roles - from one of the most important people you've probably never heard of.

Emad Mostaque is the British founder of the tech firm Stability AI. It popularised Stable Diffusion, a tool that uses AI to make images from simple text instructions by analysing images found online.

AI enables a computer to think or act more like a human. It includes what's called machine learning, when computers can learn what to do without being given exact instructions by a human sitting at a keyboard tapping in commands. Last month, 1,000 experts issued a dramatic warning to press pause on its development, citing potential risks and saying the race to develop AI systems is out of control.

In an interview we'll show in full on Sunday, tech founder Mostaque questions what will happen "if we have agents more capable than us that we cannot control, that are going across the internet and they achieve a level of automation; what does that mean?

"The worst case scenario is that it proliferates and basically it controls humanity."

That sounds terrifying, but he is not alone in pointing out the risk, that if we create computers smarter than ourselves we just can't be sure what will happen next.

Mostaque believes governments could soon be shocked into taking action by an event that makes the risks suddenly real. He points to the moment Tom Hanks contracted Covid-19 and millions sat up and paid attention.

When a moment like that arrives, governments will conclude "we need policy now", the 40-year-old says.

There's been a spike in concern, for example, after a Republican attack advert on Joe Biden was created using fake computer-generated images.

When there's a risk to information that voters can trust, that's something governments have to respond to, says Mostaque.

Despite his concerns, Mostaque says that the potential benefits of AI for almost every part of our lives could be huge. Yet he concedes that the effect on jobs could be painful, at least at the start.

Mostaque says he believes AI "will be a bigger economic impact than the pandemic", adding that "it's up to us to decide which direction" this all goes in.


Some jobs will undoubtedly disappear: the bank Goldman Sachs suggested an almost incomprehensible 300m roles could be lost or diminished by the advancing technology.

While no one wants to be replaced by a robot, Mostaque's hope is that better jobs could be created because "productivity increases will balance out" and humans can concentrate on the things that make us human, and let machines do more of the rest. He agrees with the UK's former chief scientific advisor, Sir Patrick Vallance, that the advance of AI and its impacts could prove even bigger than the industrial revolution.

Mostaque is an unassuming mathematician and the founder of a company he started only in 2020 that has already been valued at $1bn; with more cash flooding in, including from Hollywood star Ashton Kutcher, it is likely to soon be worth very much more. Some speculation has put the value as high as $4bn.

Unlike some of his competitors he is determined his technology will remain open source - in other words anyone can look at the code, share it, and use it. In his view, that's what should give the public a level of confidence in what's going on.

"I think there shouldn't have to be a need for trust," he says.

"If you build open models and you do it in the open, you should be criticised if you do things wrong and hopefully lauded if you do some things right."

But his business also raises profound questions about ownership, and what's real. There's legal action underway against the company by the photo agency Getty Images, which claims the rights to the images it sells have been infringed.


In response, Mostaque says: "What if you have a robot that's walking around and looking at things, do you have to close its eyes if it sees anything?"

That's hardly likely to be the end of that conversation.

The entrepreneur is convinced that the scale of what's coming is enormous. He reckons that in 10 years time, his company and fellow AI leaders, ChatGPT and DeepMind, will even be bigger than Google and Facebook. Predictions about technology are as tricky as predictions about politics - educated guesses that could turn out to be totally wrong. But what is clear is that a public conversation about the risks and realities of AI is now underway. We might be on the cusp of sweeping changes too big for any one company, country or politician to manage.

The first steam train puffed along the tracks in Darlington more than 50 years after the steam engine was patented by James Watt. This time we're unlikely to have anything like as long to get used to these new ideas, and it's unlikely to be boring!

You can watch much more of our conversation with Emad Mostaque on tomorrow's Sunday with Laura Kuenssberg live on BBC One or here on iPlayer.

Follow this link:

AI creator on the risks, opportunities and how it may make humans 'boring' - BBC

Ashton Kutcher raised a $243 million investment fund in just five weeks that will focus on the next absolute transformation in tech – Fortune

Ashton Kutcher, the Hollywood actor and venture capital investor, raised the money for his firms new AI fund quickly.

"We pulled the fund together in about five weeks," Kutcher said Thursday in a Bloomberg Television interview. "We have a base of LPs that have been with us for years on end."

Kutcher's new fund plans to put $243 million toward artificial intelligence startups, the tech industry's current hottest category. The portfolio already includes investments in AI startup darlings OpenAI, Stability AI Ltd. and Anthropic.

With the new fund, assets under management at Los Angeles-based Sound Ventures LLC are about $1 billion, the firm said. Kutcher said the firm had surveyed its portfolio companies to see how they were embracing AI, and that the sector would mark the next absolute transformation for technology.

"We've been investing in AI for the last seven years," Kutcher said. "But when we saw GPT be launched, we realized that this was an absolute breakthrough."

He acknowledged the so-called hype cycle that washes across technology investing, most recently with the rush into crypto, a field where Sound Ventures also has been active. The blockchain technology at the heart of cryptocurrency has value in a number of applications, he said, while tokenization in many areas went too far.

Regulation of AI "is needed badly," he said, just as it is in the crypto industry.

Here is the original post:

Ashton Kutcher raised a $243 million investment fund in just five weeks that will focus on the next absolute transformation in tech - Fortune