Archive for the ‘Ai’ Category

ServiceNow and NVIDIA Announce Partnership to Build Generative … – NVIDIA Blog

Built on ServiceNow Platform With NVIDIA AI Software and DGX Infrastructure, Custom Large Language Models to Bring Intelligent Workflow Automation to Enterprises

Knowledge 2023 - ServiceNow and NVIDIA today announced a partnership to develop powerful, enterprise-grade generative AI capabilities that can transform business processes with faster, more intelligent workflow automation.

Using NVIDIA software, services and accelerated infrastructure, ServiceNow is developing custom large language models trained on data specifically for its ServiceNow Platform, the intelligent platform for end-to-end digital transformation.

This will expand ServiceNow's already extensive AI functionality with new uses for generative AI across the enterprise, including IT departments, customer service teams, employees and developers, to strengthen workflow automation and rapidly increase productivity.

ServiceNow is also helping NVIDIA streamline its IT operations with these generative AI tools, using NVIDIA data to customize NVIDIA NeMo foundation models running on hybrid-cloud infrastructure consisting of NVIDIA DGX Cloud and on-premises NVIDIA DGX SuperPOD AI supercomputers.

"IT is the nervous system of every modern enterprise in every industry," said Jensen Huang, founder and CEO of NVIDIA. "Our collaboration to build super-specialized generative AI for enterprises will boost the capability and productivity of IT professionals worldwide using the ServiceNow platform."

"As adoption of generative AI continues to accelerate, organizations are turning to trusted vendors with battle-tested, secure AI capabilities to boost productivity, gain a competitive edge, and keep data and IP secure," said CJ Desai, president and chief operating officer of ServiceNow. "Together, NVIDIA and ServiceNow will help drive new levels of automation to fuel productivity and maximize business impact."

Harnessing Generative AI to Reshape Digital Business

ServiceNow and NVIDIA are exploring a number of generative AI use cases to simplify and improve productivity across the enterprise by providing high accuracy and higher value in IT.

This includes developing intelligent virtual assistants and agents to help quickly resolve a broad range of user questions and support requests with purpose-built AI chatbots that use large language models and focus on defined IT tasks.

To simplify the user experience, enterprises can customize chatbots with proprietary data to create a central generative AI resource that stays on topic while resolving many different requests.

These generative AI use cases are also applicable to customer service agents, allowing for case prioritization with greater accuracy, saving time and improving outcomes. Customer service teams can use generative AI for automatic issue resolution, knowledge-base article generation based on customer case summaries, and chat summarization for faster hand-off, resolution and wrap-up.

In addition, generative AI can improve the employee experience by helping identify growth opportunities, for example by delivering customized learning and development recommendations, such as courses and mentors, based on natural language queries and information from an employee's profile.

Full-Stack NVIDIA Generative AI Software and Infrastructure Fuel Rapid Development

In its generative AI research and development, ServiceNow is using NVIDIA AI Foundations cloud services and the NVIDIA AI Enterprise software platform, which includes the NVIDIA NeMo framework.

Included in NeMo are prompt tuning, supervised fine-tuning and knowledge retrieval tools to help developers build, customize and deploy language models for enterprise use cases. NeMo Guardrails software is also included and enables developers to easily add topical, safety and security features for AI chatbots.
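As a toy illustration of the topical-guardrail idea described above (this is plain Python, not the NeMo Guardrails API, and every name in it is hypothetical), a chatbot wrapper can deflect off-topic requests before they ever reach the language model:

```python
# Toy sketch of a topical guardrail: keep an IT-support chatbot focused on
# defined tasks by screening requests before they reach the language model.
# Illustration of the concept only, not the NeMo Guardrails API.

ALLOWED_TOPICS = {"password", "vpn", "laptop", "email", "software", "printer"}

def on_topic(request: str) -> bool:
    """Return True if the request mentions a supported IT topic."""
    words = set(request.lower().split())
    return not ALLOWED_TOPICS.isdisjoint(words)

def answer(request: str) -> str:
    """Deflect off-topic requests; otherwise hand the request onward."""
    if not on_topic(request):
        return "I can only help with IT support questions."
    # A real system would call a fine-tuned LLM here.
    return f"Routing IT request: {request!r}"

print(answer("My vpn keeps disconnecting"))  # handled
print(answer("What is a good pizza place?"))  # deflected
```

Real guardrail tooling works on semantics rather than keywords, but the control flow, screening before generation, is the same shape.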

More here:

ServiceNow and NVIDIA Announce Partnership to Build Generative ... - NVIDIA Blog

Artificial intelligence: Revealed – how many firms are already using AI… and how workers feel about it – Sky News

By Sarah Taaffe-Maguire, Business reporter @taaffems

Friday 19 May 2023 04:17, UK

Employees are more fearful and distrustful of artificial intelligence in the workplace than their employers - as businesses see cost savings as the main benefit of the technology.

A UK-wide survey of attitudes and preparedness for AI, by recruitment giant Hays, found close to a third of employees say they don't have the right skills to make best use of the technology - but firms have already begun adopting it.

Some 56% of employers think AI should be embraced in the workplace, while just 8% said it should be feared.

Read more: AI to hit workplace 'like a freight train'

But among employees, just 49% believe artificial intelligence should be adopted - with 13% concerned about its impact.

Currently, 21% of organisations say they are already using AI tools like ChatGPT - and 27% are investing in training for staff to upskill in AI tools and technologies.

The main benefits of AI - identified by employers - were cost savings, process efficiencies and improved productivity.

At the same time, 55% of workers say their employer isn't helping them prepare for the use of AI at work.



The survey indicates that more companies will adopt AI - and just 18% say they intend to ban it, with 3% already prohibiting its use.

The majority - 66% - say they will allow the technology in their workplace but will monitor how it is used.

The greatest take-up of AI, according to the survey, was in marketing.

Over a third (37%) of marketing professionals say they have used an AI tool in their current role. They were followed by 30% of professionals working in tech, 23% of professionals working in architecture and 17% of those working in sales.

For businesses not using AI, the top reason listed was a lack of awareness or understanding of the benefits.

The survey results follow the announcement by BT that 55,000 jobs are to be cut before 2030, with AI replacing 10,000 roles.


BT has revealed plans to significantly reduce the number of people working for the telecoms group as part of efforts to cut costs and bolster profitability, with AI due to replace thousands of roles. The company said it hoped the roles would be lost through natural attrition rather than redundancy.

The telecoms company added it would use AI to deliver better customer service and capture other business opportunities.

Unions have also expressed concern for workers' rights with the expansion of AI into the workplace and have called for tighter regulation.

Read the rest here:

Artificial intelligence: Revealed - how many firms are already using AI... and how workers feel about it - Sky News

Wisconsin Police Department Warns of New Artificial Intelligence Phone Scam – NBC Chicago

A police department in southern Wisconsin is warning residents about a new scam in which swindlers clone a relative's voice in an attempt to appear legitimate.

In a Facebook post on May 8, the Beloit Police Department said it received a report from a resident who provided money to someone who "sounded like their relative." While police aren't able to say for certain if the scam used artificial intelligence, they did say that "we want our community to be aware that this technology is out there."

AI scams have recently increased, so much so that the Senate Special Committee on Aging sent a letter to the Federal Trade Commission on Friday, requesting information on the agency's efforts to protect older Americans from such scams, according to a news release.

These scams are easier to pull off than one might think - all scammers need is a short audio clip of your loved one's voice and a voice-cloning program.

Oftentimes scam victims may receive calls from people claiming to be relatives who have been kidnapped, landed in jail or have been involved in an accident and are in desperate need of money.

So, how do you know if it's actually your family member or a scammer who has cloned their voice?

First, call the person who supposedly contacted you and verify the story, according to the Federal Trade Commission. Make sure to use a phone number you know is theirs. If you can't reach your loved one, try to get in touch with them through another relative or friend.

Scammers often ask victims to wire money, send cryptocurrency, or buy gift cards and give them the card numbers and PINs. So, if any of those requests are made, you may be dealing with a scam.

To help prevent AI scams, check privacy settings on social media accounts and double check which information you publicize on those accounts. The more information that is publicly available, the more scammers can use to convince someone they are legitimate.

Read more:

Wisconsin Police Department Warns of New Artificial Intelligence Phone Scam - NBC Chicago

Shell to use new AI technology in deep sea oil exploration – Reuters

NEW YORK, May 17 (Reuters) - Shell Plc (SHEL.L) will use AI-based technology from big-data analytics firm SparkCognition in its deep sea exploration and production to boost offshore oil output, the companies said on Wednesday.

SparkCognition's AI algorithms will process and analyze large amounts of seismic data in the hunt for new oil reservoirs by Shell, the largest oil producer in the U.S. Gulf of Mexico.

"We are committed to finding new and innovative ways to reinvent our exploration ways of working," Gabriel Guerra, Shell's vice president of innovation and performance, said in a statement.

The goal is to improve operational efficiency and speed, and increase production and success in exploration. The new process can shorten explorations to less than nine days from nine months, the companies said.

"Generative AI for seismic imaging can positively disrupt the exploration process and has broad and far-reaching implications," said Bruce Porter, chief science officer for Austin, Texas-based SparkCognition.

The technology would generate subsurface images using fewer seismic data scans than usual, helping with deep sea preservation, the companies said. Seismic technology sends sound waves to explore subsurface areas.

Fewer seismic surveys accelerate exploration workflow and would save costs in high-performance computing, they added.
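As a rough analogy for the fewer-scans idea (not SparkCognition's actual method, and with all names hypothetical), producing a dense subsurface picture from sparse scans resembles reconstructing a signal from fewer samples. The sketch below keeps one in eight samples of a stand-in trace and fills the gaps by linear interpolation:

```python
# Toy analogy for "fewer scans": sample a signal sparsely, then reconstruct
# the missing points. Real generative seismic imaging is far more complex;
# this only illustrates the sparse-data-to-dense-image idea.
import math

def dense_trace(n: int) -> list[float]:
    """A stand-in 'subsurface' signal sampled at n points."""
    return [math.sin(2 * math.pi * i / n) for i in range(n)]

def reconstruct(sparse: list[float], factor: int) -> list[float]:
    """Linearly interpolate `factor` points between each pair of samples."""
    out = []
    for a, b in zip(sparse, sparse[1:]):
        for k in range(factor):
            out.append(a + (b - a) * k / factor)
    out.append(sparse[-1])
    return out

full = dense_trace(64)
sparse = full[::8]              # keep 1 in 8 "scans"
approx = reconstruct(sparse, 8)
err = max(abs(a - b) for a, b in zip(approx, full))
print(f"max reconstruction error from 1/8 of the samples: {err:.3f}")
```

A learned model replaces the interpolation step with a far richer prior over what the subsurface can look like, which is what lets it get away with fewer surveys.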

Reporting by Stephanie Kelly; Editing by Richard Chang


Read more from the original source:

Shell to use new AI technology in deep sea oil exploration - Reuters

AI presents political peril for 2024 with threat to mislead voters – The Associated Press

WASHINGTON (AP) - Computer engineers and tech-inclined political scientists have warned for years that cheap, powerful artificial intelligence tools would soon allow anyone to create fake images, video and audio realistic enough to fool voters and perhaps sway an election.

The synthetic images that emerged were often crude, unconvincing and costly to produce, especially when other kinds of misinformation were so inexpensive and easy to spread on social media. The threat posed by AI and so-called deepfakes always seemed a year or two away.

No more.

Sophisticated generative AI tools can now create cloned human voices and hyper-realistic images, videos and audio in seconds, at minimal cost. When strapped to powerful social media algorithms, this fake and digitally created content can spread far and fast and target highly specific audiences, potentially taking campaign dirty tricks to a new low.

The implications for the 2024 campaigns and elections are as large as they are troubling: Generative AI can not only rapidly produce targeted campaign emails, texts or videos, it also could be used to mislead voters, impersonate candidates and undermine elections on a scale and at a speed not yet seen.

"We're not prepared for this," warned A.J. Nash, vice president of intelligence at the cybersecurity firm ZeroFox. "To me, the big leap forward is the audio and video capabilities that have emerged. When you can do that on a large scale, and distribute it on social platforms, well, it's going to have a major impact."

AI experts can quickly rattle off a number of alarming scenarios in which generative AI is used to create synthetic media for the purposes of confusing voters, slandering a candidate or even inciting violence.

Here are a few: automated robocall messages, in a candidate's voice, instructing voters to cast ballots on the wrong date; audio recordings of a candidate supposedly confessing to a crime or expressing racist views; video footage showing someone giving a speech or interview they never gave; fake images designed to look like local news reports, falsely claiming a candidate dropped out of the race.

"What if Elon Musk personally calls you and tells you to vote for a certain candidate?" said Oren Etzioni, the founding CEO of the Allen Institute for AI, who stepped down last year to start the nonprofit AI2. "A lot of people would listen. But it's not him."

Former President Donald Trump, who is running in 2024, has shared AI-generated content with his followers on social media. A manipulated video of CNN host Anderson Cooper that Trump shared on his Truth Social platform on Friday, which distorted Cooper's reaction to the CNN town hall this past week with Trump, was created using an AI voice-cloning tool.

A dystopian campaign ad released last month by the Republican National Committee offers another glimpse of this digitally manipulated future. The online ad, which came after President Joe Biden announced his reelection campaign, starts with a strange, slightly warped image of Biden and the text "What if the weakest president we've ever had was re-elected?"

A series of AI-generated images follows: Taiwan under attack; boarded up storefronts in the United States as the economy crumbles; soldiers and armored military vehicles patrolling local streets as tattooed criminals and waves of immigrants create panic.

"An AI-generated look into the country's possible future if Joe Biden is re-elected in 2024," reads the ad's description from the RNC.

The RNC acknowledged its use of AI, but others, including nefarious political campaigns and foreign adversaries, will not, said Petko Stoyanov, global chief technology officer at Forcepoint, a cybersecurity company based in Austin, Texas. Stoyanov predicted that groups looking to meddle with U.S. democracy will employ AI and synthetic media as a way to erode trust.

"What happens if an international entity, a cybercriminal or a nation state, impersonates someone? What is the impact? Do we have any recourse?" Stoyanov said. "We're going to see a lot more misinformation from international sources."

AI-generated political disinformation already has gone viral online ahead of the 2024 election, from a doctored video of Biden appearing to give a speech attacking transgender people to AI-generated images of children supposedly learning satanism in libraries.

AI images appearing to show Trump's mug shot also fooled some social media users even though the former president didn't take one when he was booked and arraigned in a Manhattan criminal court for falsifying business records. Other AI-generated images showed Trump resisting arrest, though their creator was quick to acknowledge their origin.

Legislation that would require candidates to label campaign advertisements created with AI has been introduced in the House by Rep. Yvette Clarke, D-N.Y., who has also sponsored legislation that would require anyone creating synthetic images to add a watermark indicating the fact.

Some states have offered their own proposals for addressing concerns about deepfakes.

Clarke said her greatest fear is that generative AI could be used before the 2024 election to create a video or audio that incites violence and turns Americans against each other.

"It's important that we keep up with the technology," Clarke told The Associated Press. "We've got to set up some guardrails. People can be deceived, and it only takes a split second. People are busy with their lives and they don't have the time to check every piece of information. AI being weaponized, in a political season, could be extremely disruptive."

Earlier this month, a trade association for political consultants in Washington condemned the use of deepfakes in political advertising, calling them a deception with no place in legitimate, ethical campaigns.

Other forms of artificial intelligence have for years been a feature of political campaigning, using data and algorithms to automate tasks such as targeting voters on social media or tracking down donors. Campaign strategists and tech entrepreneurs hope the most recent innovations will offer some positives in 2024, too.

Mike Nellis, CEO of the progressive digital agency Authentic, said he uses ChatGPT every single day and encourages his staff to use it, too, as long as any content drafted with the tool is reviewed by human eyes afterward.

Nellis' newest project, in partnership with Higher Ground Labs, is an AI tool called Quiller. It will write, send and evaluate the effectiveness of fundraising emails - all typically tedious tasks on campaigns.

"The idea is every Democratic strategist, every Democratic candidate will have a copilot in their pocket," he said.

___

Swenson reported from New York.

___

The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP's democracy initiative here. The AP is solely responsible for all content.

___

Follow the AP's coverage of misinformation at https://apnews.com/hub/misinformation and coverage of artificial intelligence at https://apnews.com/hub/artificial-intelligence

Read the original:

AI presents political peril for 2024 with threat to mislead voters - The Associated Press