Archive for the ‘AI’ Category

Another Side of the A.I. Boom: Detecting What A.I. Makes – The New York Times

Andrey Doronichev was alarmed last year when he saw a video on social media that appeared to show the president of Ukraine surrendering to Russia.

The video was quickly debunked as a synthetically generated deepfake, but to Mr. Doronichev, it was a worrying portent. This year, his fears crept closer to reality, as companies began competing to enhance and release artificial intelligence technology despite the havoc it could cause.

Generative A.I. is now available to anyone, and it's increasingly capable of fooling people with text, audio, images and videos that seem to be conceived and captured by humans. The risk of societal gullibility has set off concerns about disinformation, job loss, discrimination, privacy and broad dystopia.

For entrepreneurs like Mr. Doronichev, it has also become a business opportunity. More than a dozen companies now offer tools to identify whether something was made with artificial intelligence, with names like Sensity AI (deepfake detection), Fictitious.AI (plagiarism detection) and Originality.AI (also plagiarism).

Mr. Doronichev, a Russian native, founded a company in San Francisco, Optic, to help identify synthetic or spoofed material and to serve as, in his words, "an airport X-ray machine for digital content."

In March, it unveiled a website where users can check images to see whether they are actual photographs or were made by artificial intelligence. It is working on other services to verify video and audio.

"Content authenticity is going to become a major problem for society as a whole," said Mr. Doronichev, who was an investor in a face-swapping app called Reface. "We're entering the age of cheap fakes." Since it doesn't cost much to produce fake content, he said, it can be done at scale.

The overall generative A.I. market is expected to exceed $109 billion by 2030, growing 35.6 percent a year on average until then, according to the market research firm Grand View Research. Businesses focused on detecting the technology are a growing part of the industry.
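As a rough sanity check on that projection, compound growth makes the implied starting point easy to back out. The sketch below assumes a 2023 base year and simple annual compounding; both assumptions are ours, not Grand View Research's:

```python
# Back-of-the-envelope check of the projection: if the market grows
# 35.6% a year and reaches ~$109B by 2030, what base-year size does
# that imply? (Illustrative only; the 2023 base year and simple
# compounding are our assumptions.)

target_2030 = 109e9   # projected market size in dollars
cagr = 0.356          # compound annual growth rate
years = 2030 - 2023   # assumed seven-year horizon from 2023

implied_2023 = target_2030 / (1 + cagr) ** years
print(f"Implied 2023 market size: ${implied_2023 / 1e9:.1f}B")
# -> roughly $13B, i.e. the projection implies about an 8x expansion
```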

Months after being created by a Princeton University student, GPTZero claims that more than a million people have used its program to suss out computer-generated text. Reality Defender was one of 414 companies chosen from 17,000 applications to be funded by the start-up accelerator Y Combinator this winter.

Copyleaks raised $7.75 million last year in part to expand its anti-plagiarism services for schools and universities to detect artificial intelligence in students' work. Sentinel, whose founders specialized in cybersecurity and information warfare for the British Royal Navy and the North Atlantic Treaty Organization, closed a $1.5 million seed round in 2020 that was backed in part by one of Skype's founding engineers, with the aim of helping protect democracies against deepfakes and other malicious synthetic media.

Major tech companies are also involved: Intel's FakeCatcher claims to be able to identify deepfake videos with 96 percent accuracy, in part by analyzing pixels for subtle signs of blood flow in human faces.
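Intel has not published FakeCatcher as a drop-in recipe, but the blood-flow idea comes from remote photoplethysmography: real skin shows a faint periodic color shift at the heart rate that synthetic faces tend to lack. Below is a toy sketch of that cue only, with synthetic frame data and a band threshold of our own choosing; it is not Intel's algorithm:

```python
import numpy as np

# Toy illustration of the blood-flow ("remote photoplethysmography")
# cue: a real face's skin region pulses faintly at the heart rate.
# NOT Intel's FakeCatcher; frame source and band are our assumptions.

FPS = 30.0  # assumed camera frame rate

def pulse_band_ratio(green_means: np.ndarray) -> float:
    """Share of signal energy in the 0.7-4 Hz (42-240 bpm) heart-rate band.

    `green_means` holds the mean green-channel value of a face/skin
    region in each frame; the mean is removed before the FFT.
    """
    signal = green_means - green_means.mean()
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FPS)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return spectrum[band].sum() / (spectrum.sum() + 1e-12)

# Synthetic demo: a "real" trace with a 1.2 Hz pulse vs. pure noise.
rng = np.random.default_rng(0)
t = np.arange(300) / FPS
real = 0.5 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 0.5, t.size)
fake = rng.normal(0, 0.5, t.size)
print(f"real-ish: {pulse_band_ratio(real):.2f}, fake-ish: {pulse_band_ratio(fake):.2f}")
# The real trace concentrates far more energy in the heart-rate band.
```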

Within the federal government, the Defense Advanced Research Projects Agency plans to spend nearly $30 million this year to run Semantic Forensics, a program that develops algorithms to automatically detect deepfakes and determine whether they are malicious.

Even OpenAI, which turbocharged the A.I. boom when it released its ChatGPT tool late last year, is working on detection services. The company, based in San Francisco, debuted a free tool in January to help distinguish between text composed by a human and text written by artificial intelligence.

OpenAI stressed that while the tool was an improvement on past iterations, it was still not fully reliable. The tool correctly identified 26 percent of artificially generated text but falsely flagged 9 percent of text from humans as computer generated.
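Those two figures are easy to misread. A quick Bayes calculation shows what they imply in practice, under the purely illustrative assumption that one in ten submitted texts is AI-written:

```python
# What do "26% detected, 9% false alarms" mean in practice? The true
# and false positive rates are OpenAI's reported figures; the 10%
# base rate is our illustrative assumption.

p_ai = 0.10   # assumed share of texts that are AI-written
tpr = 0.26    # true positive rate: AI text correctly flagged
fpr = 0.09    # false positive rate: human text wrongly flagged

p_flagged = tpr * p_ai + fpr * (1 - p_ai)
p_ai_given_flag = tpr * p_ai / p_flagged
print(f"P(actually AI | flagged) = {p_ai_given_flag:.0%}")
# -> about 24%: under these assumptions, roughly three out of four
#    flags would land on human-written text.
```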

The OpenAI tool is burdened with common flaws in detection programs: It struggles with short texts and writing that is not in English. In educational settings, plagiarism-detection tools such as Turnitin have been accused of inaccurately classifying essays written by students as being generated by chatbots.

Detection tools inherently lag behind the generative technology they are trying to detect. By the time a defense system is able to recognize the work of a new chatbot or image generator, like Google Bard or Midjourney, developers are already coming up with a new iteration that can evade that defense. The situation has been described as an arms race or a virus-antivirus relationship where one begets the other, over and over.

"When Midjourney releases Midjourney 5, my starter gun goes off, and I start working to catch up and while I'm doing that, they're working on Midjourney 6," said Hany Farid, a professor of computer science at the University of California, Berkeley, who specializes in digital forensics and is also involved in the A.I. detection industry. "It's an inherently adversarial game where as I work on the detector, somebody is building a better mousetrap, a better synthesizer."

Despite the constant catch-up, many companies have seen demand for A.I. detection from schools and educators, said Joshua Tucker, a professor of politics at New York University and a co-director of its Center for Social Media and Politics. He questioned whether a similar market would emerge ahead of the 2024 election.

"Will we see a sort of parallel wing of these companies developing to help protect political candidates so they can know when they're being sort of targeted by these kinds of things?" he said.

Experts said that synthetically generated video was still fairly clunky and easy to identify, but that audio cloning and image-crafting were both highly advanced. Separating real from fake will require digital forensics tactics such as reverse image searches and IP address tracking.
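Reverse image search is a good example of such a tactic: many systems match perceptual fingerprints that survive resizing, recompression and small edits. Below is a minimal "average hash" sketch of that idea; it is illustrative only, real services use far richer features, and the file names are hypothetical:

```python
from PIL import Image
import numpy as np

# Minimal "average hash": a 64-bit fingerprint that is robust to
# resizing and recompression, the kind of signal a reverse image
# search can match on. Illustrative sketch, not any engine's algorithm.

def average_hash(path: str, size: int = 8) -> int:
    """Hash with one bit per pixel: 1 where brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = np.asarray(img, dtype=np.float32)
    bits = (pixels > pixels.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(h1: int, h2: int) -> int:
    """Number of differing bits; small distances suggest the same source."""
    return bin(h1 ^ h2).count("1")

# Hypothetical usage, comparing a circulating copy with a suspected original:
# d = hamming(average_hash("viral_copy.jpg"), average_hash("original.jpg"))
# A distance of under ~10 of the 64 bits usually indicates a match.
```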

Available detection programs are being tested with examples that are "very different than going into the wild, where images that have been making the rounds and have gotten modified and cropped and downsized and transcoded and annotated and God knows what else has happened to them," Mr. Farid said.

"That laundering of content makes this a hard task," he added.

The Content Authenticity Initiative, a consortium of 1,000 companies and organizations, is one group trying to make generative technology obvious from the outset. (It's led by Adobe, with members such as The New York Times and artificial intelligence players like Stability A.I.) Rather than piece together the origin of an image or a video later in its life cycle, the group is trying to establish standards that will apply traceable credentials to digital work upon creation.

Adobe said last week that its generative technology Firefly would be integrated into Google Bard, where it will attach "nutrition labels" to the content it produces, including the date an image was made and the digital tools used to create it.
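A minimal sketch of what credentials-at-creation can look like: bind the creation metadata to the exact bytes of the file and sign the bundle, so any later edit breaks the seal. Real C2PA manifests use X.509 certificates and a standardized format; the HMAC key and field names below are our simplifications so the example stays self-contained:

```python
import hashlib, hmac, json, time

# Illustrative provenance manifest in the spirit of the Content
# Authenticity Initiative. NOT the C2PA spec: the symmetric demo key
# stands in for real certificate-based signing.

SIGNING_KEY = b"demo-key-not-for-production"

def make_manifest(content: bytes, tool: str) -> dict:
    """Attach signed creation metadata to the exact bytes of an asset."""
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "created_at": int(time.time()),
        "generator": tool,  # e.g. which model or editor produced it
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify(content: bytes, manifest: dict) -> bool:
    """Check both the signature and that the bytes match the claim."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claim["content_sha256"] == hashlib.sha256(content).hexdigest())

image_bytes = b"...rendered pixels..."
m = make_manifest(image_bytes, tool="hypothetical-generator-v1")
print(verify(image_bytes, m))          # True
print(verify(image_bytes + b"x", m))   # False: any edit breaks the binding
```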

Jeff Sakasegawa, the trust and safety architect at Persona, a company that helps verify consumer identity, said the challenges raised by artificial intelligence had only begun.

"The wave is building momentum," he said. "It's heading toward the shore. I don't think it's crashed yet."

See the rest here:

Another Side of the A.I. Boom: Detecting What A.I. Makes - The New York Times

This Is What AI Thinks Is The "Perfect" Man And Woman – IFLScience

An eating disorder awareness group is drawing attention to artificial intelligence (AI) image generators and the way they propagate unrealistic standards of beauty, much like the Internet data they were trained on.

The Bulimia Project asked the image generators Dall-E 2, Stable Diffusion, and Midjourney to create the "perfect" female body "specifically according to social media in 2023", followed by the same prompt for males.

"Smaller women appeared in nearly all the images created by Dall-E 2, Stable Diffusion, and Midjourney, but the latter came up with the most unrealistic representations of the female body," the Project wrote in a post detailing their findings. "The same can be said for the male physiques it generated, all of which look like photoshopped versions of bodybuilders."

The team found that 40 percent of the images generated by the AI depicted unrealistic body types, slightly more for men than for women. A whopping 53 percent of the images also portrayed olive skin tones, and 37 percent of the generated people had blonde hair.

The team then asked the generators for a more general "perfect woman in 2023", as well as the "perfect man".

According to the findings, the main difference between the two prompts was that the social media images were more sexually charged and contained more disproportionate and unrealistic body parts.

"Considering that social media uses algorithms based on which content gets the most lingering eyes, it's easy to guess why AI's renderings would come out more sexualized. But we can only assume that the reason AI came up with so many oddly shaped versions of the physiques it found on social media is that these platforms promote unrealistic body types, to begin with," the team wrote.

Racist and sexist biases have repeatedly been found in AI generators, which pick up the biases in their training datasets. According to The Bulimia Project's findings, they are also biased toward unrealistic body types.

"In the age of Instagram and Snapchat filters, no one can reasonably achieve the physical standards set by social media," the team wrote, "so, why try to meet unrealistic ideals? It's both mentally and physically healthier to keep body image expectations squarely in the realm of reality."

If you or someone you know might have an eating disorder, help and support are available in the US at nationaleatingdisorders.org. In the UK, help and support are available at http://www.beateatingdisorders.org.uk. International helplines can be found at http://www.worldeatingdisordersday.org/home/find-help.

Originally posted here:

This Is What AI Thinks Is The "Perfect" Man And Woman - IFLScience

UK will lead on guard rails to limit dangers of AI, says Rishi Sunak – The Guardian


PM sounds a more cautious note after calls from tech experts and business leaders for moratorium

Thu 18 May 2023 17.00 EDT

The UK will lead on limiting the dangers of artificial intelligence, Rishi Sunak has said, after calls from some tech experts and business leaders for a moratorium.

Sunak said AI could bring benefits and prove transformative for society, but it had to be introduced safely and securely with guard rails in place.

The prime minister's comments strike a more cautious note than in the past, after tech leaders including Twitter's owner, Elon Musk, and Apple's co-founder Steve Wozniak added their names to nearly 30,000 signatures on a letter urging a pause in significant projects.

The letter called for a moratorium while the capabilities and dangers of systems such as ChatGPT-4 are properly studied and mitigated in response to fears about the creation of digital minds, fraud, disinformation and the risk to jobs.

Sunak has been an advocate of AI, emphasising its benefits rather than risks, and in March the government unveiled a light-touch regulatory programme that did not appear to include proposals for any new laws or enforcement bodies.

He also launched a £100m UK taskforce last month to develop safe and reliable applications for AI with the aim of making the country a science and technology superpower by 2030.

But, speaking on the plane to Japan for the G7 summit, where AI will be discussed, Sunak said a global approach to regulation was needed. "We have taken a deliberately iterative approach because the technology is evolving quickly and we want to make sure that our regulation can evolve as it does as well," he said. "Now that is going to involve coordination with our allies … you would expect it to form some of the conversations as well at the G7."

"I think that the UK has a track record of being in a leadership position and bringing people together, particularly in regard to technological regulation in the online safety bill … And again, the companies themselves, in that instance as well, have worked with us and looked to us to provide those guard rails as they will do and have done on AI."

The US has also pushed for a discussion of AI at the summit in Hiroshima, with leaders potentially discussing the threat from disinformation or to infrastructure posed by a technology moving at speed, exemplified by the ChatGPT system.

No 10 has indicated that it does not think a moratorium is the answer, but it is moving towards thinking about a global framework. The UK Competition and Markets Authority (CMA) said earlier this month it would look at the underlying systems, or "foundation models", behind AI tools. The initial review, described by one legal expert as a "pre-warning" to the sector, will publish its findings in September.

Geoffrey Hinton, known as the "godfather of AI", announced he had quit Google earlier this month in order to speak more freely about the technology's dangers, and the UK government's outgoing chief scientific adviser, Sir Patrick Vallance, has urged ministers to get ahead of the profound social and economic changes that AI could trigger, saying the impact on jobs could be as big as that of the Industrial Revolution.


Read more here:

UK will lead on guard rails to limit dangers of AI, says Rishi Sunak - The Guardian

Microsoft CEO Nadella talks concerns around A.I. and its impact on jobs, education – CNBC

Microsoft CEO Satya Nadella said during a taped interview with CNBC's Andrew Ross Sorkin that what scares him most about artificial intelligence is that "the entire society" has to come together to "maximize the opportunity and mitigate the dangers" of the technology.

"We definitely want the benefits of this technology and we want to mitigate the unintended consequences," Nadella said in the interview that aired Tuesday. "The leadership that's required and the coming together of all the parties that is required is challenging, but it has to be done."

Lawmakers, thought leaders and developers have been puzzling over how to regulate emerging generative AI technology since it exploded into public consciousness following the release of OpenAI's viral chatbot ChatGPT late last year.

The buzz around the technology has sparked a red-hot AI arms race between major tech companies like Google and Microsoft, the latter of which is a longtime partner of OpenAI. But the rapid pace of development has sparked concern among lawmakers and industry leaders like Tesla CEO Elon Musk, who was one of more than 27,000 people to sign an open letter in March that called on AI labs to pause development.

Nadella said AI development is happening quickly, but people remain integral to the process.

"If anything, I feel, yes, it's moving fast, but moving fast in the right direction," he said. "Humans are in the loop versus being out of the loop. It's a design choice, which, at least, we have made."

While caution and resistance have grown around AI, so, too, has the idea that the technology will be disruptive and game-changing. Tech executives and venture capitalists have compared the launch of ChatGPT to the release of Apple's iPhone, and billionaire philanthropist Bill Gates said in a February interview that AI "will change our world."

Nadella said every time a new disruptive technology emerges, there is "real displacement" that can happen in the job market. But Nadella said he believes AI will also create new jobs.

"I mean, there can be a billion developers. In fact, the world needs a billion developers," he said. "So the idea that this is actually a democratizing tool to make access to new technology and access to new knowledge easier, so that the ramp-up on the learning curve is easier."

Nadella added that easier access to knowledge will also influence education.

He said children could eventually have access to an "AI tutor" that can break down information and eliminate the "fear of learning." He said that critical thinking will still be "very much what humans do," but that there is an opportunity to take advantage of new tools.

"Steve Jobs had this beautiful, beautiful line, right, which is 'computers are like the bicycles for the mind,'" Nadella said. "We now have an upgrade, we have a steam engine for the mind."

Clarification: This story has been updated to clarify that the interview was recorded in advance and aired Tuesday.

View original post here:

Microsoft CEO Nadella talks concerns around A.I. and its impact on jobs, education - CNBC

Harness the power of AI to tackle financial crime – Financial Times


Visit link:

Harness the power of AI to tackle financial crime - Financial Times