Archive for the ‘Artificial Intelligence’ Category

UK and US intervene amid AI industry's rapid advances – The Guardian


Competition and Markets Authority sends pre-warning to sector, while White House announces measures to address risks

The UK and US have intervened in the race to develop ever more powerful artificial intelligence technology, as the British competition watchdog launched a review of the sector and the White House advised tech firms of their "fundamental responsibility" to develop safe products.

Regulators are under mounting pressure to intervene, as the emergence of AI-powered language generators such as ChatGPT raises concerns about the potential spread of misinformation, a rise in fraud and the impact on the jobs market, with Elon Musk among nearly 30,000 signatories to a letter published last month urging a pause in significant projects.

The UK Competition and Markets Authority (CMA) said on Thursday it would look at the underlying systems, or foundation models, behind AI tools. The initial review, described by one legal expert as a "pre-warning" to the sector, will publish its findings in September.

On the same day, the US government announced measures to address the risks in AI development, as Kamala Harris, the vice-president, met chief executives at the forefront of the industry's rapid advances. In a statement, the White House said firms developing the technology had a "fundamental responsibility" to make sure their products are safe before they are deployed or made public.

The meeting capped a week during which a succession of scientists and business leaders issued warnings about the speed at which the technology could disrupt established industries. On Monday, Geoffrey Hinton, the "godfather of AI", quit Google in order to speak more freely about the technology's dangers, while the UK government's outgoing scientific adviser, Sir Patrick Vallance, urged ministers to get ahead of the profound social and economic changes that could be triggered by AI, saying the impact on jobs could be as big as that of the Industrial Revolution.

Sarah Cardell said AI had the potential to transform the way businesses competed, but that consumers must be protected.

The CMA chief executive said: "AI has burst into the public consciousness over the past few months but has been on our radar for some time. It's crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers while people remain protected from issues like false or misleading information."

ChatGPT and Google's rival Bard service are prone to delivering false information in response to users' prompts, while concerns have been raised about AI-generated voice scams. The anti-misinformation outfit NewsGuard said this week that chatbots pretending to be journalists were running almost 50 AI-generated content farms. Last month, a song featuring fake AI-generated vocals purporting to be Drake and the Weeknd was pulled from streaming services.

The CMA review will look at how the markets for foundation models could evolve, what opportunities and risks there are for consumers and competition, and formulate guiding principles to support competition and protect consumers.

The leading players in AI are Microsoft; ChatGPT developer OpenAI, in which Microsoft is an investor; and Google parent Alphabet, which owns a world-leading AI business in UK-based DeepMind. Leading AI startups include Anthropic and Stability AI, the British company behind Stable Diffusion.

Alex Haffner, competition partner at the UK law firm Fladgate, said: "Given the direction of regulatory travel at the moment and the fact the CMA is deciding to dedicate resource to this area, its announcement must be seen as some form of pre-warning about aggressive development of AI programmes without due scrutiny being applied."

In the US, Harris met the chief executives of OpenAI, Alphabet and Microsoft at the White House and outlined measures to address the risks of unchecked AI development. In a statement following the meeting, Harris said she told the executives that the private sector has "an ethical, moral, and legal responsibility to ensure the safety and security of their products".

The administration said it would invest $140m (£111m) in seven new national AI research institutes, to pursue artificial intelligence advances that are "ethical, trustworthy, responsible, and serve the public good". AI development is dominated by the private sector, with the tech industry producing 32 significant machine-learning models last year, compared with three produced by academia.

Leading AI developers have also agreed to their systems being publicly evaluated at this year's Defcon 31 cybersecurity conference. Companies that have agreed to participate include OpenAI, Google, Microsoft and Stability AI.

"This independent exercise will provide critical information to researchers and the public about the impacts of these models," the White House said.

Robert Weissman, the president of the consumer rights non-profit Public Citizen, praised the White House's announcement as a useful step but said more aggressive action was needed, including a moratorium on the deployment of new generative AI technologies, the term for tools such as ChatGPT and Stable Diffusion.

"At this point, Big Tech companies need to be saved from themselves. The companies and their top AI developers are well aware of the risks posed by generative AI. But they are in a competitive arms race and each believes themselves unable to slow down," he said.

The EU was also told on Thursday that it must protect grassroots AI research or risk handing control of the technology's development to US firms.

In an open letter coordinated by the German research group Laion (Large-scale AI Open Network), the European parliament was told that one-size-fits-all rules risked eliminating open research and development.

"Rules that require a researcher or developer to monitor or control downstream use could make it impossible to release open-source AI in Europe," which would entrench large firms and "hamper efforts to improve transparency, reduce competition, limit academic freedom, and drive investment in AI overseas", the letter said.

"Europe cannot afford to lose AI sovereignty," it added. "Eliminating open-source R&D will leave the European scientific community and economy critically dependent on a handful of foreign and proprietary firms for essential AI infrastructure."

The largest AI efforts, by companies such as OpenAI and Google, are heavily controlled by their creators. It is impossible to download the model behind ChatGPT, for instance, and the paid-for access that OpenAI provides to customers comes with a number of legal and technical restrictions on how it can be used. By contrast, open-source efforts involve creating a model and then releasing it for anyone to use, improve or adapt as they see fit.
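
To make that distinction concrete, here is a minimal Python sketch, assuming the Hugging Face transformers library and the small, openly downloadable GPT-2 model as a stand-in for open-source releases generally; the hosted-API route is shown commented out because it requires a paid OpenAI key. This illustrates the two access models described above, not code from any company mentioned in the article.

```python
# Open-weight route: the model downloads to your machine and runs locally,
# with no usage gate beyond the model's licence terms.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # openly downloadable weights
print(generator("Open models can be", max_new_tokens=20)[0]["generated_text"])

# Hosted route (e.g. OpenAI): you never receive the weights; every call goes
# through the provider's API, subject to its legal and technical restrictions.
# from openai import OpenAI
# client = OpenAI()  # requires an API key
# reply = client.chat.completions.create(
#     model="gpt-4",
#     messages=[{"role": "user", "content": "Hello"}],
# )
```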

"We are working on open-source AI because we think that sort of AI will be more safe, more accessible and more democratic," said Christoph Schuhmann, the organisational lead at Laion.


See the original post here:
UK and US intervene amid AI industry's rapid advances - The Guardian

Did Stephen Hawking Warn Artificial Intelligence Could Spell the … – Snopes.com

Image via Sion Touhig/Getty Images

On May 1, 2023, the New York Post ran a story saying that British theoretical physicist Stephen Hawking had warned that the development of artificial intelligence (AI) could mean "the end of the human race."

Hawking, who died in 2018, had indeed said so in an interview with the BBC in 2014.

"The development of full artificial intelligence could spell the end of the human race," Hawking said during the interview. "Once humans develop artificial intelligence, it would take off on its own and re-design itself at an ever-increasing rate."

Another story, from CNBC in 2017, relayed a similar warning about AI from the physicist. It came from Hawking's speech at the Web Summit technology conference in Lisbon, Portugal, according to CNBC. Hawking reportedly said:

Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.

Such warnings became more common in 2023. In March, tech leaders, scientists, and entrepreneurs warned about the dangers that AI creations like ChatGPT pose to humanity.

"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs," they wrote in an open letter published by the Future of Life Institute, a nonprofit. The letter garnered over 27,500 signatures as of this writing in early May 2023. Among the signatories were CEO of SpaceX, Tesla, and Twitter Elon Musk, Apple co-founder Steve Wozniak, and Pinterest co-founder Evan Sharp.

In addition, Snopes and other fact-checking organizations noted a dramatic uptick in misinformation conveyed on social media via AI-generated content in 2022 and 2023.

Then, on May 2, longtime Google researcher Geoffrey Hinton quit the technology behemoth to sound the alarm about AI products. Hinton, known as the "Godfather of AI," told MIT Technology Review that chatbots built on large language models like OpenAI's GPT-4 are on track to be a lot smarter than he thought they would be.

Given that Hawking was indeed documented as warning about the potential for AI to "spell the end of the human race," we rate this quote as correctly attributed to him.

"Geoffrey Hinton Tells Us Why He's Now Scared of the Tech He Helped Build." MIT Technology Review, https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/. Accessed 3 May 2023.

"'Godfather of AI' Leaves Google, Warns of Tech's Dangers." AP NEWS, 2 May 2023, https://apnews.com/article/ai-godfather-google-geoffery-hinton-fa98c6a6fddab1d7c27560f6fcbad0ad.

"Pause Giant AI Experiments: An Open Letter." Future of Life Institute, https://futureoflife.org/open-letter/pause-giant-ai-experiments/. Accessed 3 May 2023.

"Stephen Hawking Says AI Could Be 'Worst Event' in Civilization." 6 Nov. 2017, https://web.archive.org/web/20171106191334/https://www.cnbc.com/2017/11/06/stephen-hawking-ai-could-be-worst-event-in-civilization.html.

"Stephen Hawking Warned AI Could Mean the 'End of the Human Race.'" 3 May 2023, https://web.archive.org/web/20230503162420/https://nypost.com/2023/05/01/stephen-hawking-warned-ai-could-mean-the-end-of-the-human-race/.

"Stephen Hawking Warns Artificial Intelligence Could End Mankind." BBC News, 2 Dec. 2014. http://www.bbc.com, https://www.bbc.com/news/technology-30290540.

Damakant Jayshi is a fact-checker for Snopes, based in Atlanta.

Read this article:
Did Stephen Hawking Warn Artificial Intelligence Could Spell the ... - Snopes.com

Artificial Intelligence and Jobs: Who's at Risk – Barron's

Since the release of ChatGPT, companies have scrambled to understand how generative artificial intelligence will affect jobs. This past week, IBM CEO Arvind Krishna said the company will pause hiring for roles that could be replaced by AI, affecting as much as 30% of back-office jobs over five years. And Chegg, which provides homework help and online tutoring, saw its stock lose half of its value after warning of slower growth as students turned to ChatGPT.

A recent study by a team of professors from Princeton University, the University of Pennsylvania, and New York University analyzed how generative AI relates to 52 human abilities. The researchers then calculated AI exposure for occupations. (Exposure doesn't necessarily mean job loss.) Among high-exposure jobs, a few are obvious: telemarketers, HR specialists, loan officers, and law clerks. More surprising: eight of the top 10 are humanities professors.
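
The study's scoring method isn't detailed here, but the gist of an occupational "exposure" score can be sketched as an importance-weighted average of how well generative AI covers the abilities a job relies on. The ability names, weights, and coverage values in this Python sketch are invented for illustration and are not the researchers' data.

```python
# Hypothetical "AI exposure" score: importance-weighted mean of how strongly
# generative AI covers each ability an occupation depends on. All numbers
# below are made up for illustration.
ai_coverage = {  # 0..1: how well generative AI handles each ability
    "written_expression": 0.90,
    "oral_comprehension": 0.70,
    "manual_dexterity": 0.05,
}

occupations = {  # 0..1: importance of each ability to the occupation
    "telemarketer": {"written_expression": 0.6, "oral_comprehension": 0.9},
    "surgeon": {"oral_comprehension": 0.5, "manual_dexterity": 0.95},
}

def exposure(abilities: dict[str, float]) -> float:
    """Importance-weighted mean of AI coverage over a job's abilities."""
    total_weight = sum(abilities.values())
    covered = sum(w * ai_coverage.get(a, 0.0) for a, w in abilities.items())
    return covered / total_weight

for job, abilities in occupations.items():
    print(f"{job}: exposure = {exposure(abilities):.2f}")
```

On these made-up numbers the telemarketer scores far higher than the surgeon, which mirrors the study's intuition that language-heavy occupations are the most exposed.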

In a survey from customer-service software firm Tidio, 64% of respondents thought chatbots, robots, or AI could replace teachers, though many believe that empathy and listening skills may be tough to replicate. A survey from the Walton Family Foundation found that within two months of ChatGPT's introduction, 51% of teachers tapped it for lesson planning and creative ideas. Some 40% said they used it at least once a week, compared with 22% of students.

AI isn't just knocking on the door; it's already inside. Language-learning app Duolingo has been using AI since 2020. Even Chegg unveiled an AI learning service called CheggMate, built on OpenAI's GPT-4. Still, Morgan Stanley analyst Josh Baer wrote that it is highly unlikely that CheggMate can insulate the company from AI.

Write to Evie Liu at evie.liu@barrons.com


Devon Energy, KKR, McKesson, PayPal Holdings, and Tyson Foods release earnings.

Airbnb, Air Products & Chemicals, Apollo Global Management, Duke Energy, Electronic Arts, Occidental Petroleum, and TransDigm Group report quarterly results.

The National Federation of Independent Business releases its Small Business Optimism Index for April. The consensus estimate is for a reading of 90, roughly even with the March figure. The index has had 15 consecutive readings below the 49-year average of 98 as inflation and a tight labor market remain top of mind for small-business owners.

Walt Disney reports quarterly results.


Brookfield Asset Management, Roblox, Toyota Motor, and Trade Desk release earnings.

The Bureau of Labor Statistics releases the consumer price index for April. Economists forecast a 5% year-over-year increase, matching the March data. The core CPI, which excludes volatile food and energy prices, is expected to rise 5.4%, two-tenths of a percentage point less than previously. Both indexes are well below their peaks from last year but also much higher than the Federal Reserve's 2% target.

Honda Motor, JD.com, PerkinElmer, and Tapestry hold conference calls to discuss quarterly results.


The Bank of England announces its monetary-policy decision. The central bank is widely expected to raise its bank rate by a quarter of a percentage point, to 4.5%. The United Kingdom's CPI rose 10.1% in March from the year prior, making it the only Western European country with a double-digit rate of inflation.


The Department of Labor reports initial jobless claims for the week ending on May 6. Claims averaged 239,250 in April, returning to historical averages after a prolonged period of being below trend, signaling a loosening of a very tight labor market.

The BLS releases the producer price index for April. The consensus call is for the PPI to increase 2.4% and the core PPI to rise 3.3%. This compares with gains of 2.7% and 3.4%, respectively, in March. The PPI and core PPI are at their lowest levels in about two years.

The University of Michigan releases its Consumer Sentiment Index for May. Economists forecast a dour 62.6 reading, about one point lower than in April. Consumers' year-ahead inflation expectations surprisingly jumped by a percentage point in April, to 4.6%.

The rest is here:
Artificial Intelligence and Jobs: Who's at Risk - Barron's

Artificial intelligence helping detect early signs of breast cancer in some US hospitals – FOX 9 Minneapolis-St. Paul


October raises awareness for breast cancer, and LiveNOW from FOX talks with a doctor about the advances in treatments and the importance of early detection.

BOCA RATON, Fla. - Some doctors believe artificial intelligence is saving lives after a major advancement in breast cancer screenings. In some cases, AI is detecting early signs of the disease years before the tumor would be visible on a traditional scan.

The Christine E. Lynn Women's Health and Wellness Institute at the Boca Raton Regional Hospital found a 23% increase in detected cancer cases since implementing AI in breast cancer screenings.

Dr. Kathy Schilling, the medical director at the institute, told Fox News Digital the practice has nine dedicated breast radiologists who are all fellowship-trained, so the increase in early detections was surprising.

"All we do is read breast imaging studies, and so I thought, you know, we were probably pretty good at what we were doing, but this study really comes in shows us that even the dedicated and committed breast radiologists can do better utilizing artificial intelligence," Schilling said.


"ProFound AI," created by iCad, is designed to flag problem areas on mammograms. The program studied millions of breast cancer scans and, over time, learned to circle lesions and estimate the cancer risk.

"If you realize that 90% of the cases are benign and have no findings, you know, you just become fatigued. You get mesmerized by scrolling through the images. The AI helps us to refocus and find those little tiny cancers that we're looking for," Schilling said.

Medical personnel use a mammogram to examine a woman's breast for breast cancer. (Photo by Michael Hanschke/picture alliance via Getty Images)

ProFound AI became the first technology of its kind to be FDA cleared in December 2018. The Christine E. Lynn Women's Health and Wellness Institute adopted the groundbreaking technology during the COVID-19 pandemic, and the hospital now boasts one of the earliest studies on AI's impact on cancer.

"What I think we're going to be finding is that we're finding cancers when they're three to six millimeters in size, and finding the invasive lobular cancers which are very difficult for us to find, because they don't form masses in the breast," Schilling said.

Schilling also stated that over the past two years, the institute has offered less severe therapies to patients diagnosed with breast cancer because the cells are so small.

"We are doing smaller lumpectomies, fewer mastectomies, less chemotherapy, less radiation therapy," she continued. "I think we're entering into a whole new era in breast care."


Schilling also believes AI's early detection capabilities may have helped save Luz Torres' life after a routine mammogram on April 1 revealed a small cancerous tumor. Torres said she had no symptoms or indication that something could be wrong.

"I have very dense breast tissue, so I always have a mammography and an ultrasound. The recommendation of that visit was the breast biopsy, so I had that done within a week's time, and then I got a phone call that the pathology was breast cancer," Torres said in an emotional interview. "It was an early detection. I come every year, I'm on track with my mammography, so it's very small tumor."


Torres was diagnosed with stage 1 breast cancer in early April and recently completed surgery. Fortunately, she is expected to make a full recovery after early detection.

"It looks good. Because it was called early stage 1, I won't need chemotherapy so very happy about that," said Torres, who described the institute as "amazing."


Dr. Ko Un Park, a surgical oncologist at OSU's Comprehensive Cancer Center, discusses the signs of inflammatory breast cancer, treatment, and other things to know about the rare yet deadly form of the disease.

"The desire to improve the technology for the patients to find this breast cancer in patients early when it's treatable, and the prognosis ends up being great. I'm fortunate enough to be one of those patients. It's a blessing," she concluded.

Several companies have released AI products with the ability to flag abnormalities during cancer screenings. Doctors are also using AI to detect brain cancer, lung cancer and prostate cancer.

Find more updates on this story at FOXNews.com.

Link:
Artificial intelligence helping detect early signs of breast cancer in some US hospitals - FOX 9 Minneapolis-St. Paul

Non-artificial intelligence – theday.com

Toshiyuki Shimada works with orchestra students at Norwich Free Academy last October as part of his candidacy process for director of the Eastern Connecticut Symphony Orchestra.

As the lights went down to start the final ECSO concert of the season, a family of four sidled in -- the way you do in a row of theater seats. Mom, Dad, brother, sister. Few children attend the nighttime concerts, but these kids were being treated to seats up front.

With the start of Mendelssohn's "Calm Sea and Prosperous Voyage," I could see why they chose that row. The little girl, homemade baton in hand, was conducting the orchestra in her seat.

I wish the actual Eastern Connecticut Symphony Orchestra conductor, Toshiyuki Shimada, could have seen what was happening behind his back. The child was precisely copying his moves. When his arm stretched, so did hers, a bare quarter-beat behind. When he pointed to the horns, she did, too, even pacing the rapid but almost invisible agitation of the baton. Me, I'm grinning in the dusk.

The child conductor literally did not miss a beat. Her musical intelligence is one of her superpowers.

Behavioral science recognizes multiple categories of human intelligence. Best known is the list of intelligences identified by Howard Gardner: linguistic, logical/mathematical, spatial, bodily-kinesthetic, musical, interpersonal, intrapersonal, and naturalist. Gardner added that humans are not born with all the intelligence they will ever have. Some they acquire.

On the same weekend as the concert, The New York Times Magazine published its powerful interview with three Connecticut State Police investigators who went above and beyond their duties in documenting the carnage inside the Sandy Hook Elementary School in December 2012.

The article describes the trauma Sandy Hook inflicted even on veteran officers who have dealt with numerous homicide scenes. It recounts their determination to do for each of the 26 victims what they were used to doing for one at a time. No corners would be cut, even though no trial was likely; the shooter was dead and appeared to have acted alone.

The story lays out the steps the investigative team took to shield anyone who did not have to look at a scene that can never be unseen. It details their decision, after the bodies of 20 child victims and their teachers had been removed, that they would show then-Attorney General Eric Holder a dozen of the 1,495 photos they took and then escort him to the classrooms through hallways still littered with glass shards.

Times writer Jay Kirk wrote that "something like destiny, however grim and profoundly unwanted, had been laid at their feet. That the country, the world, would come looking for answers was not a question. And if anyone was going to provide the answers, at least to what had happened in these rooms, it would be up to them, but only if they kept their heads."

They did their work with a rare but critical combination of the logical/mathematical, spatial, interpersonal and intrapersonal human intelligences developed over their years as investigators. When told they could skip some steps because evidence was not needed for prosecution, their interpersonal intelligence told them no: the victims and their families deserved all they could do. Their intrapersonal intelligence warned each of them of the toll it was taking, but character and a sense of duty kept them going.

Their mission was to protect people from crippling horror but at the same time not to shield the public from the obscene truth of what one heavily armed intruder had done. Putting their multi-faceted intelligence at the service of a greater good, the investigative team provided lawmakers, law enforcement and justice officials with evidence that led to rapid changes in Connecticut law and -- not until much more bloodshed -- to a federal law that upended the stalemate on gun control measures.

Recent news about intelligence-related topics has largely focused on the expansion of Artificial Intelligence and its capacity for increasing its own scope. Journalist Scott Pelley observed on CBS's "60 Minutes" that he was speechless, rare for him, at the tasks AI was conceiving for itself and then carrying out. "Is AI sentient?" he asked. "Does it have self-awareness?"

Programmers obviously can endow AI with linguistic, logical/mathematical, spatial and musical intelligence. Pelley's report includes two bots teaching themselves to play soccer, so some form of bodily-kinesthetic and interpersonal intelligence is involved. It would be ironic, but perfectly possible, for AI to have naturalist intelligence.

Intrapersonal intelligence, however, depends on self-awareness. If AI ever becomes self-aware, will it be inclined to put others' good before its own? Always? Sometimes? Never? Would it undertake a task like identifying, cleansing and returning victims' jewelry to their families with empathy or just expediency?

Will AI have joy, like the thrill of conducting scores of musicians right in front of you who don't even know you're there? Will it be able to pretend?

We have a lot to learn about intelligence.

Lisa McGinley is a member of The Day Editorial Board.

Go here to see the original:
Non-artificial intelligence - theday.com