Archive for the ‘Artificial Intelligence’ Category

David Williamson Shaffer shares expertise on artificial intelligence in … – UW-Madison

April 28, 2023

David Williamson Shaffer recently lent his expertise on artificial intelligence to news reports featured on two Wisconsin television stations.

Shaffer is the Sears Bascom Professor of Learning Analytics and the Vilas Distinguished Achievement Professor of Learning Sciences at the UW-Madison School of Education and a Data Philosopher at the Wisconsin Center for Education Research.

In a story aired on WKOW in Madison, Shaffer argued that schools shouldn't ban AI tools like ChatGPT, but should instead figure out how to teach students to use the tools appropriately. He said he could envision AI becoming commonplace in educational environments in the future.

"It's wrong to ban ChatGPT," he said. "Because students are going to need to know how to use these technologies correctly, they're going to need to know how to use them without plagiarizing, and they're going to need to know how to use them to ask the right questions."

Shaffer recently outlined this argument in an op-ed published in Newsweek.

In a story aired on WAOW in Wausau, Shaffer explained and weighed in on a new AI feature rolled out on the social media platform Snapchat. Some, including law enforcement, have raised concerns about the new feature's ability to spread incorrect or harmful information or violate the privacy rights of minors.

Shaffer said parents can and should play an important role in helping their children navigate the ever-changing social media landscape.

"In the same way that you don't follow your kids around when they go out with their friends in the evening, but you talk with them about what they did and talk about what some of the dangers are, you assess their level of responsibility," he said in the interview.

The full WKOW story is available here.

The full WAOW story is available here.


Scientists use brain scans and AI to ‘decode’ thoughts – Economic Times

Scientists said Monday they have found a way to use brain scans and artificial intelligence modelling to transcribe "the gist" of what people are thinking, in what was described as a step towards mind reading. While the main goal of the language decoder is to help people who have lost the ability to communicate, the US scientists acknowledged that the technology raised questions about "mental privacy".

Aiming to assuage such fears, they ran tests showing that their decoder could not be used on anyone who had not allowed it to be trained on their brain activity over long hours inside a functional magnetic resonance imaging (fMRI) scanner.

Alexander Huth, a neuroscientist at the University of Texas at Austin and co-author of a new study, said that his team's language decoder "works at a very different level".

It is the first system to be able to reconstruct continuous language without an invasive brain implant, according to the study in the journal Nature Neuroscience.

This allowed the researchers to map out how words, phrases and meanings prompted responses in the regions of the brain known to process language.

The model was trained to predict how each person's brain would respond to perceived speech; it then narrowed down candidate word sequences until it found the one whose predicted response most closely matched the observed brain activity.
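That predict-and-compare approach can be pictured as a simple search loop: propose candidate word sequences, predict the brain response each would produce, and keep whichever candidates best match the actual scan. The sketch below is purely illustrative; the two helper functions are hypothetical stand-ins for the study's language model and per-participant encoding model, not its actual code.

```python
# Purely illustrative sketch of a predict-and-compare decoding loop.
# `propose_continuations` (a language model suggesting possible next words) and
# `predict_brain_response` (an encoding model fitted to one participant's fMRI
# data) are hypothetical stand-ins, not the study's actual code.
import numpy as np

def decode_scan(observed_response, propose_continuations, predict_brain_response,
                beam_width=5, n_steps=20):
    """Return the candidate transcript whose predicted brain response
    best matches the observed fMRI activity."""
    beam = [""]  # start from an empty transcript
    for _ in range(n_steps):
        scored = []
        for prefix in beam:
            for continuation in propose_continuations(prefix):
                text = (prefix + " " + continuation).strip()
                predicted = predict_brain_response(text)
                # cosine similarity between predicted and observed activity
                score = float(np.dot(predicted, observed_response) /
                              (np.linalg.norm(predicted) * np.linalg.norm(observed_response)))
                scored.append((score, text))
        # keep only the best-scoring candidates (beam search)
        beam = [text for _, text in sorted(scored, reverse=True)[:beam_width]]
    return beam[0]
```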

The study's first author Jerry Tang said the decoder could "recover the gist of what the user was hearing".

For example, when the participant heard the phrase "I don't have my driver's license yet", the model came back with "she has not even started to learn to drive yet".

The decoder struggled with personal pronouns such as "I" or "she," the researchers admitted.

But even when the participants thought up their own stories -- or viewed silent movies -- the decoder was still able to grasp the "gist," they said.

This showed that "we are decoding something that is deeper than language, then converting it into language," Huth said.

Because fMRI scanning is too slow to capture individual words, it collects a "mishmash, an agglomeration of information over a few seconds," Huth said.

"So we can see how the idea evolves, even though the exact words get lost."

Ethical warning

David Rodriguez-Arias Vailhen, a bioethics professor at Spain's Granada University not involved in the research, said it went beyond what had been achieved by previous brain-computer interfaces.

This brings us closer to a future in which machines are "able to read minds and transcribe thought," he said, warning this could possibly take place against people's will, such as when they are sleeping.

The researchers anticipated such concerns.

They ran tests showing that the decoder did not work on a person if it had not already been trained on their own particular brain activity.

The three participants were also able to easily foil the decoder.

While listening to one of the podcasts, the users were told to count by sevens, name and imagine animals or tell a different story in their mind. All these tactics "sabotaged" the decoder, the researchers said.

Next, the team hopes to speed up the process so that they can decode the brain scans in real time.

They also called for regulations to protect mental privacy.

"Our mind has so far been the guardian of our privacy," said bioethicist Rodriguez-Arias Vailhen.

"This discovery could be a first step towards compromising that freedom in the future."


Opinion: Artificial intelligence is the future of hiring – The San Diego Union-Tribune

Cooper is a professor of law at California Western School of Law and a research fellow at Singapore University of Social Sciences. He lives in San Diego. Kompella is CEO of industry analyst firm RPA2AI Research and visiting professor for artificial intelligence at the BITS School of Management, Mumbai, and lives in Bangalore, India.

Hiring is the lifeblood of the economy. In 2022, there were 77 million hires in the United States, according to the U.S. Department of Labor. Artificial intelligence is expected to make this hiring process more efficient and more equitable. Despite such lofty goals, there are valid concerns that using AI can lead to discrimination. Meanwhile, the use of AI in the hiring process is widespread and growing by leaps and bounds.

A Society for Human Resource Management survey last year showed that about 80 percent of employers use AI for hiring. And there is good reason for the assist: Hiring is a high-stakes decision for the individual involved and the businesses looking to employ talent. It is no secret, though, that the hiring process can be inefficient and subject to human biases.

AI offers many potential benefits. Consider that human resources teams spend only seven seconds skimming a resume, a document that is itself a one-dimensional portrait of a candidate. Recruiters instead end up spending much of their time on routine tasks like scheduling interviews. By using AI to automate such routine tasks, human resources teams can spend more quality time assessing candidates. AI tools can also draw on a wider range of data points about candidates, which can result in a more holistic assessment and a better match. Research shows that overly masculine language in job descriptions deters women from applying; AI can be used to create job descriptions and ads that are more inclusive.

But using AI for hiring decisions can also lead to discrimination. A majority of recruiters in the 2022 Society for Human Resource Management survey identified flaws in their AI systems: for example, the tools excluded qualified applicants or lacked transparency about how their algorithms work. There is also disparate impact (also known as unintentional discrimination) to consider. According to 2021 research from the University of Southern California, job advertisements are often not shown to women even when they are qualified for the advertised roles, and advertisements for high-paying jobs are often hidden from women. Many states suffer a gender pay gap. When the advertisements themselves are invisible, the pay equity gap is unlikely to close on its own, even with the use of artificial intelligence.

Discrimination, even in light of new technologies, is still discrimination. New York City has fashioned a response by enacting Local Law 144, scheduled to come into effect on July 15. This law requires employers to provide notice to applicants when AI is being used to assess their candidacy. AI systems are subject to annual independent third-party audits and audit results must be displayed publicly. Independent audits of such high-stakes AI usage is a welcome move by New York City.

California, long considered a technology bellwether, has been off to a slow start. The California Workplace Technology Accountability Act, a bill that focused on employee data privacy, is now dead. On the anvil are updates to Chapter 5 (Discrimination in Employment) of the California Fair Employment and Housing Act. Initiated a year ago by the Fair Employment and Housing Council (now called the Civil Rights Department), these remain a work in progress. These are not new regulations per se but an update of existing anti-discrimination provisions. The proposed draft is open for public comment, but there is no implementation timeline yet. The compliance guidance, the veritable dos and don'ts, including penalties for violations, is still awaited. There is also a recently introduced bill in the California Legislature that seeks to regulate the use of AI in business, including education, health care, housing and utilities, in addition to employment.

The issue is gaining attention globally. Among state laws on AI in hiring is one in Illinois that regulates AI tools used for video interviews. At the federal level, the Equal Employment Opportunity Commission has updated guidance on employer responsibilities. And internationally, the European Union's upcoming Artificial Intelligence Act classifies such AI as high-risk and prescribes stringent usage rules.

Adoption of AI can help counterbalance human biases and reduce discrimination in hiring. But the AI tools used must be transparent, explainable and fair. It is not easy to devise regulations for emerging technologies, particularly for a fast-moving one like artificial intelligence. Regulations need to prevent harm but not stifle innovation. Clear regulation coupled with education, guidance and practical pathways to compliance strikes that balance.


Artificial intelligence: Powerful AI systems ‘can’t be controlled’ and ‘are causing harm’, says UK expert – Sky News

Sunday 30 April 2023 16:04, UK

A British scientist known for his contributions to artificial intelligence has told Sky News that powerful AI systems "can't be controlled" and "are already causing harm".

Professor Stuart Russell was one of more than 1,000 experts who last month signed an open letter calling for a six-month pause in the development of systems even more capable than OpenAI's newly-launched GPT-4 - the successor to its online chatbot ChatGPT which is powered by GPT-3.5.

The headline feature of the new model is its ability to recognise and explain images.

Speaking to Sky's Sophy Ridge, Professor Russell said of the letter: "I signed it because I think it needs to be said that we don't understand how these [more powerful] systems work. We don't know what they're capable of. And that means that we can't control them, we can't get them to behave themselves."

He said that "people were concerned about disinformation, about racial and gender bias in the outputs of these systems".

And he argued that, given the swift progression of AI, time was needed to "develop the regulations that will make sure that the systems are beneficial to people rather than harmful".

He said one of the biggest concerns was disinformation and deep fakes (videos or photos of a person in which their face or body has been digitally altered so they appear to be someone else - typically used maliciously or to spread false information).

He said even though disinformation has been around for a long time for "propaganda" purposes, the difference now is that, using Sophy Ridge as an example, he could ask GPT-4 to try to "manipulate" her so she's "less supportive of Ukraine".

He said the technology could read Ridge's social media presence and everything she has ever said or written, and then carry out a gradual campaign to "adjust" her news feed.

Professor Russell told Ridge: "The difference here is I can now ask GPT-4 to read all about Sophy Ridge's social media presence, everything Sophy Ridge has ever said or written, all about Sophy Ridge's friends and then just begin a campaign gradually by adjusting your news feed, maybe occasionally sending some fake news along into your news feed so that you're a little bit less supportive of Ukraine, and you start pushing harder on politicians who say we should support Ukraine in the war against Russia and so on.

"That will be very easy to do. And the really scary thing is that we could do that to a million different people before lunch."

The expert, who is a professor of computer science at the University of California, Berkeley, warned of "a huge impact with these systems for the worse by manipulating people in ways that they don't even realise is happening".

Ridge described it as "genuinely really scary" and asked if that kind of thing was happening now, to which the professor replied: "Quite likely, yes."

He said China, Russia and North Korea have large teams who "pump out disinformation" and with AI "we've given them a power tool".

"The concern of the letter is really about the next generation of the system. Right now the systems have some limitations in their ability to construct complicated plans."



He suggested under the next generation of systems, or the one after that, corporations could be run by AI systems. "You could see military campaigns being organised by AI systems," he added.

"If you're building systems that are more powerful than human beings, how do human beings keep power over those systems forever? That's the real concern behind the open letter."


The professor said he was trying to convince governments of the need to start planning ahead for when "we need to change the way our whole digital ecosystem... works."

Since it was released last year, Microsoft-backed OpenAI's ChatGPT has prompted rivals to accelerate the development of similar large language models and encouraged companies to integrate generative AI models into their products.

UK unveils proposals for 'light touch' regulations around AI

It comes as the UK government recently unveiled proposals for a "light touch" regulatory framework around AI.

The government's approach, outlined in a policy paper, would split the responsibility for governing AI between its regulators for human rights, health and safety, and competition, rather than create a new body dedicated to the technology.


Artificial intelligence to help with Gaelic subtitles – Scottish Field

ARTIFICIAL intelligence (AI) is being used to create a Gaelic subtitle service that could be used by the BBC.

Linguists and AI researchers from Edinburgh and Glasgow universities have been awarded £225,000 by the Scottish Government to develop the system.

The funding will also help the team to begin creating a large language model which is described as being similar to ChatGPT for Gaelic.

Resources being fed into the AI system include 15,000 pages of transcribed Gaelic narrative from the School of Scottish Studies Archives, which is based at the University of Edinburgh.

The AI will also be trained using material from the Digital Archive of Scottish Gaelic (DASG), including some 30 million words from the University of Glasgow's Corpas na Gàidhlig and vernacular recordings from DASG's Cluas ri Claisneachd audio archive.
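In broad strokes, building a "ChatGPT for Gaelic" would mean adapting an existing language model to a corpus of Gaelic text such as the archive material listed above. The snippet below is a generic, hedged illustration of that kind of fine-tuning step using the Hugging Face libraries; the base model, file name and hyperparameters are assumptions for illustration only, not the project's actual pipeline.

```python
# Generic illustration of fine-tuning a causal language model on a corpus of
# Gaelic text. The base checkpoint, file path and training settings below are
# assumptions for the sake of the example, not the project's actual setup.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "gpt2"  # hypothetical base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base_model)

# Plain-text file of transcribed Gaelic narrative (hypothetical path).
dataset = load_dataset("text", data_files={"train": "gaelic_transcripts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gaelic-lm", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```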

William Lamb, professor of Gaelic ethnology and linguistics at the University of Edinburgh, said: "This is about compiling large amounts of knowledge gleaned from Gaelic speakers in the past and returning it to Gaelic speakers, in various forms, in the present."

Roibeard Ó Maolalaigh, professor of Gaelic at the University of Glasgow, added: "This will add substantially to the development of language technology for Gaelic.

"It is gratifying that DASG's resources are being deployed in this way and being further developed."

Read more news and reviews on Scottish Field's culture pages.

Plus, don't miss the May issue of Scottish Field magazine.
