Archive for the ‘Artificial Intelligence’ Category

Researchers Identify 6 Challenges Humans Face With Artificial Intelligence – Neuroscience News

Summary: Study identifies six challenges humans must overcome to ensure artificial intelligence is trustworthy, safe, reliable, and compatible with human values.

Source: University of Central Florida

A University of Central Florida professor and 26 other researchers have published a study identifying the challenges humans must overcome to ensure that artificial intelligence is reliable, safe, trustworthy and compatible with human values.

The study, "Six Human-Centered Artificial Intelligence Grand Challenges," was published in the International Journal of Human-Computer Interaction.

Ozlem Garibay '01MS '08PhD, an assistant professor in UCF's Department of Industrial Engineering and Management Systems, was the lead researcher for the study. She says that the technology has become more prominent in many aspects of our lives, but it also has brought about many challenges that must be studied.

For instance, the coming widespread integration of artificial intelligence could significantly impact human life in ways that are not yet fully understood, says Garibay, who works on AI applications in material and drug design and discovery, and how AI impacts social systems.

The six challenges Garibay and the team of researchers identified are centering AI in human well-being, designing AI responsibly, respecting privacy, following human-centered design principles, applying appropriate governance and oversight, and ensuring AI interacts with people in ways that respect human cognitive capacities.

The study, which was conducted over 20 months, comprises the views of 26 international experts who have diverse backgrounds in AI technology.

These challenges call for the creation of human-centered artificial intelligence technologies that prioritize ethicality, fairness and the enhancement of human well-being, Garibay says.

The challenges urge the adoption of a human-centered approach that includes responsible design, privacy protection, adherence to human-centered design principles, appropriate governance and oversight, and respectful interaction with human cognitive capacities.

Overall, these challenges are a call to action for the scientific community to develop and implement artificial intelligence technologies that prioritize and benefit humanity, she says.

The group of 26 experts includes National Academy of Engineering members and researchers from North America, Europe and Asia who have broad experiences across academia, industry and government. The group also has diverse educational backgrounds in areas ranging from computer science and engineering to psychology and medicine.

Their work also will be featured in a chapter in the book, Human-Computer Interaction: Foundations, Methods, Technologies, and Applications.

Five UCF faculty members co-authored the study.

Garibay received her doctorate in computer science from UCF and joined UCF's Department of Industrial Engineering and Management Systems, part of the College of Engineering and Computer Science, in 2020.

Author: Robert Wells
Source: University of Central Florida
Contact: Robert Wells, University of Central Florida
Image: The image is in the public domain

Original Research: Open access. "Six Human-Centered Artificial Intelligence Grand Challenges" by Ozlem Garibay et al. International Journal of Human-Computer Interaction

Abstract

Six Human-Centered Artificial Intelligence Grand Challenges

Widespread adoption of artificial intelligence (AI) technologies is substantially affecting the human condition in ways that are not yet well understood.

Negative unintended consequences abound, including the perpetuation and exacerbation of societal inequalities and divisions via algorithmic decision making.

We present six grand challenges for the scientific community to create AI technologies that are human-centered, that is, ethical, fair, and enhance the human condition.

These grand challenges are the result of an international collaboration across academia, industry and government and represent the consensus views of a group of 26 experts in the field of human-centered artificial intelligence (HCAI).

In essence, these challenges advocate for a human-centered approach to AI that (1) is centered in human well-being, (2) is designed responsibly, (3) respects privacy, (4) follows human-centered design principles, (5) is subject to appropriate governance and oversight, and (6) interacts with individuals while respecting humans' cognitive capacities.

We hope that these challenges and their associated research directions serve as a call for action to conduct research and development in AI that serves as a force multiplier towards more fair, equitable and sustainable societies.

See the original post:
Researchers Identify 6 Challenges Humans Face With Artificial Intelligence - Neuroscience News

Can artificial intelligence write fiction with real tension? – Financial Times

Read the original:
Can artificial intelligence write fiction with real tension? - Financial Times

ChatGPT in the Humanities Panel: Researchers Share Concerns … – Cornell University The Cornell Daily Sun

Does the next Aristotle, Emily Dickinson or Homer live on your computer? A group of panelists explored this idea in a talk titled "ChatGPT and the Humanities" on Friday in the A.D. White House's Guerlac Room.

ChatGPT's ability to produce creative literature was one of the central topics explored in the talk as the discourse on the use of artificial intelligence software in academic spheres continues to grow.

In the panel, Prof. Morten Christiansen, psychology, Prof. Laurent Dubreuil, comparative literature, Pablo Contreras Kallens grad and Jacob Matthews grad explored the benefits and consequences of utilizing artificial intelligence within humanities research and education.

The forum was co-sponsored by the Society for the Humanities, the Humanities Lab and the New Frontier Grant program.

The Society for the Humanities was established in 1966 and connects visiting fellows, Cornell faculty and graduate students to conduct interdisciplinary research connected to an annual theme. This year's focal theme is "Repair," which refers to the conservation, restoration and replication of objects, relations and histories.

All four panelists are members of the Humanities Lab, which works to provide an intellectual space for scholars to pursue research relating to the interaction between the sciences and the humanities. The lab was founded by Dubreuil in 2019 and is currently led by him.

Christiansen and Dubreuil also recently received New Frontier Grants for their project titled "Poetry, AI and the Mind: A Humanities-Cognitive Science Transdisciplinary Exploration," which focuses on the application of artificial intelligence to literature, cognitive science and mental and cultural diversity. For well over a year, they have worked on an experiment comparing humans' poetry generation to that of ChatGPT, with the continuous help of Contreras Kallens and Matthews.

Before the event began, attendees expressed their curiosity and concerns about novel AI technology.

Lauren Scheuer, a writing specialist at the Keuka College Writing Center and Tompkins County local, described worries about the impact of ChatGPT on higher education.

"I'm concerned about how ChatGPT is being used to teach and to write and to generate content," Scheuer said.

Sarah Milliron grad, who is pursuing a Ph.D. in psychology, also said that she was concerned about ChatGPT's impact on academia as the technology becomes more widely used.

"I suppose I'm hoping [to gain] a bit of optimism [from this panel]," Milliron said. "I hope that they address ways that we can work together with AI as opposed to [having] it be something that we ignore or have it be something that we are trying to get rid of."

Dubreuil first explained that there has been a recent interest in artificial intelligence due to the impressive performance of ChatGPT and its successful marketing campaign.

"All scholars, but especially humanities, are currently wondering if we should take into account the new capabilities of automated text generators," Dubreuil said.

Dubreuil expressed that scholars have varying concerns and ideas regarding ChatGPT.

"Some [scholars] believe we should counteract [ChatGPT's consequences] by means of new policies," Dubreuil said. "Other [scholars] complained about the lack of morality or the lack of political apropos that is exhibited by ChatGPT. Other [scholars] say that there is too much political apropos and political correctness."

Dubreuil noted that other scholars prophesy that AI could lead to the fall of humanity.

For example, historian Yuval Harari recently wrote about the 2022 Expert Survey on Progress in AI, which found that out of more than 700 surveyed top academics and researchers, half said that there was at least a 10 percent chance of human extinction or similarly permanent and severe disempowerment due to future AI systems.

Contreras Kallens then elaborated on their poetry experiment, which utilized what he referred to as "fragment completion": essentially, ChatGPT and Cornell undergraduates were both prompted to continue writing from two lines of poetry by an author such as Dickinson.

Contreras Kallens described that ChatGPT generally matched the poetry quality of a Cornell undergraduate, while expectedly falling short of the original authors writing. However, the author recognition program they used actually confused the artificial productions with the original authors work.

The final part of the project, which the group is currently refining, will measure whether students can differentiate between whether a fragment was completed by the original author, an undergraduate or by ChatGPT.

When describing the importance of this work, Contreras Kallens explained the concept of universal grammar, a linguistics theory that suggests that people are innately, biologically programmed to learn grammar. Thus, ChatGPT's ability to reach the writing quality of many humans challenges assumptions about the technology's shortcomings.

"[This model] invites a deeper reconsideration of language assumptions or language acquisition processing," Contreras Kallens said. "And that's at least interesting."

Matthews then expressed that his interest in AI does not lie in its generative abilities but in the possibility of representing text numerically and computationally.

"Often humanists are dealing with large volumes of text [and] they might be very different," Matthews said. "[It is] fundamental to the humanities that we debate [with each other] about what texts mean, how they relate to one another. We're always putting different things into relation with one another. And it would be nice sometimes to have a computational or at least quantitative basis that we could maybe talk about, or debate, or at least have access to."

Matthews described that autoregressive language models, machine learning models that predict the next word in a text from the words that come before it, reveal the perceived similarity between certain words.
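The mechanics of autoregressive prediction can be sketched with a toy bigram model in Python. This is a deliberate simplification for illustration only: models like ChatGPT use neural networks conditioned on long contexts, and the tiny corpus below is made up.

```python
from collections import Counter, defaultdict

# Toy "autoregressive" model: learn which word tends to follow which
# from a tiny made-up corpus, then predict the next word greedily.
corpus = "the doctor saw the patient and the doctor wrote a note".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often observed after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "doctor" follows "the" twice, "patient" once
```

Real language models do the same thing in spirit, replacing the frequency table with a learned probability distribution over an entire vocabulary.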

Through assessing word similarity, Matthews found that ChatGPT contains gendered language bias, which he said reflects the bias in human communication.

For example, Matthews input the names Mary and James, the most common female and male names in the United States, along with Sam, which was used as a gender-neutral name. He found that James is closer to the occupations of lawyer, programmer and doctor than the other names are, particularly Mary.
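The kind of similarity probe Matthews describes can be illustrated with cosine similarity over embedding vectors. The vectors below are hand-made, three-dimensional stand-ins chosen purely for illustration; real model embeddings have hundreds or thousands of dimensions and are learned from text, not assigned by hand.

```python
import math

# Hypothetical toy vectors -- NOT real model embeddings.
vectors = {
    "James":  [0.9, 0.1, 0.3],
    "Mary":   [0.1, 0.9, 0.3],
    "Sam":    [0.5, 0.5, 0.3],
    "lawyer": [0.8, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# With these toy numbers, "James" lands closest to "lawyer",
# mirroring the bias pattern described in the panel.
for name in ("James", "Sam", "Mary"):
    print(name, round(cosine(vectors[name], vectors["lawyer"]), 3))
```

A real probe would fetch the vectors from a model's embedding layer and compare many name-occupation pairs, but the arithmetic is the same.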

Matthews explained that these biases were more prevalent in previous language modeling systems, but that the makers of GPT-3.5 (the embedding model behind ChatGPT, as opposed to GPT-3, the model currently available to the public) have acknowledged bias in their systems.

"It's not just that [these models] learn language. They're also exposed to biases that are present in text," Matthews said. "This can be visible in social contexts especially, and if we're deploying these models, this has consequences if they're used in decision making."

Matthews also demonstrated that encoding systems can textually analyze and compare literary works, such as those by Shakespeare and Dickinson, making them a valuable resource for humanists, especially regarding large texts.

"Humanists are already engaged in thinking about these types of questions [referring to the model's semantics and cultural analyses]," Matthews said. "But we might not have the capacity or the time to analyze the breadth of text that we want to, and we might not be able to assign or even to recall all the things that we're reading. So if we're using this in parallel with the existing skill sets that humanists have, I think that this is really valuable."

Christiansen, who is part of a new University-wide committee looking into the potential use of generative AI, then talked about the opportunities and challenges of the use of AI in education and teaching.

Christiansen described that one positive pedagogical use of ChatGPT is to have students ask the software specific questions and then for the students to criticize the answers. He also explained that ChatGPT may help with the planning process of writing, which he noted many students frequently discount.

"I think also, importantly, that [utilizing ChatGPT in writing exercises] can actually provide a bit of a level playing field for second language learners, of which we have many here at Cornell," Christiansen said.

Christiansen added that ChatGPT can act as a personal tutor, help students develop better audience sensitivity, work as a translator and provide summaries.

However, these models also have several limitations. For instance, ChatGPT knows very little about any events that occurred after September 2021 and will be clueless about recent issues, such as the Ukraine war.

Furthermore, Christiansen emphasized that these models can and will "hallucinate," that is, make up information, including fabricating references. He also noted that students could potentially use ChatGPT to violate academic integrity.

Overall, Dubreuil expressed concern for the impact of technologies such as ChatGPT on innovation. He explained that ChatGPT currently only reorganizes data, which falls short of true invention.

"There is a wide range between simply incremental inventions and rearrangements that are such that they not only rearrange the content, but they reconfigure the given and the way the given was produced: its meanings, its values and its consequences," Dubreuil said.

Dubreuil argued that if standards for human communication do not require invention, not only will AI produce work that is not truly creative, but humans may become less inventive as well.

"It has to be said that through social media, especially through our algorithmic life these days, we may have prepared our own minds to become much more similar to a chatbot. We may be reprogramming ourselves constantly, and that's the danger," Dubreuil said. "The challenge of AI is a provocation toward reform."

Correction, March 27, 2:26 p.m.: A previous version of this article incorrectly stated the time frame about which ChatGPT is familiar and the current leaders of the Humanities Lab. In addition, minor clarification has been added to the description of Christiansen and Dubreuil's study on AI poetry generation. The Sun regrets these errors, and the article has been corrected.

More:
ChatGPT in the Humanities Panel: Researchers Share Concerns ... - Cornell University The Cornell Daily Sun

In Pics | Pope Francis gets an Artificial Intelligence-makeover … – WION

Written By: Mukul Sharma|Updated: Mar 28, 2023, 07:40 PM IST

Artificial intelligence is the fad of these times, with its latest creations making previously unimaginable scenes look real, whether it's former US President Barack Obama enjoying a gala time at a beach with former German Chancellor Angela Merkel, or Donald Trump's 'artificial' arrest. Pictures of these imagined realities, led by the 'artificial' side of technological intelligence, continue to stoke curious minds around the world.

Lately, Pope Francis, the head of the Roman Catholic Church, became the center of a few artificial intelligence showpieces, as they imagined him sporting some luxurious fashion.

Pope Francis is the Head of the Catholic Church, the bishop of Rome and sovereign of the Vatican City State. In this picture, Pope Francis holds a child at the end of an audience with people of Rho, near Milan, at the Paul VI hall in the Vatican on March 25, 2023. For many who are used to seeing the Pope leading Catholic religious services, the AI avatar shown further was nothing short of a shock.

'The Pope drip' becomes an AI trend

The Pope's AI virality reflects how AI images may be believed as true in coming times

People indeed believed the Pope's AI images were real

Read more from the original source:
In Pics | Pope Francis gets an Artificial Intelligence-makeover ... - WION

Artificial intelligence and Baptist associations – The Baptist Paper

Artificial intelligence applications such as ChatGPT are growing rapidly. The creators themselves are surprised by the fast-growing interest in this resource, in part because the accuracy rate is still not what they believe it soon will be.

I use ChatGPT as a research and background tool. I asked what it knew about Baptist associations. On Feb. 26, 2023, I asked the following question: "What is the importance of Baptist associations to the Southern Baptist Convention?"

Here is what ChatGPT said:

Baptist associations are important to the Southern Baptist Convention (SBC) for several reasons.

First, they provide a way for local churches to connect with each other and work together in missions and ministry. Associations typically consist of a group of churches in a geographic area that share common beliefs and practices. By joining together in an association, churches can pool their resources and collaborate on projects that would be difficult to undertake alone.

Second, associations serve as a conduit for communication and cooperation between local churches and the larger SBC. Associations are usually affiliated with a state convention, which in turn is affiliated with the SBC. Through this structure, churches can participate in the work of the convention and have a voice in its decision-making processes.

Third, associations provide a means for training and support for pastors and other church leaders. Many associations offer workshops, conferences and other resources to help pastors and leaders develop their skills and deepen their understanding of Baptist theology and practice.

Overall, Baptist associations play a vital role in the life of the Southern Baptist Convention, connecting local churches, facilitating cooperation and communication and providing support and training for church leaders.

I posted the response to social media groups for the network of Baptist associations and elsewhere, and asked if ChatGPT got it right. The response was positive; some commented it was almost scary.

Was it complete and accurate? No, but it would point an inquirer in a generally correct direction.

This means the internet contains enough information that an initial understanding can be acquired about Baptist associations in the Southern Baptist tradition.

Use of the phrase "in the Southern Baptist tradition" is important because many of the three dozen Baptist denominations in North America have associations as part of their denominational structure.

First, artificial intelligence apps such as ChatGPT are helpful in summarizing information about Baptist associations in the Southern Baptist tradition. But they should not be accepted as authoritative. Baptist associations as a family are about a quality of relationships ChatGPT does not know.

Second, AI answers are incomplete.

At times they miss qualitative understandings. Such understandings require subjective experiences. The contextual and doctrinal understandings of associations differ widely throughout North America.

Third, AI may provide characteristics the inquirer has not thought about. This can be informative or at least cause a reader to ponder if they need to think more deeply about associations.

Fourth, the answers to specific questions we ask ChatGPT can change with each new update of the app's content database.

Fifth, the answer provided by an AI app should never be written or spoken without citing the source. Attribution is imperative. I placed the answer AI gave me in quotation marks, noted the date the information was accessed and indicated the specific app I used.

For churches in association and for individual churches and their leaders, artificial intelligence apps can be a great source of initial research on any topic relevant to church and denominational ministry.

On some religious, spiritual and theological questions, the app includes in its answer a cautionary statement that it is not an expert on the question asked or the subject of the inquiry. This is ethical and responsible.

One phrase I see in response to questions posed to ChatGPT is, "As a language model AI, I don't have personal opinions or beliefs."

In our speaking and writing, let's all be this ethical and responsible.

Use ChatGPT and other similar apps for research and background information and to stimulate your thinking but always remember that your knowledge of our Kingdom relationships is much richer.

See the original post:
Artificial intelligence and Baptist associations - The Baptist Paper