Media Search:



Creating artificial intelligence that acts mo – EurekAlert

A research group from the Graduate School of Informatics, Nagoya University, has taken a big step towards creating a neural network with metamemory through a computer-based evolution experiment.

In recent years, there has been rapid progress in designing artificial intelligence technology using neural networks that imitate brain circuits. One goal of this field of research is understanding the evolution of metamemory to use it to create artificial intelligence with a human-like mind.

Metamemory is the process by which we ask ourselves whether we remember what we had for dinner yesterday and then use that memory to decide whether to eat something different tonight. While this may seem like a simple question, answering it involves a complex process. Metamemory is important because it involves a person having knowledge of their own memory capabilities and adjusting their behavior accordingly.

"In order to elucidate the evolutionary basis of the human mind and consciousness, it is important to understand metamemory," explains lead author Professor Takaya Arita. "A truly human-like artificial intelligence, which can be interacted with and enjoyed like a family member in a person's home, is an artificial intelligence that has a certain amount of metamemory, as it has the ability to remember things that it once heard or learned."

When studying metamemory, researchers often employ a delayed matching-to-sample task. In humans, this task consists of the participant seeing an object, such as a red circle, remembering it, and then taking part in a test to select the thing that they had previously seen from multiple similar objects. Correct answers are rewarded and wrong answers punished. However, the subject can choose not to do the test and still earn a smaller reward.

A human performing this task would naturally use their metamemory to consider if they remembered seeing the object. If they remembered it, they would take the test to get the bigger reward, and if they were unsure, they would avoid risking the penalty and receive the smaller reward instead. Previous studies reported that monkeys could perform this task as well.
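To make the task's incentive structure concrete, here is a minimal Python sketch of one trial's payoff logic. The reward values, the memory-noise model, and the 0.5 confidence cut-off are illustrative assumptions, not parameters from the study.

```python
import random

# Illustrative payoffs (assumed, not from the study).
REWARD_CORRECT = 1.0   # large reward for passing the memory test
REWARD_OPT_OUT = 0.3   # smaller, guaranteed reward for declining the test
PENALTY_WRONG = -1.0   # penalty for taking the test and failing

def trial(memory_strength: float) -> float:
    """One delayed matching-to-sample trial.

    `memory_strength` in [0, 1] is the chance the sample is still
    remembered at test time; a metamemory-equipped agent can read this
    confidence out and decide whether the test is worth the risk.
    """
    remembered = random.random() < memory_strength
    if memory_strength >= 0.5:          # confident: take the test
        return REWARD_CORRECT if remembered else PENALTY_WRONG
    return REWARD_OPT_OUT               # unsure: settle for the small reward

# Average payoff over many trials at different memory strengths.
for strength in (0.2, 0.5, 0.9):
    mean = sum(trial(strength) for _ in range(10_000)) / 10_000
    print(f"memory strength {strength:.1f}: mean payoff {mean:+.2f}")
```

The opt-out option only pays off when the agent can assess its own memory; that self-assessment is exactly what metamemory provides.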

The Nagoya University team comprising Professor Takaya Arita, Yusuke Yamato, and Reiji Suzuki of the Graduate School of Informatics created an artificial neural network model that performed the delayed matching-to-sample task and analyzed how it behaved.

Despite starting from random neural networks that did not even have a memory function, the model was able to evolve to the point that it performed similarly to the monkeys in previous studies. The neural network could examine its memories, keep them, and separate its outputs. It managed this without requiring any assistance or intervention by the researchers, suggesting the plausibility of its having metamemory mechanisms.

"The need for metamemory depends on the user's environment. Therefore, it is important for artificial intelligence to have a metamemory that adapts to its environment by learning and evolving," says Professor Arita of the finding. "The key point is that the artificial intelligence learns and evolves to create a metamemory that adapts to its environment."
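The study evolved full neural networks with neuromodulation; as a far simpler illustration of how an opt-out policy can emerge from selection on reward alone, the following sketch evolves just a single confidence threshold with a (1+1)-style hill climber. All payoffs are the same assumed values as in the sketch above, not the study's.

```python
import random

def fitness(threshold: float, trials: int = 2_000) -> float:
    """Mean payoff of a policy that takes the test only when its
    memory strength exceeds `threshold` (payoffs assumed as above)."""
    total = 0.0
    for _ in range(trials):
        strength = random.random()        # memory varies trial to trial
        remembered = random.random() < strength
        if strength >= threshold:         # confident: take the test
            total += 1.0 if remembered else -1.0
        else:                             # unsure: take the sure reward
            total += 0.3
    return total / trials

threshold = random.random()               # random initial policy
for generation in range(200):
    mutant = min(1.0, max(0.0, threshold + random.gauss(0, 0.05)))
    if fitness(mutant) >= fitness(threshold):
        threshold = mutant                # selection keeps the better policy

print(f"evolved opt-out threshold: {threshold:.2f}")
```

With these payoffs, taking the test beats opting out only when the expected payoff 2s - 1 exceeds the sure 0.3, so the threshold should settle near 0.65. The point is that opt-out behaviour falls out of reward maximization rather than explicit programming, which is the flavour of the study's result.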

Creating an adaptable intelligence with metamemory is a big step towards making machines that have memories like ours. The team is enthusiastic about the future: "This achievement is expected to provide clues to the realization of artificial intelligence with a human-like mind and even consciousness."

The research results were published in the online edition of the international scientific journal Scientific Reports. The study was partly supported by JSPS/MEXT Grants-in-Aid for Scientific Research (KAKENHI JP17H06383 in #4903).

Scientific Reports

Evolution of metamemory based on self-reference to own memory in artificial neural network with neuromodulation

Read the rest here:
Creating artificial intelligence that acts mo - EurekAlert

Redefining success in business with Artificial Intelligence – Free Press Journal

To unleash the full potential of artificial intelligence (AI), contemporary leadership that augments humans with AI, rather than strategizing to replace humans with AI, is the need of the hour, from a balanced social, legal, economic, political, and technological perspective.

In the current post-pandemic scenario, it is a prerequisite for organizations to realign their business processes with AI, considering adaptability, complexity, scalability, decision making, and customization of products and services. For example, at Morgan Stanley, robo-advisors offer clients an array of investment options based on real-time market information; at Pfizer, wearable sensors for Parkinson's patients track symptoms 24/7, allowing customized treatment; and at Unilever, automated applicant screening vastly expands the pool of qualified candidates for hiring managers to appraise.

According to Harvard Business Review, organizations' performance and bottom lines improve when humans and AI augment each other, enhancing the life skills of individuals and teams and the technical skills of machines through the right fusion of learning and development activities.

The pandemic and its resultant transformations have fast-tracked the mechanization of many everyday jobs, and skepticism towards artificial intelligence (AI) has added to fears of a rising unemployment rate. In reality, if organizations and governments act appropriately and strategically, the reverse might be true: AI will add more jobs. To redefine success with AI, humans need to execute three vital roles, at three entry points: input, process, and output. At the input level, humans need to give the right information and construct machines to accomplish particular departmental and organizational objectives. At the process point, the role of humans is to maintain ethics and integrity through responsible use of the machines.

At the output level, humans must keep a check on the quality and quantity of output to maintain the code of conduct and avoid controversial episodes like Tay, an artificial-intelligence chatbot released by Microsoft Corporation via Twitter in 2016, which caused controversy when it began to tweet provocative and offensive messages, prompting Microsoft to shut the service down only 16 hours after its launch.

Humans also play a major role in developing the traits of virtual personal assistants like Google Assistant, Siri, and Alexa, to ensure that they precisely echo their organizations' brands, be it Apple or Amazon. AI can also increase creativity, empowering individuals to focus on unique human strengths like first impressions and judgments. Consider how Autodesk's Dreamcatcher AI captures the creative mind of the best designers. The designer provides Dreamcatcher with the criteria for the desired product, then steers the software by indicating which chair design he or she likes or dislikes, leading to a new design round. This allows designers to focus on leveraging their unique human strengths of expert judgment and aesthetics.

In the current context of COVID-19, organizations have rapidly applied machine-learning expertise in several areas, including expanding customer communication, understanding the COVID-19 epidemic, and augmenting research and treatment. An example is Clevy.io, a French start-up that launched a chatbot using real-time information from the French administration and the WHO to assess recognized symptoms and respond to queries about government policy.

Redefining business success by leveraging the combined utility of AI and humans involves more than executing strategic AI plans. It also necessitates a substantial commitment to developing individuals and teams with the right blend of upskilling and reskilling, empowering individuals to take action and deliver quantifiable results effectively at the human-machine interface.

(This article is authored by Dr Kasturi R Naik, Assistant Professor, DES's NMITD, and Dr Srinivasan R Iyengar, Director, JBIMS)

View post:
Redefining success in business with Artificial Intelligence - Free Press Journal

Artificial intelligence spotted inventing its own creepy language and its baffling researchers… – The US Sun

AN ARTIFICIAL intelligence program has developed its own language and no one can understand it.

OpenAI is an artificial intelligence systems developer - its programs are fantastic examples of supercomputing, but there are quirks.

DALL-E 2 is OpenAI's latest AI system - it can generate realistic or artistic images from user-entered text descriptions.

DALL-E 2 represents a milestone in machine learning - OpenAI's site says the program "learned the relationship between images and the text used to describe them."

A DALL-E 2 demonstration includes interactive keywords for visiting users to play with and generate images - toggling different keywords results in different images, styles, and subjects.

But the system has one strange behavior - it's writing its own language of random arrangements of letters, and researchers don't know why.

Giannis Daras, a computer science Ph.D. student at the University of Texas, published a Twitter thread detailing DALL-E 2's unexplained new language.

Daras told DALL-E 2 to create an image of "farmers talking about vegetables" and the program did so, but the farmers' speech read "vicootes" - some unknown AI word.

Daras fed "vicootes" back into the DALL-E 2 system and got back pictures of vegetables.

"We then feed the words: 'Apoploe vesrreaitars' and we get birds." Daras wrote on Twitter.

"It seems that the farmers are talking about birds, messing with their vegetables!"

Daras and a co-author have written a paper on DALL-E 2's "hidden vocabulary".

They acknowledge that telling DALL-E 2 to generate images of words - the command "an image of the word airplane" is Daras' example - normally results in DALL-E 2 spitting out "gibberish text".

When plugged back into DALL-E 2, that gibberish text will result in images of airplanes - which says something about the way DALL-E 2 talks to and thinks of itself.
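The loop Daras describes is easy to state as a procedure. Below is a hedged sketch of it, assuming access to OpenAI's Images API via the legacy openai-python (v0.x) interface, which postdates the hosted demo Daras actually used; the gibberish string still has to be read off the generated image by a human, and the API key is a placeholder.

```python
import openai  # assumes the legacy openai-python (v0.x) Images interface

openai.api_key = "YOUR_API_KEY"  # placeholder

# Step 1: ask for an image whose content includes rendered "text".
first = openai.Image.create(
    prompt="farmers talking about vegetables", n=1, size="512x512"
)
print(first["data"][0]["url"])   # open the image and note any gibberish text

# Step 2: feed the transcribed gibberish back in as a prompt of its own.
gibberish = "Apoploe vesrreaitars"  # the string reported in Daras's thread
second = openai.Image.create(prompt=gibberish, n=1, size="512x512")
print(second["data"][0]["url"])  # reportedly yields images of birds
```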

Some AI researchers have argued that DALL-E 2's gibberish text is "random noise".

Hopefully, we won't discover too late that DALL-E 2's second language was a security flaw that needed patching.

Visit link:
Artificial intelligence spotted inventing its own creepy language and its baffling researchers... - The US Sun

Artificial Intelligence-enabled Drug Discovery Competitive Analysis Report 2022: A Benchmarking System to Spark Companies to Action – Innovation that…

DUBLIN--(BUSINESS WIRE)--The "Artificial Intelligence-enabled Drug Discovery, 2022: Frost Radar Report" report has been added to ResearchAndMarkets.com's offering.

This report presents a competitive profile of each company based on its strengths and opportunities, along with a brief discussion of its positioning.

The report finds that the impact of AI on the entire pharma value chain can more than double what is achievable using traditional analytics and capture between 2% and 3% of industry revenue, amounting to more than $50 billion in potential annual impact.
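As a quick back-of-envelope reading of those figures (our arithmetic, not the report's), a 2-3% revenue share amounting to more than $50 billion implies an industry revenue base of roughly $1.7-2.5 trillion:

```python
# If 2-3% of industry revenue amounts to ~$50B, the implied revenue base is:
for share in (0.02, 0.03):
    print(f"{share:.0%} share -> implied base ${50e9 / share / 1e12:.2f}T")
```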

Pharmaceutical drug discovery and development has been suffering from declining success rates for new molecules, primarily because of the poor external validity of preclinical models and molecules' lack of efficacy against the intended disease indication.

Drug success rates remain low: only about 1 in 10 molecules that enter clinical phases pushes through to FDA approval. Frost & Sullivan finds that traditional solutions, which focus primarily on data from limited sources and on rule-based computational techniques for understanding targets and leads, are inefficient.

Artificial intelligence (AI) is set to transform the drug discovery landscape. AI-based products and solutions are transforming drug discovery and development dynamics by enabling pharmaceutical players to shorten discovery timelines, enhance process agility, increase prediction accuracy on efficacy and safety, and improve the opportunity to diversify drug pipelines using a cost-effective model.

Most pharmaceutical vendors are focused on collecting, creating, and augmenting data from across laboratories, clinical trials, real-world evidence, biobanks, and repositories. The increasing volume and veracity of clinical and research data is compelling traditional providers to leverage enabling tools and technologies such as cloud computing, AI and machine learning, natural language processing, and advanced analytics to make a shift to a relatively fast, rational data-driven drug discovery and development approach.

To remain competitive, companies must strike the right balance of data, AI, and computational capability and match it with wet-lab capability. Understanding of biological networks and drug-target interactions remains inadequate. Enter AI, which has been able to support the identification and prioritization of disease-specific therapeutic targets based on gene-disease associations. Such results must then be replicated and validated through in vitro experiments and in vivo models.

Key Topics Covered:

1. Strategic Imperative and Growth Environment

2. Frost Radar

3. Companies to Action

4. Strategic Insights

5. Next Steps

For more information about this report visit https://www.researchandmarkets.com/r/h6d2f7

Read the original:
Artificial Intelligence-enabled Drug Discovery Competitive Analysis Report 2022: A Benchmarking System to Spark Companies to Action - Innovation that...

Artificial intelligence tool learns song of the reef to determine ecosystem health – Cosmos

Coral reefs are among Earth's most stunning and biodiverse ecosystems. Yet, as human-induced climate change warms the oceans, we are seeing growing numbers of these living habitats die.

The urgency of the crisis facing coral reefs around the world was highlighted in a recent study showing that 91% of Australia's Great Barrier Reef experienced coral bleaching in the summer of 2021-22 due to heat stress from rising water temperatures.

Determining reef health is key to gauging the extent of the problem and developing ways of intervening to save these ecosystems, and a new artificial intelligence (AI) tool has been developed to measure reef health using sound.

Research coming out of the UK is using AI to study the soundscape of Indonesian reefs to determine the health of these ecosystems. The results, published in Ecological Indicators, show that the AI tool could learn the "song of the reef" and determine reef health with 92% accuracy.

The findings are being used to track the progress of reef restoration.

"Coral reefs are facing multiple threats, including climate change, so monitoring their health and the success of conservation projects is vital," says lead author Ben Williams of the UK's University of Exeter.

One major difficulty is that visual and acoustic surveys of reefs usually rely on labour-intensive methods. Visual surveys are also limited by the fact that many reef creatures conceal themselves, or are active at night, while the complexity of reef sounds has made it difficult to identify reef health using individual recordings.

"Our approach to that problem was to use machine learning, to see whether a computer could learn the song of the reef. Our findings show that a computer can pick up patterns that are undetectable to the human ear. It can tell us faster, and more accurately, how the reef is doing."

Fish and other creatures make a variety of sounds in coral reefs. While the meaning of many of these calls remains a mystery, the new machine-learning algorithm can distinguish overall between healthy and unhealthy reefs.
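For a sense of how this kind of acoustic classification works in practice, here is a minimal sketch of the generic features-plus-classifier approach, not the authors' actual pipeline; the file names and labels are placeholders, and it assumes the librosa and scikit-learn packages.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def reef_features(path: str) -> np.ndarray:
    """Summarize one recording as the mean of its MFCC coefficients."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical labelled recordings: 1 = healthy reef, 0 = degraded reef.
paths = ["healthy_01.wav", "healthy_02.wav", "degraded_01.wav", "degraded_02.wav"]
labels = [1, 1, 0, 0]

X = np.stack([reef_features(p) for p in paths])
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)

# Score a new recording: 1 = sounds healthy, 0 = sounds degraded.
print(model.predict([reef_features("new_site.wav")]))
```

A real system would need many labelled recordings and careful validation, but the shape of the problem is the same: turn sound into features, then learn the boundary between healthy and degraded.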

Recordings used in the study were taken at the Mars Coral Reef Restoration Project, which is restoring heavily damaged reefs in Indonesia.

The study's co-author Dr Tim Lamont, a marine biologist at Lancaster University, said the AI method provides advantages in monitoring coral reefs.

"This is a really exciting development," says Lamont. "Sound recorders and AI could be used around the world to monitor the health of reefs, and discover whether attempts to protect and restore them are working."

"In many cases it's easier and cheaper to deploy an underwater hydrophone on a reef and leave it there than to have expert divers visiting the reef repeatedly to survey it, especially in remote locations."

Read more:
Artificial intelligence tool learns song of the reef to determine ecosystem health - Cosmos