Archive for the ‘Artificial Intelligence’ Category

Early Detection of Arthritis Now Possible Thanks to Artificial Intelligence – SciTechDaily

A new study finds that utilizing artificial intelligence could allow scientists to detect arthritis earlier.

Researchers have been able to teach artificial intelligence neural networks to distinguish between two different kinds of arthritis and healthy joints. The neural network was able to detect 82% of the healthy joints and 75% of cases of rheumatoid arthritis. When combined with the expertise of a doctor, it could lead to much more accurate diagnoses. Researchers are planning to investigate this approach further in another project.

This breakthrough by a team of doctors and computer scientists has been published in the journal Frontiers in Medicine.

There are many different varieties of arthritis, and determining which type of inflammatory illness is affecting a patient's joints may be difficult. Computer scientists and physicians from Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) and Universitätsklinikum Erlangen have now taught artificial neural networks to distinguish between rheumatoid arthritis, psoriatic arthritis, and healthy joints in an interdisciplinary research effort.

Within the scope of the BMBF-funded project "Molecular characterization of arthritis remission" (MASCARA), a team led by Prof. Andreas Maier and Lukas Folle from the Chair of Computer Science 5 (Pattern Recognition) and PD Dr. Arnd Kleyer and Prof. Dr. Georg Schett from the Department of Medicine 3 at Universitätsklinikum Erlangen was tasked with investigating the following questions: Can artificial intelligence (AI) recognize different forms of arthritis based on joint shape patterns? Is this strategy useful for making more precise diagnoses of undifferentiated arthritis? Is there any part of the joint that should be inspected more carefully during a diagnosis?

Currently, a lack of biomarkers makes correct categorization of the relevant form of arthritis challenging. X-ray images used to aid diagnosis are also not completely reliable, since their two-dimensionality is insufficiently precise and leaves room for interpretation. This is in addition to the challenge of positioning the joint under examination for X-ray imaging.

To find the answers to its questions, the research team focused its investigations on the metacarpophalangeal joints of the fingers, regions of the body that are very often affected early on in patients with autoimmune diseases such as rheumatoid arthritis or psoriatic arthritis. A network of artificial neurons was trained using finger scans from high-resolution peripheral quantitative computed tomography (HR-pQCT) with the aim of differentiating between healthy joints and those of patients with rheumatoid or psoriatic arthritis.

HR-pQCT was selected because it is currently the best quantitative method for producing three-dimensional images of human bones at the highest resolution. In the case of arthritis, changes in the structure of bones can be detected very accurately, which makes precise classification possible.
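Neither the release nor the paper abstract cited below specifies the network architecture. Purely as a hedged illustration of the kind of model such a three-class task calls for, the following Python sketch builds a small 3D convolutional classifier over single-joint HR-pQCT volumes; the layer sizes, input dimensions, and class labels are assumptions, not the authors' published design.

```python
# A minimal sketch (not the authors' published architecture) of a 3-class
# classifier over 3D HR-pQCT joint volumes, using PyTorch. Input size,
# channel counts, and class labels are illustrative assumptions.
import torch
import torch.nn as nn

CLASSES = ["healthy", "rheumatoid_arthritis", "psoriatic_arthritis"]

class JointShapeClassifier(nn.Module):
    def __init__(self, num_classes: int = len(CLASSES)):
        super().__init__()
        # Small 3D convolutional feature extractor over a single-channel volume.
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),            # global pooling -> (batch, 64, 1, 1, 1)
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, depth, height, width) HR-pQCT sub-volume around one joint
        h = self.features(x).flatten(1)
        return self.classifier(h)               # unnormalized class scores (logits)

if __name__ == "__main__":
    model = JointShapeClassifier()
    dummy_scan = torch.randn(2, 1, 64, 64, 64)  # two fake scans, illustrative size
    print(model(dummy_scan).shape)              # torch.Size([2, 3])
```

The global average pooling at the end keeps the classification head independent of the exact scan size, which is convenient when sub-volumes around different joints vary slightly.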

A total of 932 new HR-pQCT scans from 611 patients were then used to check whether the artificial network could actually apply what it had learned: Could it provide a correct assessment of the previously classified finger joints?

The results showed that the AI detected 82% of the healthy joints, 75% of the cases of rheumatoid arthritis, and 68% of the cases of psoriatic arthritis, which is a very high hit rate without any further information. When combined with the expertise of a rheumatologist, this could lead to much more accurate diagnoses. In addition, when presented with cases of undifferentiated arthritis, the network was able to classify them correctly.
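The percentages quoted here are per-class detection rates (sensitivity). As a small sketch of how such figures are computed from a labeled evaluation set (the toy arrays below are invented, not the study's 932 scans):

```python
# Illustrative only: computing per-class detection rates (recall/sensitivity)
# from ground-truth labels and model predictions. The arrays are made-up toy
# data, not the study's evaluation set.
import numpy as np

CLASSES = ["healthy", "rheumatoid_arthritis", "psoriatic_arthritis"]

y_true = np.array([0, 0, 0, 1, 1, 2, 2, 2])   # ground-truth class indices
y_pred = np.array([0, 0, 1, 1, 0, 2, 2, 1])   # model's predicted class indices

for idx, name in enumerate(CLASSES):
    mask = y_true == idx                       # all evaluation joints of this class
    recall = (y_pred[mask] == idx).mean()      # fraction the model detected correctly
    print(f"{name}: {recall:.0%} of cases detected")
```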

"We are very satisfied with the results of the study, as they show that artificial intelligence can help us to classify arthritis more easily, which could lead to quicker and more targeted treatment for patients. However, we are aware of the fact that there are other categories that need to be fed into the network. We are also planning to transfer the AI method to other imaging methods such as ultrasound or MRI, which are more readily available," explains Lukas Folle.

Whereas the research team was able to use high-resolution computed tomography, this type of imaging is only rarely available to physicians under normal circumstances because of constraints in terms of space and costs. However, these new findings are still useful, as the neural network detected certain areas of the joints that provide the most information about a specific type of arthritis, known as intra-articular hotspots. "In the future, this could mean that physicians could use these areas as another piece in the diagnostic puzzle to confirm suspected cases," explains Dr. Kleyer. "This would save time and effort during the diagnosis and is in fact already possible using ultrasound, for example." Kleyer and Maier are planning to investigate this approach further in another project with their research groups.

Reference: "Deep Learning-Based Classification of Inflammatory Arthritis by Identification of Joint Shape Patterns: How Neural Networks Can Tell Us Where to Deep Dive Clinically" by Lukas Folle, David Simon, Koray Tascilar, Gerhard Krönke, Anna-Maria Liphardt, Andreas Maier, Georg Schett and Arnd Kleyer, 10 March 2022, Frontiers in Medicine. DOI: 10.3389/fmed.2022.850552

View original post here:
Early Detection of Arthritis Now Possible Thanks to Artificial Intelligence - SciTechDaily

Val Kilmer's Return: A.I. Created 40 Models to Revive His Voice Ahead of Top Gun: Maverick – Variety

SPOILER ALERT: Do not read unless you have watched Top Gun: Maverick, in theaters now.

Top Gun fans knew ahead of time that Val Kilmer would be reprising his role of Tom "Iceman" Kazansky in the sequel, but the specifics of the actor's return were a question mark considering Kilmer lost the ability to speak after undergoing throat cancer treatment in 2014. The script for Top Gun: Maverick pulls from Kilmer's real life, with Iceman also having cancer and communicating through typing. Kilmer gets to say one brief line of dialogue. In real life, Kilmer's speaking voice has been revived courtesy of artificial intelligence.

Kilmer announced in August 2021 that he had partnered with Sonantic to create an A.I.-powered speaking voice for himself. The actor supplied the company with hours of archival footage featuring his speaking voice, which was then fed through the company's algorithms and turned into a model. According to Fortune, this process was used again for the actor's Top Gun: Maverick appearance. However, a studio source tells Variety no A.I. was used in the making of the movie.

"In the end, we generated more than 40 different voice models and selected the best, highest-quality, most expressive one," John Flynn, CTO and co-founder of Sonantic, said in a statement to Forbes about reviving Kilmer's voice. "Those new algorithms are now embedded into our voice engine, so future clients can automatically take advantage of them as well."

"I'm grateful to the entire team at Sonantic who masterfully restored my voice in a way I've never imagined possible," Kilmer originally said in a statement about the A.I. "As human beings, the ability to communicate is the core of our existence, and the side effects from throat cancer have made it difficult for others to understand me. The chance to narrate my story, in a voice that feels authentic and familiar, is an incredibly special gift."

As Fortune reports: "After cleaning up old audio recordings of Kilmer, [Sonantic] used a voice engine to teach the voice model how to speak like Kilmer. The engine had around 10 times less data than it would have been given in a typical project, Sonantic said, and it wasn't enough. The company then decided to come up with new algorithms that could produce a higher-quality voice model using the available data."

Top Gun: Maverick is now playing in theaters nationwide.

Read more from the original source:
Val Kilmer's Return: A.I. Created 40 Models to Revive His Voice Ahead of Top Gun: Maverick - Variety

Creating artificial intelligence that acts mo – EurekAlert

A research group from the Graduate School of Informatics, Nagoya University, has taken a big step towards creating a neural network with metamemory through a computer-based evolution experiment.

In recent years, there has been rapid progress in designing artificial intelligence technology using neural networks that imitate brain circuits. One goal of this field of research is to understand the evolution of metamemory and to use that understanding to create artificial intelligence with a human-like mind.

Metamemory is the process by which we ask ourselves whether we remember what we had for dinner yesterday and then use that memory to decide whether to eat something different tonight. While this may seem like a simple question, answering it involves a complex process. Metamemory is important because it involves a person having knowledge of their own memory capabilities and adjusting their behavior accordingly.

"In order to elucidate the evolutionary basis of the human mind and consciousness, it is important to understand metamemory," explains lead author Professor Takaya Arita. "A truly human-like artificial intelligence, which can be interacted with and enjoyed like a family member in a person's home, is an artificial intelligence that has a certain amount of metamemory, as it has the ability to remember things that it once heard or learned."

When studying metamemory, researchers often employ a delayed matching-to-sample task. In humans, this task consists of the participant seeing an object, such as a red circle, remembering it, and then taking part in a test to select the object they had previously seen from among multiple similar objects. Correct answers are rewarded and wrong answers are punished. However, the subject can choose not to take the test and still earn a smaller reward.

A human performing this task would naturally use their metamemory to consider if they remembered seeing the object. If they remembered it, they would take the test to get the bigger reward, and if they were unsure, they would avoid risking the penalty and receive the smaller reward instead. Previous studies reported that monkeys could perform this task as well.
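To make that reward structure concrete, here is a toy Python simulation of the opt-out decision. The reward values and the assumption that memory confidence tracks actual accuracy are illustrative choices, not parameters from the study.

```python
# Toy simulation of a delayed matching-to-sample task with an opt-out option.
# Reward values and the memory-confidence model are arbitrary assumptions,
# not parameters from the Nagoya University study.
import random

REWARD_CORRECT = 1.0   # big reward for passing the test
REWARD_OPT_OUT = 0.3   # smaller, guaranteed reward for declining the test
REWARD_WRONG   = -1.0  # penalty for failing the test

def trial(memory_confidence: float) -> float:
    """One trial: the agent uses its metamemory (confidence in its own memory)
    to decide whether to take the test or opt out."""
    if memory_confidence < 0.5:
        return REWARD_OPT_OUT                  # unsure -> take the safe reward
    # Confident -> take the test; assume confidence tracks actual accuracy.
    correct = random.random() < memory_confidence
    return REWARD_CORRECT if correct else REWARD_WRONG

if __name__ == "__main__":
    random.seed(0)
    rewards = [trial(random.random()) for _ in range(10_000)]
    print("average reward:", sum(rewards) / len(rewards))
```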

The Nagoya University team, comprising Professor Takaya Arita, Yusuke Yamato, and Reiji Suzuki of the Graduate School of Informatics, created an artificial neural network model that performed the delayed matching-to-sample task and analyzed how it behaved.

Despite starting from random neural networks that did not even have a memory function, the model was able to evolve to the point that it performed similarly to the monkeys in previous studies. The neural network could examine its memories, keep them, and separate its outputs. The intelligence was able to do this without requiring any assistance or intervention from the researchers, suggesting the plausibility of it having metamemory mechanisms.

"The need for metamemory depends on the user's environment. Therefore, it is important for artificial intelligence to have a metamemory that adapts to its environment by learning and evolving," says Professor Arita of the finding. "The key point is that the artificial intelligence learns and evolves to create a metamemory that adapts to its environment."
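The release does not detail the evolutionary procedure. As a rough sketch of the general idea of evolving network weights by mutation and selection on task reward, with every parameter (population size, mutation scale, fitness function) a placeholder assumption:

```python
# Minimal sketch of evolving neural-network weights by mutation and selection,
# the general family of method the release describes. Population size, mutation
# scale, and the fitness function are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(0)
POP, DIM, GENERATIONS = 50, 100, 200          # population size, weight count, generations

def fitness(weights: np.ndarray) -> float:
    # Placeholder: in the real experiment this would be the average reward the
    # network earns on the delayed matching-to-sample task.
    return -float(np.sum((weights - 1.0) ** 2))

population = [rng.normal(size=DIM) for _ in range(POP)]
for gen in range(GENERATIONS):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[: POP // 5]              # keep the top 20% as parents
    population = [p + rng.normal(scale=0.05, size=DIM)   # mutated offspring
                  for p in parents for _ in range(POP // len(parents))]

print("best fitness:", fitness(max(population, key=fitness)))
```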

Creating an adaptable intelligence with metamemory is a big step towards making machines that have memories like ours. The team is enthusiastic about the future: "This achievement is expected to provide clues to the realization of artificial intelligence with a human-like mind and even consciousness."

The research results were published in the online edition of the international scientific journal Scientific Reports. The study was partly supported by JSPS/MEXT Grants-in-Aid for Scientific Research KAKENHI (JP17H06383 in #4903).

Scientific Reports

Evolution of metamemory based on self-reference to own memory in artificial neural network with neuromodulation


Read the rest here:
Creating artificial intelligence that acts mo - EurekAlert

Redefining success in business with Artificial Intelligence – Free Press Journal

To unleash the full potential of Artificial Intelligence (AI), what is needed today is contemporary leadership that augments humans with AI, rather than strategizing to replace humans with AI, from a balanced social, legal, economic, political, and technological perspective.

In the current post-pandemic scenario, it is a prerequisite for organizations to realign business processes with AI, considering adaptability, complexity, scalability, decision making, and customization of products and services. For example, at Morgan Stanley, robo-advisors offer clients an array of investment options based on real-time market information; at Pfizer, wearable sensors for Parkinson's patients track symptoms 24/7, allowing customized treatment; and at Unilever, automated applicant screening greatly expands the pool of qualified candidates for hiring managers to appraise.

According to Harvard Business Review, the performance and bottom line of organizations are enhanced when humans and AI augment each other, with the right fusion of learning and development activities enhancing the life skills of individuals and teams and the technical skills of machines.

The pandemic and the resulting transformations have fast-tracked the mechanization of many everyday jobs, and skepticism towards artificial intelligence (AI) has added to fears of rising unemployment. In reality, if organizations and governments act appropriately and strategically, the reverse might be true: AI will create more jobs. To redefine success with AI, humans need to perform three vital roles, at three entry points: input, process, and output. At the input level, humans need to supply the right information and construct machines to accomplish particular departmental and organizational objectives. At the process point, the role of humans is to maintain ethics and integrity through responsible use of the machines.

At the output level, humans keep a check on the quality and quantity of output to maintain the code of conduct and avoid controversial episodes like Tay, an artificial intelligence chatbot released by Microsoft Corporation via Twitter in 2016, which caused controversy when it began to tweet provocative and offensive messages, prompting Microsoft to shut down the service only 16 hours after its launch.

Humans also play a major role in developing the traits of virtual personal assistants like Google Assistant, Siri, and Alexa to ensure that they precisely echo their organization's brand, be it Apple or Amazon. AI can also increase creativity, which will empower individuals to focus on their unique human strengths like first impressions and judgments. Consider how Autodesk's Dreamcatcher AI captures the creative mind of the best designers. The designer provides Dreamcatcher with the criteria for the desired product. The designer can then guide the software by indicating which chair he or she likes or dislikes, leading to a new design round. This allows designers to focus on leveraging their unique human strengths of expert judgment and aesthetics.

In the current context of COVID-19, organizations have speedily applied machine learning expertise in several areas, including expanding customer communication, understanding the COVID-19 epidemic, and augmenting research and treatment. An example is Clevy.io, a French start-up that launched a chatbot which uses real-time information from the French administration and the WHO to assess recognized symptoms and respond to queries about government policy.

Redefining success for the business by leveraging the utility of AI and humans involves more than the execution of strategic AI plans. It also necessitates a substantial commitment to developing individuals and teams with the right blend of upskilling and reskilling, empowering individuals to take action and deliver measurable results effectively at the human-machine interface.

(This article is authored by Dr Kasturi R Naik, Assistant Professor, DES's NMITD, and Dr Srinivasan R Iyengar, Director, JBIMS)


View post:
Redefining success in business with Artificial Intelligence - Free Press Journal

Artificial intelligence spotted inventing its own creepy language and it's baffling researchers… – The US Sun

AN ARTIFICIAL intelligence program has developed its own language and no one can understand it.

OpenAI is an artificial intelligence systems developer - its programs are fantastic examples of super-computing, but there are quirks.


DALL-E 2 is OpenAI's latest AI system - it can generate realistic or artistic images from user-entered text descriptions.

DALL-E 2 represents a milestone in machine learning - OpenAI's site says the program "learned the relationship between images and the text used to describe them."

A DALL-E 2 demonstration includes interactive keywords for visiting users to play with and generate images - toggling different keywords will result in different images, styles, and subjects.

But the system has one strange behavior - it's writing its own language of random arrangements of letters, and researchers don't know why.

Giannis Daras, a computer science Ph.D. student at the University of Texas, published a Twitter thread detailing DALL-E 2's unexplained new language.

Daras told DALL-E 2 to create an image of "farmers talking about vegetables" and the program did so, but the farmers' speech read "vicootes" - some unknown AI word.

Daras fed "vicootes" back into the DALL-E 2 system and got back pictures of vegetables.

"We then feed the words: 'Apoploe vesrreaitars' and we get birds." Daras wrote on Twitter.

"It seems that the farmers are talking about birds, messing with their vegetables!"

Daras and a co-author have written a paper on DALL-E 2's "hidden vocabulary".

They acknowledge that telling DALL-E 2 to generate images of words - the command "an image of the word airplane" is Daras' example - normally results in DALL-E 2 spitting out "gibberish text".

When plugged back into DALL-E 2, that gibberish text will result in images of airplanes - which says something about the way DALL-E 2 talks to and thinks of itself.
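Readers who want to try the feed-the-gibberish-back loop themselves could do so with the OpenAI Python SDK's image endpoint. The sketch below is only an assumption about suitable tooling, not what Daras used; it needs an API key with image access, and the prompts are examples.

```python
# Rough sketch of the "feed the gibberish back in" experiment using the
# OpenAI Python SDK's image-generation endpoint. Requires OPENAI_API_KEY;
# model name and prompts are assumptions, and this is not the researchers'
# exact tooling.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate(prompt: str) -> str:
    """Generate one image for the prompt and return its URL."""
    response = client.images.generate(model="dall-e-2", prompt=prompt, n=1, size="512x512")
    return response.data[0].url

# Step 1: a prompt that tends to make the model render text inside the image.
print(generate("two farmers talking about vegetables, with subtitles"))

# Step 2: feed a gibberish word that appeared in the image back in as a prompt.
print(generate("vicootes"))
```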

Some AI researchers argued that DALL-E 2's gibberish text is "random noise".

Hopefully, we don't come to find that DALL-E 2's second language was a security flaw that needed patching after it's too late.


Visit link:
Artificial intelligence spotted inventing its own creepy language and it's baffling researchers... - The US Sun