Archive for the ‘Machine Learning’ Category

Machine Learning Tool Predicts Forms of Esophageal and Stomach … – Inside Precision Medicine

A new artificial intelligence tool predicts esophageal adenocarcinoma (EAC) and gastric cardia adenocarcinoma (GCA), a form of stomach cancer, at least three years prior to a diagnosis. Both cancers are highly fatal, and rates have risen sharply over the past five decades.

Researchers from the Lieutenant Colonel Charles S. Kettles Veterans Affairs Center for Clinical Management Research developed the machine learning model from the electronic medical records of 10 million US veterans. VA records are unique in that they are automatically linked to cancer registry outcomes, allowing the researchers to look backward through veterans' health records for information that could be used to predict cancer. Their analysis included previous diagnoses, laboratory results, weight, prescription history, and more.

"We were able to identify individuals who developed adenocarcinoma of the esophagus or esophageal junction and used a form of machine learning to learn more about them," explains Joel Rubenstein, MD, a research scientist at the Kettles VA Center and professor of internal medicine at Michigan Medicine, who named the model the Kettles Esophageal and Cardia Adenocarcinoma predictioN tool, or K-ECAN.

The team accessed the Veterans Health Administration (VHA) Corporate Data Warehouse to identify veterans diagnosed with EAC (8,430) or GCA (2,965) over a 13-year period and compared them to 10,256,887 controls. The cohort was then divided: one half was used to develop the K-ECAN model for predicting cancer, a quarter was used to tune the model, and the final quarter was used to validate the results. "We found that the model predicts which individuals would develop these cancers at least three years before they did," Rubenstein says. The model was more accurate than published guidelines in predicting cancer, and more accurate than other previously validated tools that are already available. Their findings were published in Gastroenterology.
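For intuition, here is a minimal sketch in Python of the kind of 50/25/25 development/tuning/validation split described above. The synthetic features and the logistic regression model are illustrative stand-ins, not the actual K-ECAN inputs or architecture.

```python
# A minimal sketch (not the authors' code) of a 50/25/25 split: half the
# cohort trains the model, a quarter tunes it, a quarter validates it.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 8))      # stand-in for age, labs, weight, etc.
y = rng.integers(0, 2, size=10_000)   # 1 = later cancer diagnosis (synthetic)

# Split off half for training, then divide the rest 50/50 into
# tuning and validation quarters.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_tune, X_val, y_tune, y_val = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# The tuning quarter is where thresholds/hyperparameters would be chosen.
print("tuning AUC:", roc_auc_score(y_tune, model.predict_proba(X_tune)[:, 1]))
# Final performance is reported once, on the held-out validation quarter.
print("validation AUC:", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))
```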

The greatest identified risk factor was age, but others were also associated with increased cancer risk, including Barrett's esophagus, a precancerous condition, and gastroesophageal reflux disease (GERD). However, the model revealed other somewhat unexpected factors, including slightly elevated hematocrit, low HDL with elevated LDL, lower serum bicarbonate levels, and higher white blood cell counts.

"All of the screening guidelines for esophageal cancer now rely on GERD symptoms, heartburn and reflux, to identify people who should get screening," says Rubenstein. And while GERD is associated with the cancer, it wasn't particularly important in terms of the amount of information provided to the model. Most people with GERD symptoms will never develop esophageal adenocarcinoma or gastric cardia adenocarcinoma, and roughly half of the patients with these cancers never experienced prior GERD symptoms at all. "This makes K-ECAN particularly useful, because it can identify people who are at elevated risk regardless of whether they have GERD symptoms or not," adds Rubenstein.

While current guidelines already consider screening in high-risk patients, Rubenstein notes that many providers are still unfamiliar with this recommendation and that fewer than 20% of people who have developed the cancer have had prior screening.

"We envision this tool being integrated seamlessly into the electronic health record to notify providers of their patients' elevated risk," Rubenstein explains. Providers would receive automated alerts identifying which patients are at increased risk of developing EAC and GCA. They could then consider screening when an individual is due for a colonoscopy or when refilling acid-reducing medication, as colonoscopy and upper endoscopy can be performed at the same time.

Currently, Rubenstein's team is piloting the tool at the Kettles VA facility.

Read this article:
Machine Learning Tool Predicts Forms of Esophageal and Stomach ... - Inside Precision Medicine

The rise of Machine Learning Robots: Explore machine learning in … – Robotics Tomorrow

Among the latest technological advances, artificial intelligence (AI) and machine learning have become increasingly significant. Their transformative capacity is evident across many fields, and robotics is one example. Machine learning robots are changing the way machines interact with their environment, acquiring knowledge and adapting to new situations. This article explores several trending topics in the technology sector: machine learning robots, their relationship with deep learning, the intersection of robotics and machine learning, and the differences between artificial intelligence and machine learning.

To see how machine learning works in a simple case, consider the recommendation systems used by streaming platforms: they learn from past user behaviour to recommend future audiovisual content. The platform's recommendations are not static; they adapt as the user's preferences change.
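As a toy illustration of that adaptivity (not any platform's actual system), the sketch below keeps a running preference score per genre and decays old behaviour as new viewing arrives; the decay constant and genre names are invented for the example.

```python
# Toy sketch: recommendations adapt as viewing behaviour changes.
# An exponential moving average over genre preferences, purely illustrative.
from collections import defaultdict

ALPHA = 0.3  # how quickly new behaviour outweighs old preferences

def update_preferences(prefs, watched_genre):
    """Decay all scores slightly, then boost the genre just watched."""
    for genre in prefs:
        prefs[genre] *= (1 - ALPHA)
    prefs[watched_genre] += ALPHA
    return prefs

prefs = defaultdict(float)
for genre in ["sci-fi", "sci-fi", "drama", "comedy", "comedy", "comedy"]:
    update_preferences(prefs, genre)

# Recommend from the genres with the highest current scores.
print(sorted(prefs.items(), key=lambda kv: -kv[1]))
```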

A machine learning robot is a robot that uses these machine learning techniques to acquire knowledge and improve its responsiveness based on what it learns. These robots are designed to collect data from their environment through a variety of sensors, process that information, and adjust their behaviour accordingly, greatly extending their autonomy.

The machine learning process allows robots to recognise patterns that help them understand their environment and perform specific tasks more efficiently by applying what they learn. By using machine learning algorithms, robots can learn autonomously without requiring specific programming for each task.

DEEP LEARNING AND MACHINE LEARNING ROBOTS

In technical terms, deep learning is a model within machine learning that is of particular interest to the robotics sector. It is based on layered algorithms known as artificial neural networks, which imitate the functioning of the human brain to process data.

These neural networks allow deep learning robots to process complex data, extract meaningful features, assess whether their predictions are accurate, and thus make better decisions.
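The layered structure the article describes can be shown in a few lines of Python. This is a bare forward pass with random, untrained weights; the layer sizes and the three-action output are purely hypothetical.

```python
# Minimal sketch of "layered" processing: each layer applies weights and a
# nonlinearity to the previous layer's output. Weights are random here for
# illustration; training would adjust them.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One dense layer: linear map followed by a ReLU nonlinearity."""
    W = rng.normal(size=(x.shape[-1], n_out))
    return np.maximum(0.0, x @ W)

x = rng.normal(size=(1, 16))   # e.g. features from a robot's sensors
h1 = layer(x, 32)              # first layer extracts simple features
h2 = layer(h1, 32)             # deeper layer combines them into patterns
logits = h2 @ rng.normal(size=(32, 3))  # scores for 3 hypothetical actions
print("chosen action:", int(np.argmax(logits)))
```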

In short, the development of deep learning algorithms aims to make these networks increasingly efficient with less human supervision.

Thus, the ability of robots to identify objects, recognise speech and understand natural language is driven by deep learning techniques.

MACHINE LEARNING AND ROBOTICS

The intersection of robotics and machine learning opens new possibilities for the autonomy of mobile robots and for the intelligence of their task execution. Machine learning robots are being used in a wide range of applications, from inspection, maintenance and surveillance to manufacturing and healthcare.

Surveillance functions that a mobile robot can already perform efficiently, such as maintenance rounds in an infrastructure, reach higher levels of accuracy and anticipation thanks to machine learning algorithms.

In the manufacturing industry, machine learning robots can improve the efficiency and accuracy of production processes by learning to perform complex tasks more quickly and accurately. In healthcare, we can already see the value they bring by assisting in surgeries, making accurate diagnoses or providing personalised patient care.

WHAT IS THE DIFFERENCE BETWEEN AI AND MACHINE LEARNING?

The main difference between artificial intelligence and machine learning lies in their focus and application.

Artificial intelligence seeks to develop systems capable of performing tasks that require human intelligence, such as speech recognition, decision making and natural language understanding. Moreover, AI works with structured data as well as semi-structured and unstructured data.

Machine learning, on the other hand, focuses on teaching machines to learn from data, improving their performance as they acquire more information. Instead of explicitly programming each step, machine learning allows robots to adapt and improve their behaviour autonomously. Unlike AI more broadly, machine learning typically works only with structured or semi-structured data.

In summary, artificial intelligence is a broad field that involves a variety of techniques and approaches, while machine learning is a specific technique used to train machines to learn and improve their accuracy from experience.

WHICH IS BETTER, ARTIFICIAL INTELLIGENCE OR MACHINE LEARNING?

The question of which is better, artificial intelligence or machine learning, is not a simple one to answer. Artificial intelligence is a broader field that covers a variety of techniques, including machine learning. While artificial intelligence focuses on creating systems that mimic human intelligence in a general way, machine learning focuses on teaching machines to learn from experience and improve from data.

Artificial intelligence is the broader, more aspirational concept, while machine learning is a specific technique within artificial intelligence that has proven to be very effective in a variety of applications. In short, machine learning is a powerful tool used in the field of artificial intelligence.

CONCLUSION

Machine learning robots are changing the way humans interact with technology and also the way technology interacts with the world around it. These robots use machine learning skills to acquire knowledge and improve their performance over time. The field of Artificial Intelligence includes deep learning, as a branch of machine learning, which further boosts the capabilities of these robots by enabling them to process complex data and recognise meaningful patterns.

While artificial intelligence and machine learning are related concepts, machine learning is a specific technique within the broader field of artificial intelligence. Finally, machine learning robots demonstrate the power of combining robotics and machine learning to create machines that are more intelligent, adaptive and ultimately useful to humans.

The rest is here:
The rise of Machine Learning Robots: Explore machine learning in ... - Robotics Tomorrow

UW-Madison: Cancer diagnosis and treatment could get a boost … – University of Wisconsin System

Thanks to machine learning algorithms, short pieces of DNA floating in the bloodstream of cancer patients can help doctors diagnose specific types of cancer and choose the most effective treatment for a patient.

The new analysis technique, created by University of Wisconsin–Madison researchers and published recently in Annals of Oncology, is compatible with liquid biopsy testing equipment already approved in the United States and in use in cancer clinics. This could speed the new method's path to helping patients.

Liquid biopsies rely on simple blood draws instead of taking a piece of cancerous tissue from a tumor with a needle.

"Liquid biopsies are much less invasive than a tissue biopsy, which may even be impossible to do in some cases, depending on where a patient's tumor is," says Marina Sharifi, a professor of medicine and oncologist in UW–Madison's School of Medicine and Public Health. "It's much easier to do them multiple times over the course of a patient's disease to monitor the status of cancer and its response to treatment."

Cancerous tumors shed genetic material, called cell-free DNA, into the bloodstream as they grow. But not all parts of a cancer cell's DNA are equally likely to tumble away. Cells store some of their DNA by coiling it up in protective balls called histones, and unwrap sections to access parts of the genetic code as needed.

Kyle Helzer, a UW–Madison bioinformatics scientist, says that the parts of the DNA containing the genes cancer cells use most often are uncoiled more frequently, and are thus more likely to fragment.

"We're exploiting that larger distribution of those regions among cell-free DNA to identify cancer types," adds Helzer, who is also a co-lead author of the study, along with Sharifi and scientist Jamie Sperger.

The research team, led by UW–Madison senior authors Shuang (George) Zhao, professor of human oncology, and Joshua Lang, professor of medicine, used DNA fragments found in blood samples from a past study of nearly 200 patients (some with cancer, some without), and new samples collected from more than 300 patients treated for breast, lung, prostate or bladder cancers at UW–Madison and other research hospitals in the Big Ten Cancer Research Consortium.

The scientists divided each group of samples in two. One portion was used to train a machine-learning algorithm to identify patterns among the fragments of cell-free DNA: fingerprints relatively specific to different types of cancers. They used the other portion to test the trained algorithm. The algorithm topped 80 percent accuracy in translating the results of a liquid biopsy into both a cancer diagnosis and the specific type of cancer afflicting a patient.
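A hedged sketch of that train-on-one-portion, test-on-the-other evaluation, using synthetic data: the per-region fragment-count features, the four toy cancer labels, and the random forest are all assumptions for illustration, not the study's actual features or model.

```python
# Illustrative sketch, not the paper's code: train a classifier on half the
# samples and measure accuracy on the held-out half.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n_samples, n_regions = 500, 40
# Synthetic stand-in for per-region cell-free DNA fragment counts.
X = rng.poisson(5.0, size=(n_samples, n_regions)).astype(float)
y = rng.integers(0, 4, size=n_samples)  # 0-3: breast, lung, prostate, bladder (toy labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=1)
clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```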

In addition, the machine learning approach was able to tell apart two subtypes of prostate cancer: the most common version, adenocarcinoma, and a swift-progressing variant called neuroendocrine prostate cancer (NEPC) that is resistant to standard treatment approaches. Because NEPC is often difficult to distinguish from adenocarcinoma, but requires aggressive action, it puts oncologists like Lang and Sharifi in a bind.

"Currently, the only way to diagnose NEPC is via a needle biopsy of a tumor site, and it can be difficult to get a conclusive answer from this approach, even if we have a high clinical suspicion for NEPC," Sharifi says.

Liquid biopsies have advantages, Sperger adds, in that "you don't have to know which tumor site to biopsy, and it is much easier for the patient to get a standard blood draw."

The blood samples were processed using cell-free DNA sequencing technology marketed by Iowa-based Integrated DNA Technologies. Using standard panels like those currently in the clinic is a departure from other methods of fragmentomic analysis of cancer DNA in blood samples, one that can reduce the time and cost of testing.

"Most commercial panels have been developed around the most important cancer genes that indicate certain drugs for treatment, and they sequence those select genes," says Zhao. "What we've shown is that we can use those same panels and same targeted genes to look at the fragmentomics of the cell-free DNA in a blood sample and identify the type of cancer a patient has."

The UW Carbone Cancer Center's Circulating Biomarker Core and Biospecimen Disease-Oriented Team contributed to the collection of the study's hundreds of patient samples.

This research was funded in part by grants from the National Institutes of Health (DP2 OD030734, 1UH2CA260389 and R01CA247479) and the Department of Defense (PC190039, PC200334 and PC180469).

Written by Chris Barncard

Link to original story: https://news.wisc.edu/algorithmic-blood-test-analysis-will-ease-diagnosis-of-cancer-types-guide-treatment/

The rest is here:
UW-Madison: Cancer diagnosis and treatment could get a boost ... - University of Wisconsin System

Department of Energy Grant will Fund EECS Professor Lu’s … – University of California, Merced

UC Merced Computer Science and Engineering Professor Xiaoyi Lu is leading a collaboration that secured a $4.35 million grant from the Department of Energy (DOE) to improve federated machine learning systems.

Lu is partnering with the University of Iowa and Argonne National Laboratory near Chicago to improve the understanding of scalable, federated, privacy-preserving machine learning. This project is one of five initiatives centered on distributed resilient systems in science that have collectively received $40 million in funding from the DOE.

"Scientific research is getting more complex and will need next-generation workflows as we move forward with larger data sets and new tools spread across the U.S.," Ceren Susut, DOE acting Associate Director of Science for Advanced Scientific Computing Research, said in a news release announcing the awards. "This program will explore how science can be conducted in this new environment - where tools and data are in multiple places but must be integrated in a high-performance fashion."

According to his abstract, Lu's proposal "aims to address the critical need for a scalable and resilient Federated Learning simulation and modeling system in the context of edge computing-related scientific research and exploration."

Federated Learning embodies a decentralized approach to training machine learning models, placing a strong emphasis on enhancing data privacy. In contrast to the traditional method that requires data to be transferred from client devices to global servers, Federated Learning harnesses raw data residing on edge devices to facilitate local model training. These edge devices, responsible for connecting various devices and facilitating network traffic between them, assume a pivotal role in this process.

"Federated learning is becoming an essential technique for machine learning on edge devices as the sheer amount of raw data generated by these devices requires real-time, effective data processing at the edge device ends," Lu wrote in his abstract. "The processed data carrying intelligent information must be encrypted for privacy protection, making federated learning the best solution for building a well-trained model across decentralized smart edge devices with secure and efficient data-sharing policies."

Lu and his partners propose a scalable and resilient federated learning simulation and modeling system. This system will empower users to harness privacy-preserving algorithms, introduce novel algorithms, and simulate as well as deploy a wide range of federated learning algorithms with privacy-preserving techniques.

"The proposed system brings forth substantial advantages for researchers and developers engaged in real-world federated learning systems," Lu explained. "It furnishes them with a valuable platform for conducting proof-of-concept implementations and performance validation, which are essential prerequisites before deploying and testing their machine learning models in real-world contexts. Additionally, the proposed system is poised to make a significant scientific impact on DOE-mission-based applications, including scientific machine learning and critical infrastructure, where concerns regarding data privacy hold significant weight."

See the original post here:
Department of Energy Grant will Fund EECS Professor Lu's ... - University of California, Merced

Can AI help in climate change? CSU researchers have the answer. – Source

A machine learning model created at CSU has improved forecasters' confidence in storm predictions and is now used daily by the National Weather Service's Storm Prediction Center and Weather Prediction Center.

The model, developed in the Department of Atmospheric Science by a team led by Schumacher, can accurately predict excessive rainfall, hail and tornadoes four to eight days in advance. It is called CSU-MLP, for Colorado State University–Machine Learning Probabilities.

Schumacher's team worked with NWS forecasters over six years to test and refine the model for their purposes. The CSU code now runs on the Storm Prediction Center's and Weather Prediction Center's operational computer systems, helping forecasters predict hazardous weather so that people in harm's way have enough lead time to prepare.

The atmospheric scientists trained the model on historical records of severe weather and on NOAA reforecasts: retrospective forecasts run with today's improved numerical models.
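Schematically, that training setup pairs reforecast output with historical severe weather reports. The sketch below is a rough stand-in under stated assumptions: the predictor names (CAPE, shear, and so on), the synthetic labels, and the gradient-boosting model are illustrative, not the actual CSU-MLP configuration.

```python
# Illustrative sketch: learn a probability of severe weather from
# reforecast-style predictors and historical report labels.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
n = 2000
# Hypothetical reforecast predictors: CAPE, shear, precipitable water, etc.
X = rng.normal(size=(n, 5))
# Synthetic label: did a severe weather report occur at that time/place?
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 1.5).astype(int)

model = GradientBoostingClassifier().fit(X, y)
# Output is a probability of hazardous weather, issued days in advance.
print("example probabilities:", model.predict_proba(X[:3])[:, 1])
```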

Team member Allie Mazurek, a Ph.D. student, is working on explainable AI for the CSU-MLP forecasts. She is trying to figure out which atmospheric data inputs are most important to the model's predictions, so that the model will be more transparent for forecasters.
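One common way to ask "which inputs matter most?" is permutation importance: shuffle one input and measure how much skill drops. The self-contained sketch below applies it to a toy model; the feature names are hypothetical, and the actual CSU-MLP analysis may use different attribution methods.

```python
# Self-contained sketch of permutation importance on a toy forecast model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
names = ["CAPE", "shear", "precip_water"]
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.3 * X[:, 1] > 0.8).astype(int)  # label depends mostly on CAPE

model = GradientBoostingClassifier().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(names, result.importances_mean):
    print(f"{name}: mean importance {imp:.3f}")  # CAPE should rank highest
```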

"These new tools that use AI for weather prediction are developing quickly and showing some really promising and exciting results," Schumacher said. "But they also have limitations, just like traditional weather prediction models and human forecasters have strengths and limitations. The best way to advance the field and improve forecasts will be to take advantage of each of their strengths: the AI for what it's good at, which is identifying patterns in massive datasets; numerical weather prediction models for being grounded in the physics; and humans for synthesizing, understanding and communicating."

Schumacher discusses the promise and limitations of AI for weather prediction in more detail in this piece in The Conversation, co-authored by Aaron Hill, a former CSU research scientist who is now a faculty member at the University of Oklahoma.

Read more from the original source:
Can AI help in climate change? CSU researchers have the answer. - Source