Archive for the ‘Machine Learning’ Category

A Nepalese Machine Learning (ML) Researcher Introduces Papers-With-Video Browser Extension Which Allows Users To Access Videos Related To Research…

Amit Chaudhary, a machine learning (ML) researcher from Nepal, has recently introduced a browser extension that allows users to directly access videos related to research papers published on the platform arXiv.

arXiv has become an essential resource for new machine learning (ML) papers. Launched in 1991 as a storage site for physics preprints, it was named arXiv in 2001 and has since been hosted by Cornell University. To date, arXiv has received close to 2 million submissions across various scientific research fields.

Amit obtained publicly released videos from 2020 ML conferences. He then indexed the videos and reverse-mapped them to the relevant arXiv links through pyarxiv, a dedicated wrapper for the arXiv API. The Google Chrome extension creates a video icon next to the paper title on the arXiv abstract page, enabling users to identify and access available videos related to the paper directly.

Many research teams are creating videos to accompany their papers. These videos can act as a guide, providing demos and other valuable information about the research. In several situations, the videos are created as an alternative to traditional in-person presentations at AI conferences. This is especially useful in current circumstances, as almost all panels have moved to virtual formats due to the Covid-19 pandemic.

The Papers-With-Video extension enables direct video links for around 3.7k arXiv ML papers. Amit aims to figure out how to effectively pair papers with related videos that carry different titles, and with this he hopes to expand coverage to 8k videos. He has solicited community feedback and has already tweaked the extension's functionality based on user remarks and suggestions.
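The title-pairing problem described above can be approximated with fuzzy string matching. Below is a minimal, stdlib-only sketch of that idea; the function names and the 0.8 cutoff are illustrative assumptions, not taken from the actual extension (which maps videos to papers via pyarxiv and the arXiv API):

```python
import difflib
import re

def normalize(title):
    # Lowercase and strip punctuation so superficial differences don't matter
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

def match_video_to_paper(video_title, paper_titles, cutoff=0.8):
    """Return the best-matching paper title for a video, or None if no
    candidate clears the similarity cutoff."""
    norm_map = {normalize(t): t for t in paper_titles}
    hits = difflib.get_close_matches(normalize(video_title),
                                     norm_map.keys(), n=1, cutoff=cutoff)
    return norm_map[hits[0]] if hits else None
```

A stricter cutoff trades recall for precision, which matters for a browser extension: a wrong paper-to-video pairing is arguably worse than a missing one.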

The browser extension is not available on the Google Chrome Web Store yet. However, one can find the extension, installation guide, and further information on GitHub.

GitHub: https://github.com/amitness/papers-with-video

Paper List: https://gist.github.com/amitness/9e5ad24ab963785daca41e2c4cfa9a82



Machine Learning Shown to Identify Patient Response to Sarilumab in Rheumatoid Arthritis – AJMC.com Managed Markets Network

Machine learning was shown to identify patients with rheumatoid arthritis (RA) who present an increased chance of achieving clinical response with sarilumab, with those selected also showing an inferior response to adalimumab, according to an abstract presented at ACR Convergence, the annual meeting of the American College of Rheumatology (ACR).

In prior phase 3 trials comparing the interleukin 6 receptor (IL-6R) inhibitor sarilumab with placebo and the tumor necrosis factor α (TNF-α) inhibitor adalimumab, sarilumab appeared to provide superior efficacy for patients with moderate to severe RA. Although promising, the researchers of the abstract highlight that treatment of RA requires a more individualized approach to maximize efficacy and minimize risk of adverse events.

"The characteristics of patients who are most likely to benefit from sarilumab treatment remain poorly understood," noted the researchers.

Seeking to better identify the patients with RA who may best benefit from sarilumab treatment, the researchers applied machine learning to select from a predefined set of patient characteristics, which they hypothesized may help delineate the patients who could benefit most from either anti-IL-6R or anti-TNF-α treatment.

Following their extraction of data from the sarilumab clinical development program, the researchers utilized a decision tree classification approach to build predictive models on ACR response criteria at week 24 in patients from the phase 3 MOBILITY trial, focusing on the 200-mg dose of sarilumab. They incorporated the Generalized, Unbiased, Interaction Detection and Estimation (GUIDE) algorithm, including 17 categorical and 25 continuous baseline variables as candidate predictors. "These included protein biomarkers, disease activity scoring, and demographic data," added the researchers.

Endpoints used were ACR20, ACR50, and ACR70 at week 24, with the resulting rule validated through application on independent data sets from the other trials in the sarilumab clinical development program.

Assessing the end points used, it was found that the most successful GUIDE model was trained against the ACR20 response. From the 42 candidate predictor variables, the combined presence of anticitrullinated protein antibodies (ACPA) and C-reactive protein >12.3 mg/L was identified as a predictor of better treatment outcomes with sarilumab, with those patients identified as rule-positive.
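The rule the GUIDE model arrived at is simple enough to restate directly in code. This is a hedged sketch of that decision rule as reported in the abstract; the function and parameter names are illustrative, not from the study:

```python
def sarilumab_rule_positive(acpa_positive, crp_mg_per_l):
    """Apply the GUIDE-derived rule from the abstract: a patient is
    'rule-positive' when anticitrullinated protein antibodies (ACPA)
    are present AND C-reactive protein exceeds 12.3 mg/L."""
    return bool(acpa_positive) and crp_mg_per_l > 12.3
```

The appeal of tree-based methods like GUIDE is visible here: out of 42 candidate variables, the final model reduces to a two-condition rule a clinician can evaluate at the bedside.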

These rule-positive patients, who made up 34% to 51% of the sarilumab groups across the 4 trials, were shown to have more severe disease and poorer prognostic factors at baseline. They also exhibited better outcomes than rule-negative patients for most end points assessed, except among patients with inadequate response to TNF inhibitors.

Notably, rule-positive patients had a better response to sarilumab but an inferior response to adalimumab, except on the HAQ-Disability Index minimal clinically important difference end point.

"If verified in prospective studies, this rule could facilitate treatment decision-making for patients with RA," concluded the researchers.

Reference

Rehberg M, Giegerich C, Praestgaard A, et al. Identification of a rule to predict response to sarilumab in patients with rheumatoid arthritis using machine learning and clinical trial data. Presented at: ACR Convergence 2020; November 5-9, 2020. Accessed January 15, 2021. Abstract 2006. https://acrabstracts.org/abstract/identification-of-a-rule-to-predict-response-to-sarilumab-in-patients-with-rheumatoid-arthritis-using-machine-learning-and-clinical-trial-data/


Deep Learning Outperforms Standard Machine Learning in Biomedical Research Applications, Research Shows – Georgia State University News

ATLANTA – Compared to standard machine learning models, deep learning models are largely superior at discerning patterns and discriminative features in brain imaging, despite being more complex in their architecture, according to a new study in Nature Communications led by Georgia State University.

Advanced biomedical technologies such as structural and functional magnetic resonance imaging (MRI and fMRI) or genomic sequencing have produced an enormous volume of data about the human body. By extracting patterns from this information, scientists can glean new insights into health and disease. This is a challenging task, however, given the complexity of the data and the fact that the relationships among types of data are poorly understood.

Deep learning, built on advanced neural networks, can characterize these relationships by combining and analyzing data from many sources. At the Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State researchers are using deep learning to learn more about how mental illness and other disorders affect the brain.

Although deep learning models have been used to solve problems and answer questions in a number of different fields, some experts remain skeptical. Recent critical commentaries have unfavorably compared deep learning with standard machine learning approaches for analyzing brain imaging data.

However, as demonstrated in the study, these conclusions are often based on pre-processed input that deprives deep learning of its main advantage: the ability to learn from the data with little to no preprocessing. Anees Abrol, research scientist at TReNDS and the lead author on the paper, compared representative models from classical machine learning and deep learning, and found that if trained properly, the deep-learning methods have the potential to offer substantially better results, generating superior representations for characterizing the human brain.

"We compared these models side-by-side, observing statistical protocols so everything is apples to apples. And we show that deep learning models perform better, as expected," said co-author Sergey Plis, director of machine learning at TReNDS and associate professor of computer science.

Plis said there are some cases where standard machine learning can outperform deep learning. For example, diagnostic algorithms that plug in single-number measurements such as a patient's body temperature or whether the patient smokes cigarettes would work better using classical machine learning approaches.

"If your application involves analyzing images or if it involves a large array of data that can't really be distilled into a simple measurement without losing information, deep learning can help," Plis said. "These models are made for really complex problems that require bringing in a lot of experience and intuition."

The downside of deep learning models is they are "data hungry" at the outset and must be trained on lots of information. But once these models are trained, said co-author Vince Calhoun, director of TReNDS and Distinguished University Professor of Psychology, they are just as effective at analyzing reams of complex data as they are at answering simple questions.

"Interestingly, in our study we looked at sample sizes from 100 to 10,000 and in all cases the deep learning approaches were doing better," he said.

Another advantage is that scientists can reverse analyze deep-learning models to understand how they are reaching conclusions about the data. As the published study shows, the trained deep learning models learn to identify meaningful brain biomarkers.

"These models are learning on their own, so we can uncover the defining characteristics that they're looking into that allows them to be accurate," Abrol said. "We can check the data points a model is analyzing and then compare it to the literature to see what the model has found outside of where we told it to look."

The researchers envision that deep learning models are capable of extracting explanations and representations not already known to the field, acting as an aid to growing our knowledge of how the human brain functions. They conclude that although more research is needed to find and address the weaknesses of deep-learning models, from a mathematical point of view it's clear these models outperform standard machine learning models in many settings.

"Deep learning's promise perhaps still outweighs its current usefulness to neuroimaging, but we are seeing a lot of real potential for these techniques," Plis said.


Predicting falls and injuries in people with multiple sclerosis using machine learning algorithms – DocWire News


Mult Scler Relat Disord. 2021 Jan 7;49:102740. doi: 10.1016/j.msard.2021.102740. Online ahead of print.

ABSTRACT

Falls in people with Multiple Sclerosis (PwMS) are a serious issue. They can lead to many problems, including injuries, loss of consciousness and hospitalization. A model that can predict the probability of these falls and the factors correlated with them can help caregivers and family members gain a clearer understanding of the risks of falling and proactively minimize them. We used historical data and machine learning algorithms to predict three outcomes: falling, sustaining injuries and injury types caused by falling in PwMS. The training dataset for this study includes 606 examples of monthly readings. The predictive attributes are the following: Expanded Disability Status Scale (EDSS), years passed since the diagnosis of MS, age of participants at the beginning of the experiment, participants' gender, type of MS and season (or month). Two types of algorithms, decision tree and gradient boosted trees (GBT), were used to train six models to predict these three outcomes. After the models were trained, their accuracy was evaluated using cross-validation. The models had a high accuracy, with some exceeding 90%. We did not limit model evaluation to one-number assessments and studied the confusion matrices of the models as well. The GBT models had a higher class recall and a smaller number of underestimations, which makes them more reliable. The methodology proposed in this study and its findings can help in developing better decision-support tools to assist PwMS.
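The abstract's point about not relying on one-number accuracy can be illustrated with a small sketch of the evaluation it describes: a confusion matrix and per-class recall. This is written from scratch for illustration, not taken from the study's code:

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    """Rows = actual label, columns = predicted label."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in labels] for t in labels]

def class_recall(y_true, y_pred, label):
    """Fraction of actual `label` cases the model caught."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == label)
    actual = sum(1 for t in y_true if t == label)
    return tp / actual if actual else 0.0
```

For a fall-prediction task, recall on the "fall" class is exactly the "underestimation" concern the authors raise: a model that misses real falls can look accurate overall while failing the patients who matter most.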

PMID:33450500 | DOI:10.1016/j.msard.2021.102740


Decentralized Autonomous Travel Solution Introduced by Fetch.ai, an AI and Machine Learning Network – Crowdfund Insider

The developers at Fetch.ai, an artificial intelligence (AI) and machine learning (ML) network, are introducing what they describe as "decentralized autonomous travel."

Fetch.ai aims to connect to more than 770,000 hotels with its Autonomous Travel system.

The Autonomous AI Travel Agents intend to reduce the role of centralized aggregators and services, thereby encouraging direct provider-to-consumer interaction. These efforts should lead to considerable cost savings of around 10% for both hotels and consumers.

The Autonomous AI Travel Agents framework, developed by Fetch.ai, is not meant to completely replace current systems. It is supposed to complement them. As explained by Fetch.ai in a blog post, the system operates safely, non-destructively, and in parallel to existing relationships that hotels might have. It aims to offer an alternative way by which bookings may be handled: one where the customer and hotel may deal directly with each other, and also one where a more personalized, better value experience can be delivered.

As mentioned in the update, building further upon the Mobility Framework, Fetch.ai is announcing tools and services to enable Autonomous agent-based travel solutions.

As noted in the announcement, Fetch.ai has developed an applications framework to allow hotel operators to launch Autonomous AI Travel Agents to market. They are also able to negotiate and trade their existing inventory via the Fetch.ai network, while getting payments in fiat currencies or cryptos, all powered by Fetch.ai's native FET token.

As stated in the update:

The promise of the Fetch.ai network is that a decentralized, multi-agent based system will be able to provide a new, personalized, privacy focused travel solution and change the way we view and work with the hotel and travel industry.

Before the COVID-19 outbreak, many hotels across the globe had teamed up with service providers such as Expedia because these providers offer a useful and intuitive platform to facilitate travel, and without the exposure they deliver, most hotels wouldn't be able to attract consumers, the announcement noted. This gives service providers such as Expedia considerable leverage over the hotels that have partnered with them, allowing them to charge high commissions.

As confirmed by Fetch.ai, with the onset of COVID, hotels are now facing a lot of pressure to stay afloat. The Fetch.ai system allows hotels to have their rooms marketed and booked without paying the standard 15–20% commission charged by hotel marketplace aggregators.

Fetch.ai confirmed that theyll be publishing the code base and software toolkits for the Autonomous AI Travel Agents next month (February 2021).

