Archive for the ‘Machine Learning’ Category

High-performance, low-cost machine learning infrastructure is accelerating innovation in the cloud – MIT Technology Review

Artificial intelligence and machine learning (AI and ML) are key technologies that help organizations develop new ways to increase sales, reduce costs, streamline business processes, and understand their customers better. AWS helps customers accelerate their AI/ML adoption by delivering powerful compute, high-speed networking, and scalable high-performance storage options on demand for any machine learning project. This lowers the barrier to entry for organizations looking to adopt the cloud to scale their ML applications.

Developers and data scientists are pushing the boundaries of technology and increasingly adopting deep learning, a type of machine learning based on neural network algorithms. These deep learning models are larger and more sophisticated, which drives up the cost of the infrastructure needed to train and deploy them.

To enable customers to accelerate their AI/ML transformation, AWS is building high-performance, low-cost machine learning chips. AWS Inferentia is the first machine learning chip built from the ground up by AWS for the lowest-cost machine learning inference in the cloud. In fact, Amazon EC2 Inf1 instances powered by Inferentia deliver 2.3x higher performance and up to 70% lower cost for machine learning inference than current-generation GPU-based EC2 instances. AWS Trainium is the second machine learning chip from AWS; it is purpose-built for training deep learning models and will be available in late 2021.

Customers across industries have deployed their ML applications in production on Inferentia and seen significant performance improvements and cost savings. For example, Airbnb's customer support platform enables intelligent, scalable, and exceptional service experiences for its community of millions of hosts and guests across the globe. It used Inferentia-based EC2 Inf1 instances to deploy natural language processing (NLP) models that supported its chatbots. This led to a 2x improvement in performance out of the box over GPU-based instances.

With these innovations in silicon, AWS is enabling customers to train and execute their deep learning models in production easily with high performance and throughput at significantly lower costs.

Machine learning is an iterative process that requires teams to build, train, and deploy applications quickly, as well as train, retrain, and experiment frequently to increase the prediction accuracy of the models. When deploying trained models into their business applications, organizations need to also scale their applications to serve new users across the globe. They need to be able to serve multiple requests coming in at the same time with near real-time latency to ensure a superior user experience.

Emerging use cases such as object detection, natural language processing (NLP), image classification, conversational AI, and time series analysis rely on deep learning technology. Deep learning models are growing exponentially in size and complexity, going from millions of parameters to billions in a matter of a couple of years.

Training and deploying these complex and sophisticated models translates to significant infrastructure costs. Costs can quickly snowball to become prohibitively large as organizations scale their applications to deliver near real-time experiences to their users and customers.

This is where cloud-based machine learning infrastructure services can help. The cloud provides on-demand access to compute, high-performance networking, and large data storage, seamlessly combined with ML operations and higher level AI services, to enable organizations to get started immediately and scale their AI/ML initiatives.

AWS Inferentia and AWS Trainium aim to democratize machine learning and make it accessible to developers irrespective of experience and organization size. Inferentia's design is optimized for high performance, throughput, and low latency, which makes it ideal for deploying ML inference at scale.

Each AWS Inferentia chip contains four NeuronCores that implement a high-performance systolic array matrix multiply engine, which massively speeds up typical deep learning operations such as convolution and transformers. NeuronCores are also equipped with a large on-chip cache, which helps cut down on external memory accesses, reducing latency and increasing throughput.

AWS Neuron, the software development kit for Inferentia, natively supports leading ML frameworks like TensorFlow and PyTorch. Developers can continue using the same frameworks and lifecycle development tools they know and love. For many of their trained models, they can compile and deploy them on Inferentia by changing just a single line of code, with no additional application code changes.
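To make that workflow concrete, here is a minimal sketch of what the single-line change might look like when compiling a trained PyTorch model with the Neuron SDK's torch-neuron package. The ResNet-50 model, input shape, and file names are placeholders chosen for illustration, and the exact API can vary by SDK version.

```python
# Hypothetical sketch: compiling a trained PyTorch model for Inferentia with
# the AWS Neuron SDK (torch-neuron). Model, input shape, and file names are
# placeholders; consult the Neuron documentation for your SDK version.
import torch
import torch_neuron  # noqa: F401  (registers the torch.neuron namespace)
from torchvision import models

model = models.resnet50(pretrained=True).eval()
example = torch.rand(1, 3, 224, 224)  # dummy input used for tracing

# The "single line of code" change: trace/compile the model for NeuronCores.
model_neuron = torch.neuron.trace(model, example_inputs=[example])
model_neuron.save("resnet50_neuron.pt")

# The compiled artifact loads and runs like any TorchScript module on Inf1.
compiled = torch.jit.load("resnet50_neuron.pt")
with torch.no_grad():
    scores = compiled(example)
```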

The result is a high-performance inference deployment that can scale easily while keeping costs under control.

Sprinklr, a software-as-a-service company, has an AI-driven unified customer experience management platform that enables companies to gather and translate real-time customer feedback across multiple channels into actionable insights. This results in proactive issue resolution, enhanced product development, improved content marketing, and better customer service. Sprinklr used Inferentia to deploy its NLP and some of its computer vision models and saw significant performance improvements.

Several Amazon services also deploy their machine learning models on Inferentia.

Amazon Prime Video uses computer vision ML models to analyze video quality of live events to ensure an optimal viewer experience for Prime Video members. It deployed its image classification ML models on EC2 Inf1 instances and saw a 4x improvement in performance and up to a 40% savings in cost as compared to GPU-based instances.

Another example is Amazon Alexa's AI- and ML-based intelligence, powered by Amazon Web Services, which is available on more than 100 million devices today. Alexa's promise to customers is that it is always becoming smarter, more conversational, more proactive, and even more delightful. Delivering on that promise requires continuous improvements in response times and machine learning infrastructure costs. By deploying Alexa's text-to-speech ML models on Inf1 instances, the Alexa team was able to lower inference latency by 25% and cost-per-inference by 30%, enhancing the service experience for the tens of millions of customers who use Alexa each month.

As companies race to future-proof their businesses by offering the best digital products and services, no organization can afford to fall behind on deploying sophisticated machine learning models to help innovate their customer experiences. Over the past few years, there has been an enormous increase in the applicability of machine learning for a variety of use cases, from personalization and churn prediction to fraud detection and supply chain forecasting.

Luckily, machine learning infrastructure in the cloud is unleashing new capabilities that were previously not possible, making it far more accessible to non-expert practitioners. That's why AWS customers are already using Inferentia-powered Amazon EC2 Inf1 instances to provide the intelligence behind their recommendation engines and chatbots and to get actionable insights from customer feedback.

With AWS cloud-based machine learning infrastructure options suitable for various skill levels, it's clear that any organization can accelerate innovation and embrace the entire machine learning lifecycle at scale. As machine learning continues to become more pervasive, organizations are now able to fundamentally transform the customer experience, and the way they do business, with cost-effective, high-performance cloud-based machine learning infrastructure.

Learn more about how AWS's machine learning platform can help your company innovate here.

This content was produced by AWS. It was not written by MIT Technology Review's editorial staff.

Link:
High-performance, low-cost machine learning infrastructure is accelerating innovation in the cloud - MIT Technology Review

Researchers Present Global Effort to Develop Machine Learning Tools for Automated Assessment of Radiographic Damage in Rheumatoid Arthritis -…

NEW YORK, Nov. 6, 2021 /PRNewswire/ -- Crowdsourcing has become an increasingly popular way to develop machine learning algorithms to address many clinical problems in a variety of illnesses. Today at the American College of Rheumatology (ACR) annual meeting, a multicenter team led by an investigator from Hospital for Special Surgery (HSS) presented the results from the RA2-DREAM Challenge, a crowdsourced effort focused on developing better methods to quantify joint damage in people with rheumatoid arthritis (RA).

Damage in the joints of people with RA is currently measured by visual inspection and detailed scoring of radiographic images of small joints in the hands, wrists, and feet. This includes both joint space narrowing (which indicates cartilage loss) and bone erosions (which indicate damage from invasion of the inflamed joint lining). The scoring system requires specially trained experts and is time-consuming and expensive. Finding an automated way to measure joint damage is important both for clinical research and for patient care, according to the study's senior author, S. Louis Bridges, Jr., MD, PhD, physician-in-chief and chair of the Department of Medicine at HSS.

"If a machine-learning approach could provide a quick, accurate quantitative score estimating the degree of joint damage in hands and feet, it would greatly help clinical research," he said. "For example, researchers could analyze data from electronic health records and from genetic and other research assays to find biomarkers associated with progressive damage. Having to score all the images by visual inspection ourselves would be tedious, and outsourcing it is cost prohibitive."

"This approach could also aid rheumatologists by quickly assessing whether there is progression of damage over time, which would prompt a change in treatment to prevent further damage," he added. "This is really important in geographic areas where expert musculoskeletal radiologists are not available."

For the challenge, Dr. Bridges and his collaborators partnered with Sage Bionetworks, a nonprofit organization that helps investigators create DREAM (Dialogue on Reverse Engineering Assessment and Methods) Challenges. These competitions are focused on the development of innovative artificial intelligence-based tools in the life sciences. The investigators sent out a call for submissions, with grant money providing prizes for the winning teams. Competitors came from a variety of fields and included computer scientists, computational biologists, and physician-scientists; none were radiologists with expertise or training in reading radiographic images.

For the first part of the challenge, one set of images was provided to the teams, along with known scores that had been visually generated. These were used to train the algorithms. Additional sets of images were then provided so the competitors could test and refine the tools they had developed. In the final round, a third set of images was given without scores, and competitors estimated the amount of joint space narrowing and erosions. Submissions were judged according to which most closely replicated the gold-standard visually generated scores. There were 26 teams that submitted algorithms and 16 final submissions. In total, competitors were given 674 sets of images from 562 different RA patients, all of whom had participated in prior National Institutes of Health-funded research studies led by Dr. Bridges. In the end, four teams were named top performers.
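As a rough illustration of that judging step, a submission's estimates can be compared against the gold-standard visual scores with a simple error metric. The sketch below uses root-mean-square error and made-up column names; it is not the challenge's actual scoring pipeline, which may have weighted or transformed the scores differently.

```python
# Hypothetical sketch of scoring a submission against gold-standard visual
# scores with RMSE; column names and metric are illustrative assumptions.
import numpy as np
import pandas as pd

def rmse(pred: pd.Series, gold: pd.Series) -> float:
    """Root-mean-square error between predicted and expert scores."""
    return float(np.sqrt(np.mean((pred - gold) ** 2)))

gold = pd.read_csv("gold_standard_scores.csv")    # expert visual scores
sub = pd.read_csv("team_submission.csv")          # one team's estimates
merged = gold.merge(sub, on="patient_id", suffixes=("_gold", "_pred"))

print("joint narrowing RMSE:", rmse(merged["narrowing_pred"], merged["narrowing_gold"]))
print("erosion RMSE:", rmse(merged["erosion_pred"], merged["erosion_gold"]))
# Lower values indicate closer agreement with the experts; teams would be
# ranked on metrics of this kind.
```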

For the DREAM Challenge organizers, it was important that any scoring system developed through the project be freely available rather than proprietary, so that it could be used by investigators and clinicians at no cost. "Part of the appeal of this collaboration was that everything is in the public domain," Dr. Bridges said.

Dr. Bridges explained that additional research and development of computational methods are needed before the tools can be broadly used, but the current research demonstrates that this type of approach is feasible. "We still need to refine the algorithms, but we're much closer to our goal than we were before the Challenge," he concluded.

About HSS

HSS is the world's leading academic medical center focused on musculoskeletal health. At its core is Hospital for Special Surgery, nationally ranked No. 1 in orthopedics (for the 12th consecutive year), No. 4 in rheumatology by U.S. News & World Report (2021-2022), and the best pediatric orthopedic hospital in NY, NJ and CT by U.S. News & World Report "Best Children's Hospitals" list (2021-2022). HSS is ranked world #1 in orthopedics by Newsweek (2021-2022). Founded in 1863, the Hospital has the lowest complication and readmission rates in the nation for orthopedics, and among the lowest infection rates. HSS was the first in New York State to receive Magnet Recognition for Excellence in Nursing Service from the American Nurses Credentialing Center five consecutive times. The global standard total knee replacement was developed at HSS in 1969. An affiliate of Weill Cornell Medical College, HSS has a main campus in New York City and facilities in New Jersey, Connecticut and in the Long Island and Westchester County regions of New York State, as well as in Florida. In addition to patient care, HSS leads the field in research, innovation and education. The HSS Research Institute comprises 20 laboratories and 300 staff members focused on leading the advancement of musculoskeletal health through prevention of degeneration, tissue repair and tissue regeneration. The HSS Global Innovation Institute was formed in 2016 to realize the potential of new drugs, therapeutics and devices. The HSS Education Institute is a trusted leader in advancing musculoskeletal knowledge and research for physicians, nurses, allied health professionals, academic trainees, and consumers in more than 130 countries. The institution is collaborating with medical centers and other organizations to advance the quality and value of musculoskeletal care and to make world-class HSS care more widely accessible nationally and internationally. http://www.hss.edu.

SOURCE Hospital for Special Surgery

http://www.hss.edu

More:
Researchers Present Global Effort to Develop Machine Learning Tools for Automated Assessment of Radiographic Damage in Rheumatoid Arthritis -...

Machine learning can provide strong predictive accuracy for identifying adolescents that have experienced suicidal thoughts and behavior – EurekAlert

Image: Fig 7. The top 10 most important questions for males vs. females.

Credit: Weller et al., 2021, PLOS ONE, CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)

Researchers have developed a new, machine learning-based algorithm that shows high accuracy in identifying adolescents who are experiencing suicidal thoughts and behavior. Orion Weller of Johns Hopkins University in Baltimore, Maryland, and colleagues present these findings in the open-access journal PLOS ONE on November 3rd, 2021.

Decades of research have identified specific risk factors associated with suicidal thoughts and behavior among adolescents, helping to inform suicide prevention efforts. However, few studies have explored these risk factors in combination with each other, especially in large groups of adolescents. Now, the field of machine learning has opened up new opportunities for such research, which could ultimately improve prevention efforts.

To explore that opportunity, Weller and colleagues applied machine-learning analysis to data from a survey of high school students in Utah that is routinely conducted to monitor issues such as drug abuse and mental health. The data included responses to more than 300 questions each from more than 179,000 high school students who took the survey between 2011 and 2017, as well as demographic data from the U.S. census.

The researchers found that they could use the survey data to predict with 91 percent accuracy which individual adolescents' answers indicated suicidal thoughts or behavior. In doing so, they were able to identify which survey questions had the most predictive power; these included questions about digital media harassment or threats, at-school bullying, serious arguments at home, gender, alcohol use, feelings of safety at school, age, and attitudes about marijuana.
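For readers curious what this kind of analysis looks like in practice, the sketch below trains a standard classifier on survey responses and ranks the questions the model leans on most. The file name, column names, and choice of gradient boosting are assumptions for illustration only; they are not the authors' actual pipeline, which also used modern interpretability methods.

```python
# Hypothetical sketch: predict a self-reported outcome from survey answers
# and rank the most predictive questions. File, columns, and model choice
# are illustrative assumptions, not the study's actual code.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

survey = pd.read_csv("survey_responses.csv")       # ~300 question columns
X = survey.drop(columns=["suicidal_ideation"])     # predictor questions
y = survey["suicidal_ideation"]                    # 0/1 outcome label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Rank survey questions by how much the fitted model relies on them.
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))
```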

The new algorithm's accuracy is higher than that of previously developed predictive approaches, suggesting that machine learning could indeed improve understanding of adolescent suicidal thoughts and behavior, and could thereby help inform and refine preventive programs and policies.

Future research could expand the new findings by using data from other states, as well as data on actual suicide rates.

The authors add: "Our paper examines machine learning approaches applied to a large dataset of adolescent questionnaires, in order to predict suicidal thoughts and behaviors from their answers. We find strong predictive accuracy in identifying those at risk and analyze our model with recent advances in ML interpretability. We found that factors that strongly influence the model include bullying and harassment, as expected, but also aspects of their family life, such as being in a family with yelling and/or serious arguments. We hope that this study can provide insight to inform early prevention efforts."

Predicting suicidal thoughts and behavior among adolescents using the risk and protective factor framework: A large-scale machine learning approach

3-Nov-2021

The authors have declared that no competing interests exist.

Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.

Read the rest here:
Machine learning can provide strong predictive accuracy for identifying adolescents that have experienced suicidal thoughts and behavior - EurekAlert

Machine Learning Approach Takes MSK Researchers Beyond Known Method to Predict Immunotherapy Response – On Cancer – Memorial Sloan Kettering

How can oncologists better predict who will benefit from a widely used class of immunotherapy drugs called checkpoint inhibitors?

In the precision medicine era of cancer care, it's a question that has only increased in relevance. To answer it, Luc Morris, a physician-scientist and research laboratory head, together with several colleagues at Memorial Sloan Kettering Cancer Center, is looking beyond a known method to predict immunotherapy response.

Tumor mutational burden, or TMB, refers to the number of mutations a tumor has. High TMB means there are a lot of mutations; low TMB means there are not many. In the past five years, it has been well established that tumors with high TMB tend to respond better to checkpoint inhibitor therapy than tumors with low TMB. Because checkpoint inhibitors only work in a fraction of people with cancer, the ability to predict response, as TMB does, is crucial. While TMB can be used to guide treatment decisions for certain patients with cancer (the checkpoint inhibitor pembrolizumab, or Keytruda, is FDA approved for all tumors with high TMB, for example), it remains a crude predictor by itself, according to Dr. Morris.

"We know that TMB provides some value in predicting immunotherapy response, but we also know that it is not a perfect predictor. It has limited value in isolation," says Dr. Morris, a senior author on the study, which was published November 1, 2021, in Nature Biotechnology.

"Oncologists will consider many factors when deciding on the best treatment for a patient with cancer; TMB is only one," he says. "For example, a melanoma tumor with low TMB may still have a very good chance of responding, just as a breast tumor with high TMB might have a lower chance of responding. We recognize that we need more predictive tools besides just TMB."

The study's co-first authors were Diego Chowell and Steve Yoo, research fellows in the lab of Timothy Chan at MSK, and Cristina Valero, a research fellow in the Morris Lab at MSK. Diego Chowell is currently an assistant professor in the Icahn School of Medicine at Mount Sinai. Nils Weinhold, an MSK cancer researcher and computational biologist, led the study as a co-senior author together with Dr. Chan and Dr. Morris. (Dr. Chan, whose lab first reported the importance of TMB in cancer immunotherapy in 2014, moved to the Cleveland Clinic in 2020.)

TMB's limited value in isolation was one of the reasons why Dr. Morris and fellow investigators wanted to go beyond the biomarker in their latest analysis, he says. Another reason Dr. Morris undertook this research was to learn more about a blood marker called the neutrophil-to-lymphocyte ratio (NLR). Recent MSK research showed that NLR, especially when combined with TMB and other information such as patient blood markers, could improve the ability to predict tumor immunotherapy response.

"That opened the door for us to say: Why don't we just gather all of the variables that either have been shown to have predictive value, or that we think might possibly have predictive value, and put them into a machine learning algorithm and see how well we can predict outcomes with a larger pool of information?" Dr. Morris says.

The team used a model that integrated 16 genomic, molecular, demographic, and clinical features, including TMB and NLR. By taking a machine learning approach, the investigators would be able to determine which combination of variables had the highest predictive power.
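In rough outline, the approach amounts to training a classifier on all the candidate features and checking whether it discriminates responders from non-responders better than TMB alone. The sketch below is an illustrative version only: the data file, the handful of feature names, and the random-forest model are assumptions, not the study's actual 16-feature model.

```python
# Hypothetical sketch: compare a multi-feature response model against TMB
# alone using cross-validated AUC. Features, data, and model choice are
# illustrative assumptions, not the published 16-feature model.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

cohort = pd.read_csv("immunotherapy_cohort.csv")   # hypothetical cohort data
features = ["tmb", "nlr", "albumin", "prior_chemo", "age", "cancer_type_code"]
X, y = cohort[features], cohort["responder"]       # responder: 1 = responded

rf = RandomForestClassifier(n_estimators=500, random_state=0)
prob_multi = cross_val_predict(rf, X, y, cv=5, method="predict_proba")[:, 1]

print("multi-feature AUC:", roc_auc_score(y, prob_multi))
print("TMB-alone AUC:", roc_auc_score(y, cohort["tmb"]))  # single biomarker
```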

"Using this large set of clinical and genomic data from patients treated at MSK, we trained a machine learning model that incorporated a number of different pieces of data," Dr. Morris explains.

The investigators analyzed the variables in a group of 1,479 patients who were treated with immunotherapy: PD-1/PD-L1 inhibitor immunotherapy, CTLA-4 inhibitor immunotherapy, or a combination of both. Most patients (1,070) did not respond. The group included patients with 16 different types of cancer, of which non-small cell lung cancer and melanoma were the most prevalent. Investigators analyzed patients' tumors using MSK-IMPACT™, a powerful tool that provides detailed information about a tumor's mutations.

"MSK-IMPACT is an incredible resource for us, both as oncologists treating patients and as scientists trying to understand cancer," says Dr. Morris. "For this study, we had a wealth of genomic data for these patients who were treated at MSK, to integrate with clinical data and blood test data."

The results reaffirmed TMB's relevance as a predictor of immunotherapy response; when the variables were studied individually, TMB was associated with the greatest effect of the 16 individual factors.

The next strongest predictors of response to immunotherapy were prior receipt of chemotherapy, albumin levels in the blood, and NLR.

Although each of these four measures could predict immunotherapy response, MSK researchers found that the 16-feature model predicted response more accurately than any one of the individual factors studied alone. What's more, the 16-feature model was also better able to forecast survival differences between patients who responded to immune checkpoint blockade and those who did not, further supporting the 16-feature approach over one involving fewer features. Cumulatively, the findings indicate that clinicians can do better than TMB alone by including other available pieces of information about the patient or the tumor genetics, Dr. Morris says.

Importantly, the model also takes into account TMB's varying degrees of predictive value across cancer types, Dr. Morris adds.

"Although the predictive value of TMB varies quite a bit across different cancer types, the [16-feature] model had good predictive ability across all cancer types," he says. "This is important because TMB is less predictive for some malignancies than for others, and for some types of cancer, it has no value at all. For example, the predictive value of elevated TMB is well established in melanoma and non-small cell lung cancer. In breast and prostate cancers, though, TMB has not been found to accurately predict immunotherapy response."

Broad use is part of Dr. Morris and his colleagues' aim: "This is a very good predictive biomarker based on genetic data from tumor sequencing, but our next research goal will be to try to determine how much value we can glean from a simpler model that maybe could be more widely implemented around the world."

Continue reading here:
Machine Learning Approach Takes MSK Researchers Beyond Known Method to Predict Immunotherapy Response - On Cancer - Memorial Sloan Kettering

Psychologists use machine learning algorithm to pinpoint top predictors of cheating in a relationship – PsyPost

According to a study published in the Journal of Sex Research, relationship characteristics like relationship satisfaction, relationship length, and romantic love are among the top predictors of cheating within a relationship. The researchers used a machine learning algorithm to pinpoint the top predictors of infidelity among over 95 different variables.

While a host of studies have investigated predictors of infidelity, the research has largely revealed mixed and often contradictory findings. Study authors Laura M. Vowels and her colleagues aimed to address these inconsistencies by using machine learning models, an approach that allowed them to compare the relative predictive power of various relationship factors within the same analysis.

"The research topic was actually suggested by my co-author, Dr. Kristen Mark, who was interested in understanding predictors of infidelity better. She has previously published several articles on infidelity and is interested in the topic," explained Vowels, a principal researcher for Blueheart.io and postdoctoral researcher at the University of Lausanne.

Vowels and her team pooled data from two different studies. The first data set came from a study of 891 adults, the majority of whom were married or cohabitating with a partner (63%). Around 54% of the sample identified as straight, 21% identified as bisexual, 11% identified as gay, and 7% identified as lesbian. A second data set was collected from both members of 202 mixed-sex couples who had been together for an average of 9 years, the majority of whom were straight (93%).

Data from the two studies included many of the same variables, such as demographic measures like age, race, sexual orientation, and education, in addition to assessments of participants' sexual behavior, sexual satisfaction, relationship satisfaction, and attachment styles. Both studies also included a measure of in-person infidelity (having interacted sexually with someone other than one's current partner) and online infidelity (having interacted sexually with someone other than one's current partner on the internet).

Using machine learning techniques, the researchers analyzed the data sets together first for all respondents and then separately for men and women. They then identified the top ten predictors for in-person cheating and for online cheating. Across both samples and among both men and women, higher relationship satisfaction predicted a lower likelihood of in-person cheating. By contrast, higher desire for solo sexual activity, higher desire for sex with ones partner, and being in a longer relationship predicted a higher likelihood of in-person cheating. In the second data set only, greater sexual satisfaction and romantic love predicted a lower likelihood of in-person infidelity.
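As an illustration of this kind of analysis, the sketch below fits a tree-based model and ranks predictors by permutation importance, separately for men and women. The column names, file, and model are hypothetical stand-ins, not the authors' code, which the paper describes as an explainable machine learning approach.

```python
# Hypothetical sketch: rank predictors of reported infidelity with a
# tree-based model and permutation importance, separately by gender.
# Columns, file, and model choice are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = pd.read_csv("pooled_relationship_data.csv")   # ~95 candidate predictors

def top_predictors(df: pd.DataFrame, outcome: str, k: int = 10) -> pd.Series:
    X = df.drop(columns=[outcome, "gender"])
    y = df[outcome]                                   # 1 = reported infidelity
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    imp = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
    return pd.Series(imp.importances_mean, index=X.columns).nlargest(k)

for gender, group in data.groupby("gender"):
    print(gender, "- top predictors of in-person infidelity:")
    print(top_predictors(group, outcome="inperson_infidelity"))
```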

When it came to online cheating, greater sexual desire and being in a longer relationship predicted a higher likelihood of cheating. Never having had anal sex with one's current partner decreased the likelihood of cheating online, a finding the authors say likely reflects more conservative attitudes toward sexuality. In the second data set only, higher relationship and sexual satisfaction also predicted a lower likelihood of cheating.

"Overall, I would say that there isn't one specific thing that would predict infidelity. However, relationship-related variables were more predictive of infidelity compared to individual variables like personality. Therefore, preventing infidelity might be more successful by maintaining a good and healthy relationship rather than thinking about specific characteristics of the person," Vowels told PsyPost.

Consistent with previous studies, relationship characteristics like romantic love and sexual satisfaction surfaced as top predictors of infidelity across both samples. The researchers say this suggests that the strongest predictors of cheating are often found within the relationship, noting that "addressing relationship issues may buffer against the likelihood of one partner going out of the relationship to seek fulfillment."

"These results suggest that intervening in relationships when difficulties first arise may be the best way to prevent future infidelity. Furthermore, because sexual desire was one of the most robust predictors of infidelity, discussing sexual needs and desires and finding ways to meet those needs in relationships may also decrease the risk of infidelity," the authors report.

The researchers emphasize that their analysis involved predicting past experiences of infidelity from an array of present-day assessments. They say that this design may have affected their findings, since couples who had previously dealt with cheating within the relationship may have worked through it by the time they completed the survey.

"The study was exploratory in nature and didn't include all the potential predictors," Vowels explained. "It also predicted infidelity in the past rather than current or future infidelity, so there are certain elements like relationship satisfaction that might have changed since the infidelity occurred. I think in the future it would be useful to look into other variables and also look at recent infidelity, because that would make the measure of infidelity more reliable."

The study, "Is Infidelity Predictable? Using Explainable Machine Learning to Identify the Most Important Predictors of Infidelity", was authored by Laura M. Vowels, Matthew J. Vowels, and Kristen P. Mark.

Link:
Psychologists use machine learning algorithm to pinpoint top predictors of cheating in a relationship - PsyPost