Archive for the ‘Machine Learning’ Category

Are We Overly Infatuated With Deep Learning? – Forbes

Deep Learning

One of the factors often credited for the latest boom in artificial intelligence (AI) investment, research, and related cognitive technologies is the emergence of deep learning neural networks as an evolution of machine learning algorithms, along with the large volumes of big data and computing power that make deep learning a practical reality. While deep learning has been extremely popular and has shown real ability to solve many machine learning problems, it is just one approach to machine learning (ML) among many practical approaches, even if it has proven capable across a wide range of problem areas. Increasingly, we're starting to see news and research showing the limits of deep learning's capabilities, as well as some of the downsides to the deep learning approach. So is people's enthusiasm for AI tied to their enthusiasm for deep learning, and is deep learning really able to deliver on many of its promises?

The Origins of Deep Learning

AI researchers have struggled to understand how the brain learns since the very beginnings of the field of artificial intelligence. It comes as no surprise that, since the brain is primarily a collection of interconnected neurons, AI researchers sought to recreate the way the brain is structured through artificial neurons and connections of those neurons in artificial neural networks. Back in 1943, Warren McCulloch and Walter Pitts introduced the first threshold logic unit, an attempt to mimic the way biological neurons work. The McCulloch and Pitts model was just a proof of concept, but Frank Rosenblatt picked up on the idea in 1957 with the development of the Perceptron, which took the concept to its logical extent. While primitive by today's standards, the Perceptron was still capable of remarkable feats: it could recognize written numbers and letters, and even distinguish male from female faces. That was over 60 years ago!

Rosenblatt was so enthusiastic in 1958 about the Perceptron's promise that he remarked at the time that the perceptron is "the embryo of an electronic computer that [we expect] will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." Sound familiar? However, the enthusiasm didn't last. AI researcher Marvin Minsky noted how sensitive the perceptron was to small changes in images, and also how easily it could be fooled. Maybe the perceptron wasn't really that smart at all. Minsky and fellow AI researcher Seymour Papert basically took apart the whole perceptron idea in their 1969 book Perceptrons, making the claim that perceptrons, and neural networks like them, are fundamentally flawed in their inability to handle certain kinds of problems, notably nonlinear functions. That is to say, it was easy to train a neural network like a perceptron to put data into classifications, such as male/female or types of numbers. For these simple neural networks, you can graph a bunch of data, draw a line, and say things on one side of the line are in one category and things on the other side are in a different category, thereby classifying them. But there is a whole class of problems where you can't draw lines like this, such as speech recognition or many forms of decision-making. These involve functions that are not linearly separable, which Minsky and Papert proved single-layer perceptrons incapable of representing.
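To make the linear-separability point concrete, here is a minimal sketch (not from the original article; the training loop, data, and learning rate are illustrative choices) of a single perceptron in Python. It learns the linearly separable AND function, but no setting of its weights lets it solve XOR, the classic non-linearly-separable case.

```python
# A minimal perceptron sketch: a single thresholded unit can separate classes
# with a straight line (AND), but cannot solve a non-linearly-separable
# problem (XOR), no matter how long it trains.
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Classic perceptron learning rule on inputs X with 0/1 labels y."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            # Update the weights only when the prediction is wrong.
            w += lr * (yi - pred) * xi
            b += lr * (yi - pred)
    return w, b

def accuracy(w, b, X, y):
    preds = (X @ w + b > 0).astype(int)
    return (preds == y).mean()

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])   # linearly separable
y_xor = np.array([0, 1, 1, 0])   # not linearly separable

w, b = train_perceptron(X, y_and)
print("AND accuracy:", accuracy(w, b, X, y_and))   # reaches 1.0

w, b = train_perceptron(X, y_xor)
print("XOR accuracy:", accuracy(w, b, X, y_xor))   # stays at chance (0.5)
```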

During this period, while neural network approaches to ML receded to become an afterthought in AI, other approaches to ML were in the limelight, including knowledge graphs, decision trees, genetic algorithms, similarity models, and other methods. In fact, during this era IBM's purpose-built Deep Blue computer defeated Garry Kasparov in a chess match, the first computer to defeat a reigning world champion, using a brute-force alpha-beta search algorithm (so-called Good Old-Fashioned AI [GOFAI]) rather than new-fangled deep learning approaches. Yet even this approach to machine intelligence didn't go far, as some said that the system wasn't really intelligent at all.

Yet the neural network story doesn't end here. In 1986, AI researcher Geoff Hinton, along with David Rumelhart and Ronald Williams, published a research paper entitled "Learning representations by back-propagating errors." In this paper, Hinton and his co-authors detailed how you can use many hidden layers of neurons to get around the problems faced by perceptrons. With sufficient data and computing power, these layers can be trained to identify specific features in the data sets they classify, and taken together they can learn nonlinear functions, a capacity formalized in the universal approximation theorem. The approach works by backpropagating errors from higher layers of the network to lower ones (backprop), expediting training. Now, if you have enough layers, enough data to train those layers, and sufficient computing power to calculate all the interconnections, you can train a neural network to identify and classify almost anything. Researcher Yann LeCun developed LeNet-5 at AT&T Bell Labs in 1998, recognizing handwritten digits on checks using an iteration of this approach known as Convolutional Neural Networks (CNNs), and researchers such as Yoshua Bengio and Jürgen Schmidhuber further advanced the field.
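As a hedged illustration of the backprop idea (a toy network with invented sizes and learning rate, not the 1986 paper's setup), the sketch below adds one hidden layer of sigmoid units and pushes the output error back through it, which is enough to learn XOR, the function a lone perceptron cannot represent.

```python
# A toy numpy sketch of backpropagation: errors computed at the output layer
# are propagated back to a hidden layer, letting the network learn XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2 inputs -> 4 hidden units -> 1 output (sizes chosen arbitrarily)
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
lr = 1.0

for step in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the output error down to the hidden layer
    d_out = (out - y) * out * (1 - out)      # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)       # error pushed back to the hidden layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # typically approaches [0, 1, 1, 0]
```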

Yet, just as things go in AI, research stalled when these early neural networks couldn't scale. Surprisingly little development happened until 2006, when Hinton re-emerged onto the scene with the ideas of unsupervised pre-training and deep belief nets. The idea here is to have a simple two-layer network whose parameters are trained in an unsupervised way, and then stack new layers on top of it, training only that layer's parameters. Repeat for dozens, hundreds, even thousands of layers. Eventually you get a deep network with many layers that can learn and understand something complex. This is what deep learning is all about: using lots of layers of trained neural nets to learn just about anything, at least within certain constraints.
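A rough sketch of the greedy layer-wise idea, under stated assumptions: the binary data here are random placeholders, and scikit-learn's BernoulliRBM stands in for the simple two-layer building block. Each layer is trained unsupervised on the output of the layer below, then frozen before the next layer is stacked on top.

```python
# Illustrative greedy layer-wise pre-training: train one unsupervised layer,
# freeze it, feed its outputs to the next layer, and repeat.
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
data = (rng.random((500, 64)) > 0.5).astype(float)  # placeholder binary data

layer_sizes = [32, 16, 8]
layers, representation = [], data
for n_hidden in layer_sizes:
    rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05,
                       n_iter=10, random_state=0)
    rbm.fit(representation)                           # unsupervised training of this layer only
    representation = rbm.transform(representation)    # its output feeds the next layer
    layers.append(rbm)

print([r.components_.shape for r in layers])  # (32, 64), (16, 32), (8, 16)
```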

In 2009, Stanford researcher Fei-Fei Li and her team released ImageNet, a large database of millions of labeled images. The images were labeled with a hierarchy of classifications, from broad categories such as animal or vehicle down to very granular levels such as husky or trimaran. The ImageNet database was paired with an annual competition, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), to see which computer vision system had the lowest classification and recognition error. In 2012, Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton submitted their AlexNet entry, which had roughly half the error rate of the next-best entry. What made their approach win was that they moved from ordinary computers with CPUs to specialized graphics processing units (GPUs) that could train much larger models in reasonable amounts of time. They also introduced now-standard deep learning methods such as dropout to reduce a problem called overfitting (when the network is trained too tightly on the example data and can't generalize to broader data), and the rectified linear unit (ReLU) activation to speed training. After this competition success, it seems everyone took notice, and deep learning was off to the races.
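The sketch below is not a reproduction of AlexNet; it is only a minimal numpy illustration of the two ingredients named above, the ReLU activation and (inverted) dropout, with invented inputs.

```python
# Minimal illustrations of ReLU (a cheap, non-saturating activation) and
# dropout (randomly silencing units during training to curb overfitting).
import numpy as np

def relu(z):
    # max(0, z): gradients do not vanish for positive inputs, which speeds training
    return np.maximum(0.0, z)

def dropout(activations, drop_prob=0.5, training=True, seed=0):
    if not training or drop_prob == 0.0:
        return activations
    # "Inverted" dropout: zero out units at random and rescale the survivors
    rng = np.random.default_rng(seed)
    mask = rng.random(activations.shape) >= drop_prob
    return activations * mask / (1.0 - drop_prob)

z = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(z))                          # [0.  0.  0.  1.5 3. ]
print(dropout(relu(z), drop_prob=0.5))  # roughly half the units zeroed, rest rescaled
```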

Deep Learning's Shortcomings

The fuel that keeps the deep learning fires roaring is data and compute power. Specifically, large volumes of well-labeled data sets are needed to train deep learning networks. The more layers, the greater the learning power, but to train those layers you need data that is already well labeled. Since deep neural networks are primarily a huge collection of calculations that all have to be done at the same time, you need a lot of raw computing power, and specifically numerical computing power. Imagine you're tuning a million knobs at the same time to find the optimal combination that will make the system learn, based on millions of pieces of data being fed into the system. This is why neural networks were not practical in the 1950s but are today: we finally have lots of data and lots of computing power to handle that data.

Deep learning is being applied successfully in a wide range of situations, such as natural language processing, computer vision, machine translation, bioinformatics, gaming, and many other applications where classification, pattern matching, and this automatically tuned deep neural network approach work well. However, these advantages come with a number of disadvantages.

The most notable of these disadvantages is that since deep learning consists of many layers, each with many interconnected nodes, each configured with different weights and other parameters, there's no way to inspect a deep learning network and understand how any particular decision, clustering, or classification is actually made. It's a black box, which means deep learning networks are inherently unexplainable. As many have written on the topic of Explainable AI (XAI), systems that are used to make decisions of significance need explainability to satisfy issues of trust, compliance, verifiability, and understandability. While DARPA and others are working on ways to explain deep learning neural networks, the lack of explainability is a significant drawback for many.

The second disadvantage is that deep learning networks are really great at classification and clustering of information, but not very good at other decision-making or learning scenarios. Not every learning situation is one of classifying something into a category or grouping information into a cluster. Sometimes you have to deduce what to do based on what you've learned before. Deduction and reasoning are not a forte of deep learning networks.

As mentioned earlier, deep learning is also very data and resource hungry. One measure of a neural network's complexity is the number of parameters that need to be learned and tuned. For deep learning neural networks, there can be hundreds of millions of parameters. Training models requires a significant amount of data to adjust these parameters. For example, a speech recognition neural net often requires terabytes of clean, labeled data to train on. The lack of a sufficient, clean, labeled data set will hinder the development of a deep neural net for that problem domain. And even if you have the data, you need to crunch on it to generate the model, which takes a significant amount of time and processing power.
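A back-of-the-envelope sketch of why parameter counts balloon: each fully connected layer with n_in inputs and n_out outputs contributes n_in times n_out weights plus n_out biases. The layer widths below are invented purely for illustration.

```python
# Counting parameters in a stack of fully connected layers (hypothetical widths).
def count_dense_params(layer_widths):
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_widths[:-1], layer_widths[1:]))

small = [784, 128, 10]                   # a modest image classifier
large = [10000, 4096, 4096, 4096, 1000]  # a wider, deeper (made-up) network

print(f"small net: {count_dense_params(small):,} parameters")  # 101,770
print(f"large net: {count_dense_params(large):,} parameters")  # about 78.6 million
```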

Another challenge of deep learning is that the models produced are very specific to a problem domain. If a model is trained on a certain dataset of cats, then it will only recognize those cats and can't be used to generalize about animals or to identify non-cats. While this is not a problem unique to deep learning approaches to machine learning, it can be particularly troublesome when factoring in the overfitting problem mentioned above. Deep learning neural nets can be so tightly constrained (fitted) to the training data that even small perturbations in the images can lead to wildly inaccurate classifications. There are well-known examples of turtles being misclassified as rifles, or polar bears being misclassified as other animals, due to just small changes in the image data. Clearly, if you're using such a network in mission-critical situations, those mistakes would be significant.
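As a hedged sketch of how tiny perturbations can flip a model's output (the fast-gradient-sign idea, shown here on a toy logistic model with made-up weights rather than the turtle or polar bear examples):

```python
# Tiny perturbations crafted from the gradient with respect to the *input*
# can flip a prediction, even though each feature barely changes.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.5, -2.0, 0.5])   # toy model weights (invented)
x = np.array([0.2, -0.1, 0.4])   # toy input "pixels" (invented)
y = 1.0                          # true label

def predict(x):
    return sigmoid(w @ x)

# Gradient of the log-loss with respect to the input, not the weights.
grad_x = (predict(x) - y) * w

# Nudge each feature a small step in the direction that most increases the loss.
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

print("clean prediction    :", round(float(predict(x)), 3))      # about 0.67 -> class 1
print("perturbed prediction:", round(float(predict(x_adv)), 3))  # about 0.43 -> flips to class 0
```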

Machine Learning is not (just) Deep Learning

Enterprises looking at using cognitive technologies in their business need to look at the whole picture. Machine learning is not just one approach, but rather a collection of different methods that are applicable in different scenarios. Some machine learning algorithms are very simple, using small amounts of data and an understandable logic or deduction path that is well suited to particular situations, while others are very complex and use lots of data and processing power to handle more complicated situations. The key thing to realize is that deep learning isn't all of machine learning, let alone all of AI. Even Geoff Hinton, the "Einstein of deep learning," is starting to rethink core elements of deep learning and its limitations.

The key for organizations is to understand which machine learning methods are most viable for which problem areas, and how to plan, develop, deploy, and manage each machine learning approach in practice. Since enterprise AI use is still gaining adoption, especially for these more advanced cognitive approaches, best practices for employing cognitive technologies successfully are still maturing.


Can machine learning take over the role of investors? – TechHQ

As we dive deeper into the Fourth Industrial Revolution, there is no disputing how technology serves as a catalyst for growth and innovation for many businesses across a range of functions and industries.

But one technology that is steadily gaining prominence across organizations is machine learning (ML).

In the simplest terms, ML is the science of getting computers to learn and act the way humans do without being explicitly programmed. It is a form of artificial intelligence (AI) and entails feeding machines data, enabling a computer program to learn autonomously and improve its accuracy in analyzing data.

The proliferation of technology means AI is now commonplace in our daily lives, present in a panoply of things such as driverless vehicles, facial recognition devices, and customer service systems.

Currently, asset managers are exploring the potential that AI/ML systems can bring to the finance industry; close to 60 percent of managers predict that ML will have a medium-to-large impact across businesses.

ML's ability to analyze large data sets and continuously improve through trial and error translates into greater speed and better performance in data analysis for financial firms.

For instance, according to the Harvard Business Review, ML can spot potentially outperforming equities by identifying new patterns in existing data sets, for example by examining the collected responses of CEOs on the quarterly earnings calls of S&P 500 companies over the past 20 years.

From this, ML can then formulate a view of good and bad stocks, providing organizations with valuable insights to drive important business decisions. The same data also allows the system to assess the trustworthiness of forecasts from specific company leaders and to compare the performance of competitors in the industry.
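Purely as an illustration of the kind of pipeline described (every transcript snippet and label below is invented, and the model choice is arbitrary), turning earnings-call text into features and learning an "outperform" signal might look like this in scikit-learn:

```python
# Toy text-classification sketch: earnings-call snippets in, a learned
# outperform/underperform signal out. All data below are fabricated examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

calls = [
    "raised full year guidance on strong demand and margin expansion",
    "record revenue growth and accelerating subscriber additions",
    "missed expectations, withdrawing guidance amid weak demand",
    "restructuring charges and declining same-store sales this quarter",
]
outperformed = [1, 1, 0, 0]   # hypothetical labels: did the stock outperform later?

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(calls, outperformed)

print(model.predict(["strong demand drove revenue growth and raised guidance"]))
```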

Besides that, ML also has the capacity to analyze various forms of data, including sound and images. In the past, such formats were challenging for computers to analyze, but today's ML algorithms can process images faster, and in some cases better, than humans.

For example, analysts use GPS locations from mobile devices to map foot traffic at retail hubs, or refer to point-of-sale data to trace revenues during major holiday seasons. Data analysts can thus leverage these technological advancements to identify trends and new areas for investment.

It is evident that ML is full of potential, but it still has some big shoes to fill if it is to replace the role of an investor.

Nishant Kumar aptly explained this in Bloomberg: "Financial data is very noisy, markets are not stationary and powerful tools require deep understanding and talent that's hard to get." One quantitative analyst, or quant, estimates the failure rate in live tests at about 90 percent. Man AHL, a quant unit of Man Group, needed three years of work to gain enough confidence in a machine-learning strategy to devote client money to it. It later extended its use to four of its main money pools.

In other words, human talent and supervision are still essential to developing the right algorithm and in exercising sound investment judgment. After all, the purpose of a machine is to automate repetitive tasks. In this context, ML may seek out correlations of data without understanding their underlying rationale.

One ML expert said his team spends days evaluating whether patterns found by ML are sensible, predictive, consistent, and additive. Even if a pattern meets all four criteria, it may not bear much significance in supporting profitable investment decisions.

The bottom line is that ML can streamline data analysis, but it cannot replace human judgment. Thus, active equity managers should invest in ML systems to remain competitive in this "innovate or die" era. Financial firms that successfully recruit professionals with the right data skills and sharp investment judgment stand to be at the forefront of the digital economy.


Dr. Max Welling on Federated Learning and Bayesian Thinking – Synced

Introduced by Google in 2017, Federated Learning (FL) enables mobile phones to collaboratively learn a shared prediction model while keeping all the training data on the device, decoupling the ability to do machine learning from the need to store the data in the cloud. Two years have passed, and several new research papers have proposed novel systems to boost FL performance. This March, for example, a team of researchers from Google proposed a scalable production system for FL, designed to scale to increasing workloads through the addition of resources such as compute, storage, and bandwidth.

Earlier this month, NeurIPS 2019 in Vancouver hosted the workshop "Federated Learning for Data Privacy and Confidentiality," where academic researchers and industry practitioners discussed recent and innovative work in FL, open problems, and relevant approaches.

Professor Dr. Max Welling holds the research chair in Machine Learning at the University of Amsterdam and is a VP of Technologies at Qualcomm. Welling is known for his research in Bayesian inference, generative modeling, deep learning, variational autoencoders, and graph convolutional networks.

Below are excerpts from the workshop talk Dr. Welling gave, "Ingredients for Bayesian, Privacy Preserving, Distributed Learning," in which the professor shares his views on FL, the importance of distributed learning, and the Bayesian aspects of the domain.

The question can be separated into two parts. Why do we need distributed or federated inferencing? Maybe that is easier to answer. We need it because of reliability: if you're in a self-driving car, you clearly don't want to rely on a bad connection to the cloud in order to figure out whether you should brake. Latency: if you have your virtual reality glasses on and there is just a little bit of latency, you're not going to have a very good user experience. And then there's, of course, privacy: you don't want your data to leave your device. There is also compute, because it's close to where you are, and personalization: you want models to be suited to you.

It took a little bit more thinking to see why distributed learning is so important, especially within a company: how are you going to sell something like that? Privacy is the biggest factor here; there are many companies and factories that simply don't want their data to go off site, they don't want it to go to the cloud. And so you want to do your training in-house. But there's also bandwidth. Moving data around is actually very expensive, and there's a lot of it. So it's much better to keep the data where it is and move the computation to the data. And personalization plays a role here as well.

There are many challenges when you want to do this. The data could be extremely heterogeneous, so you could have a completely different distribution on one device than you have on another device. The data sizes could also be very different: one device could contain 10 times more data than another. And the compute could be heterogeneous: you could have small devices with a little bit of compute that you can only use now and then, or can't use at all because the battery is down, alongside bigger servers that you also want to include in your distribution of compute devices.

The bandwidth is limited, so you don't want to send huge amounts of data, or even huge numbers of parameters. Let's say we don't move data, but we move parameters; even then you don't want to move loads and loads of parameters over the channel, so you may want to quantize them, and here I believe Bayesian thinking is going to be very helpful. And again, the data needs to stay private, so you wouldn't want to send parameters that contain a lot of information about the data.

So first of all, of course, we're going to move model parameters, not data. We have data stored in various places, and we're going to move the algorithm to that data. Basically, you compute your learning update locally, perhaps privatize it, and then move it back to a central place where you're going to apply it. And of course, bandwidth is another challenge that you have to solve.
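A hedged sketch of this "move parameters, not data" loop, in the spirit of federated averaging: each simulated client takes a few gradient steps on its own never-shared data, and the server combines the resulting parameters. The linear-regression data, client sizes, and step counts are all invented.

```python
# Simulated federated round: clients update a shared weight vector locally,
# the server aggregates the updates weighted by client data size.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_client_data(n):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

clients = [make_client_data(n) for n in (200, 50, 120)]   # heterogeneous sizes
global_w = np.zeros(2)

for round_ in range(20):
    updates, sizes = [], []
    for X, y in clients:                      # the raw data never leaves the "device"
        w = global_w.copy()
        for _ in range(5):                    # a few local gradient steps
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.1 * grad
        updates.append(w)
        sizes.append(len(y))
    # server: weight each client's parameters by how much data it holds
    global_w = np.average(updates, axis=0, weights=sizes)

print(np.round(global_w, 3))                  # approaches [ 2. -1.]
```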

We have these heterogeneous data sources, and there is a lot of variability in the speed with which we can sync these updates. Here I think the Bayesian paradigm is going to come in handy because, for instance, if you have been running an update on a very large dataset, the posterior over your parameters shrinks to a very narrow peak, whereas on another device with much less data you might have a very wide posterior distribution for those parameters. Now, how do you combine them? You shouldn't simply average them; that's silly. You should do a proper posterior update, where the device with a small, peaked posterior gets a lot more weight than the one with a very wide posterior. Uncertainty estimates are also important in that respect.
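A small numerical sketch of why plain averaging is "silly" here, assuming Gaussian posteriors with invented means and variances: weighting each device by its precision (one over the variance) lets the sharply peaked posterior dominate the wide one.

```python
# Combining two Gaussian posteriors over the same parameter by precision
# weighting, compared with a naive average. All numbers are invented.
import numpy as np

# (mean, variance) of one parameter's posterior on two devices
big_data_device   = (0.90, 0.01)   # narrow posterior: lots of data
small_data_device = (0.10, 1.00)   # wide posterior: little data

def combine_gaussians(posteriors):
    precisions = np.array([1.0 / var for _, var in posteriors])
    means = np.array([mu for mu, _ in posteriors])
    combined_var = 1.0 / precisions.sum()
    combined_mean = combined_var * (precisions * means).sum()
    return combined_mean, combined_var

naive_mean = np.mean([mu for mu, _ in (big_data_device, small_data_device)])
bayes_mean, bayes_var = combine_gaussians([big_data_device, small_data_device])

print("naive average     :", naive_mean)               # 0.5 (ignores certainty)
print("precision-weighted:", round(bayes_mean, 3))      # about 0.892 (tracks the confident device)
```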

The other thing is that with a Bayesian update, if you have a very wide posterior distribution, then you know that parameter is not going to be very important for making predictions. So if you're going to send that parameter over a channel, you will want to quantize it to save bandwidth. The parameters that are very uncertain anyway can be quantized at a very coarse level, while the ones with a very peaked posterior need to be encoded very precisely, and so you need much higher resolution for those. There, too, the Bayesian paradigm is going to be helpful.
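The following toy sketch illustrates the bandwidth point; the bit-allocation rule is a made-up heuristic, not something from the talk. Parameters with peaked posteriors (small standard deviation) get a fine quantization grid, while very uncertain ones get a coarse grid.

```python
# Uncertainty-aware quantization sketch: more bits for confident parameters,
# fewer bits for uncertain ones. Values and the bit rule are illustrative only.
import numpy as np

params = np.array([0.8312, -1.2077, 0.0193, 2.4401])
posterior_std = np.array([0.01, 0.5, 1.5, 0.05])   # invented uncertainty per parameter

def bits_for(std, max_bits=8, min_bits=2):
    # heuristic: fewer bits for more uncertain parameters
    return int(np.clip(max_bits - np.log2(std / 0.01), min_bits, max_bits))

def quantize(value, n_bits, lo=-4.0, hi=4.0):
    levels = 2 ** n_bits
    step = (hi - lo) / (levels - 1)
    return lo + round((value - lo) / step) * step

for p, s in zip(params, posterior_std):
    b = bits_for(s)
    print(f"param {p:+.4f}  std {s:4.2f}  ->  {b} bits  ->  {quantize(p, b):+.4f}")
```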

In terms of privacy, there is this interesting result that if you have an uncertain parameter and you draw a sample from its posterior, then that single sample is more private than providing the whole distribution. There are results showing that you can get a certain level of differential privacy by just drawing a single sample from that posterior distribution. So effectively you're adding noise to your parameter, making it more private. Again, Bayesian thinking is synergistic with this sort of Bayesian federated learning scenario.
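A toy illustration of the single-sample idea (the posterior values are invented, and the actual differential-privacy guarantee depends on conditions not shown here): instead of sending the exact posterior mean, the device sends one draw from its posterior, which amounts to adding noise scaled by its own uncertainty.

```python
# Sending one posterior sample instead of the exact fitted value.
# This only sketches the intuition; no formal privacy accounting is done here.
import numpy as np

rng = np.random.default_rng(0)
posterior_mean, posterior_std = 0.73, 0.2   # invented posterior for one parameter

exact_update = posterior_mean                                 # reveals the fitted value directly
private_update = rng.normal(posterior_mean, posterior_std)    # a single noisy sample

print("exact update  :", exact_update)
print("sampled update:", round(private_update, 3))
```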

We can do MCMC (Markov chain Monte Carlo) and variational-based distributed learning. There are advantages to doing that, because it makes the updates more principled and lets you combine updates where one might be based on a lot more data than another.

Then we have the combination of private and Bayesian, privatizing the updates of a variational Bayesian model. Many people have worked on other intersections as well: we have deep learning models which have been privatized, and we have quantization, which is important if you want to send your parameters over a noisy channel. And it's nice because the more you quantize, the more private things become. You can compute the level of quantization from your Bayesian posterior, so all these things are very nicely tied together.

People have looked at the relation between quantized models and Bayesian models: how can you use Bayesian estimates to quantize better? People have looked at quantized versus deep: to make your deep neural network run faster on a mobile phone, you want to quantize it. People have looked at distributed versus deep, that is, distributed deep learning. So many of these intersections have actually been researched, but it hasn't all been put together. This is what I want to call for: we can try to put these things together, and at the core of all of this is Bayesian thinking, which we can use to execute better on this program.

Journalist: Fangyu Cai | Editor: Michael Sarazen



There's No Such Thing As The Machine Learning Platform – Forbes

In the past few years, you might have noticed the increasing pace at which vendors are rolling out platforms that serve the AI ecosystem, namely addressing data science and machine learning (ML) needs. The Data Science Platform and Machine Learning Platform are at the front lines of the battle for the mind share and wallets of data scientists, ML project managers, and others who manage AI projects and initiatives. If you're a major technology vendor and you don't have some sort of big play in the AI space, then you risk rapidly becoming irrelevant. But what exactly are these platforms, and why is there such an intense market share grab going on?

The core of this insight is the realization that ML and data science projects are nothing like typical application or hardware development projects. Whereas hardware and software development have traditionally focused on the functionality of systems or applications, data science and ML projects are really about managing data, continuously evolving what is learned from that data, and iterating on data models. Typical development processes and platforms simply don't work from a data-centric perspective.

It should be no surprise then that technology vendors of all sizes are focused on developing platforms that data scientists and ML project managers will depend on to develop, run, operate, and manage their ongoing data models for the enterprise. To these vendors, the ML platform of the future is like the operating system or cloud environment or mobile development platform of the past and present. If you can dominate market share for data science / ML platforms, you will reap rewards for decades to come. As a result, everyone with a dog in this fight is fighting to own a piece of this market.

However, what does a Machine Learning platform look like? How is it the same as, or different from, a Data Science platform? What are the core requirements for ML platforms, and how do they differ from those of more general data science platforms? Who are the users of these platforms, and what do they really want? Let's dive deeper.

What is the Data Science Platform?

Data scientists are tasked with wrangling useful information from a sea of data and translating business and operational informational needs into the language of data and math. Data scientists need to be masters of statistics, probability, mathematics, and algorithms that help to glean useful insights from huge piles of information. A data scientist forms a hypothesis about the data, runs tests and analyses, and then translates the results so that someone else in the organization can easily view and understand them. So it follows that a pure data science platform would meet the needs of helping craft data models, determining how well the data fit a hypothesis, testing that hypothesis, facilitating collaboration among teams of data scientists, and helping to manage and evolve the data model as information continues to change.

Furthermore, data scientists don't do their work in code-centric Integrated Development Environments (IDEs), but rather in notebooks. First popularized by academically oriented, math-centric platforms like Mathematica and MATLAB, and now prominent in the Python, R, and SAS communities, notebooks are used to document data research and simplify reproducibility of results by allowing the notebook to run on different source data. The best notebooks are shared, collaborative environments where groups of data scientists can work together and iterate models over constantly evolving data sets. While notebooks don't make great environments for developing code, they make great environments for collaborating on, exploring, and visualizing data. Indeed, the best notebooks are used by data scientists to quickly explore large data sets, assuming sufficient access to clean data.

However, data scientists can't perform their jobs effectively without access to large volumes of clean data. Extracting, cleaning, and moving data is not really the role of a data scientist, but rather that of a data engineer. Data engineers are challenged with taking data from a wide range of systems in structured and unstructured formats, data which is usually not clean, with missing fields, mismatched data types, and other data-related issues. In this way, the data engineer is the one who designs, builds, and arranges the data. Good data science platforms also enable data scientists to easily leverage compute power as their needs grow. Instead of copying data sets to a local computer to work on them, platforms allow data scientists to access compute power and data sets with minimal hassle. A data science platform is therefore challenged with providing these data engineering capabilities as well. As such, a practical data science platform will have elements of data science capabilities and the necessary data engineering functionality.

What is the Machine Learning Platform?

We just spent several paragraphs talking about data science platforms without once mentioning AI or ML. Of course, the overlap is the use of data science techniques and machine learning algorithms applied to large sets of data for the development of machine learning models. The tools that data scientists use on a daily basis overlap significantly with the tools used by ML-focused scientists and engineers. However, these tools aren't the same, because the needs of ML scientists and engineers are not the same as those of more general data scientists and engineers.

Rather than just focusing on notebooks and the ecosystem for managing and collaborating on those notebooks, those tasked with managing ML projects need access to a range of ML-specific algorithms, libraries, and infrastructure to train those algorithms over large and evolving datasets. An ideal ML platform helps ML engineers, data scientists, and engineers discover which machine learning approaches work best, tune hyperparameters, deploy compute-intensive ML training across on-premise or cloud-based CPU, GPU, and/or TPU clusters, and provides an ecosystem for managing and monitoring both unsupervised and supervised modes of training.

Clearly a collaborative, interactive, visual system for developing and managing ML models in a data science platform is necessary, but it's not sufficient for an ML platform. As hinted above, one of the more challenging parts of making ML systems work is setting and tuning hyperparameters. The whole concept of a machine learning model is that it requires various parameters to be learned from the data; what machine learning actually learns are the parameters of the model, so that new data can be fit to that learned model. Hyperparameters, by contrast, are configurable values that are set prior to training an ML model and can't be learned from the data. These hyperparameters control factors such as model complexity, speed of learning, and more. Different ML algorithms require different hyperparameters, and some don't need any at all. ML platforms help with the discovery, setting, and management of hyperparameters, among other things, including the algorithm selection and comparison that non-ML-specific data science platforms don't provide.
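To make the parameter/hyperparameter split concrete, here is a small, hedged example (the dataset and grid values are arbitrary illustrations, not a recommendation): the random forest learns its split thresholds from the data, while n_estimators and max_depth must be chosen beforehand, here via a simple grid search.

```python
# Parameters vs. hyperparameters: the forest's split rules are learned in fit(),
# while the grid below lists values we must pick before training starts.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

grid = {
    "n_estimators": [50, 200],   # hyperparameter: size of the ensemble
    "max_depth": [3, None],      # hyperparameter: tree complexity
}
search = GridSearchCV(RandomForestClassifier(random_state=0), grid, cv=5)
search.fit(X, y)                 # the model's parameters (the trees) are learned here

print("best hyperparameters    :", search.best_params_)
print("cross-validated accuracy:", round(search.best_score_, 3))
```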

The different needs of big data, ML engineering, model management, operationalization

At the end of the day, ML project managers simply want tools that make their jobs more efficient and effective. But not all ML projects are the same. Some are focused on conversational systems, while others are focused on recognition or predictive analytics. Yet others are focused on reinforcement learning or autonomous systems. Furthermore, these models can be deployed (or operationalized) in various ways: some might reside in the cloud or on on-premise servers, while others are deployed to edge devices or run in offline batch modes. These differences in ML application, deployment, and the needs of data scientists, engineers, and ML developers make the concept of a single ML platform not particularly feasible. It would be a jack of all trades and master of none.

As such, we see four different platforms emerging: one focused on the needs of data scientists and model builders, another focused on big data management and data engineering, a third focused on model scaffolding and building systems that interact with models, and a fourth focused on managing the model lifecycle (ML Ops). The winners will focus on building out capabilities for each of these parts.

The Four Environments of AI (Source: Cognilytica)

The winners in the data science platform race will be the ones that simplify ML model creation, training, and iteration. They will make it quick and easy for companies to move from unintelligent systems to ones that leverage the power of ML to solve problems that previously could not be addressed by machines. Data science platforms that don't enable ML capabilities will be relegated to non-ML data science tasks. Likewise, big data platforms that inherently enable data engineering capabilities will be winners. Similarly, application development tools will need to treat machine learning models as first-class participants in their lifecycle, just like any other technology asset. Finally, the space of ML operations (ML Ops) is only now emerging and will no doubt be big news in the next few years.

When a vendor tells you they have an AI or ML platform, the right response is to ask, "which one?" As you can see, there isn't just one ML platform, but rather different ones that serve very different needs. Make sure you don't get caught up in the marketing hype of some of these vendors: compare what they say they have with what they actually have.


Machine learning results: pay attention to what you don’t see – STAT

Even as machine learning and artificial intelligence are drawing substantial attention in health care, overzealousness for these technologies has created an environment in which other critical aspects of the research are often overlooked.

There's no question that the increasing availability of large data sources and off-the-shelf machine learning tools offers tremendous resources to researchers. Yet a lack of understanding about the limitations of both the data and the algorithms can lead to erroneous or unsupported conclusions.

Given that machine learning in the health domain can have a direct impact on people's lives, broad claims emerging from this kind of research should not be embraced without serious vetting. Whether conducting health care research or reading about it, make sure to consider what you don't see in the data and analyses.


One key question to ask is: Whose information is in the data and what do these data reflect?

Common forms of electronic health data, such as billing claims and clinical records, contain information only on individuals who have encounters with the health care system. But many individuals who are sick don't or can't see a doctor or other health care provider, and so are invisible in these databases. This may be true for individuals with lower incomes or those who live in rural communities facing rising hospital closures, a point University of Toronto machine learning professor Marzyeh Ghassemi made earlier this year.

Even among patients who do visit their doctors, health conditions are not consistently recorded. Health data also reflect structural racism, which has devastating consequences.

Data from randomized trials are not immune to these issues. As a ProPublica report demonstrated, black and Native American patients are drastically underrepresented in cancer clinical trials. This is important to underscore given that randomized trials are frequently highlighted as superior in discussions about machine learning work that leverages nonrandomized electronic health data.

In interpreting results from machine learning research, it's important to be aware that the patients in a study often do not represent the population we wish to draw conclusions about, and that the information collected is far from complete.

It has become commonplace to evaluate machine learning algorithms based on overall measures like accuracy or area under the curve. However, one evaluation metric cannot capture the complexity of performance. Be wary of research that claims to be ready for translation into clinical practice but only presents a leaderboard of tools ranked on a single metric.

As an extreme illustration, an algorithm designed to predict a rare condition found in only 1% of the population can be extremely accurate by labeling all individuals as not having the condition. This tool is 99% accurate, but completely useless. Yet, it may outperform other algorithms if accuracy is considered in isolation.
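A tiny numerical illustration of this point, with simulated labels: on a condition with roughly 1% prevalence, a "classifier" that labels everyone negative scores about 99% accuracy yet has zero recall, and a second metric exposes the failure immediately.

```python
# The accuracy paradox on a rare condition: high accuracy, zero usefulness.
import numpy as np
from sklearn.metrics import accuracy_score, recall_score, balanced_accuracy_score

rng = np.random.default_rng(0)
y_true = (rng.random(10_000) < 0.01).astype(int)   # about 1% have the condition
y_pred = np.zeros_like(y_true)                     # predict "no condition" for everyone

print("accuracy         :", accuracy_score(y_true, y_pred))            # about 0.99
print("recall           :", recall_score(y_true, y_pred))              # 0.0
print("balanced accuracy:", balanced_accuracy_score(y_true, y_pred))   # 0.5
```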

What's more, algorithms are frequently not evaluated across multiple hold-out samples, as in cross-validation. Using only a single hold-out sample, which is done in many published papers, often leads to higher variance and misleading performance estimates.
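A quick sketch of the single hold-out problem, using an arbitrary public dataset and model purely for illustration: the score from one random split swings from seed to seed, while the mean over ten folds is a steadier estimate.

```python
# Single hold-out split vs. k-fold cross-validation on the same model.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000)

single_split_scores = []
for seed in range(10):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=seed)
    single_split_scores.append(model.fit(X_tr, y_tr).score(X_te, y_te))

cv_scores = cross_val_score(model, X, y, cv=10)

print("single hold-out, across seeds:", np.round(single_split_scores, 3))
print(f"10-fold CV mean +/- sd       : {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")
```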

Beyond examining multiple overall metrics of performance for machine learning, we should also assess how tools perform in subgroups as a step toward avoiding bias and discrimination. For example, artificial intelligence-based facial recognition software performed poorly when analyzing darker-skinned women. Many measures of algorithmic fairness center on performance in subgroups.
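As a toy example of the subgroup check (all labels, predictions, and group assignments below are invented), the overall accuracy looks respectable while the same metric split by group reveals a large gap.

```python
# Overall accuracy can hide a severe failure in one subgroup.
import numpy as np
from sklearn.metrics import accuracy_score

y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1])
group  = np.array(["A"] * 9 + ["B"] * 3)

print("overall accuracy:", round(accuracy_score(y_true, y_pred), 2))   # about 0.83
for g in np.unique(group):
    mask = group == g
    print(f"group {g} accuracy:", round(accuracy_score(y_true[mask], y_pred[mask]), 2))
    # group A is near perfect; group B is badly served
```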

Bias in algorithms has largely not been a focus in health care research. That needs to change. A new study found substantial racial bias against black patients in a commercial algorithm used by many hospitals and other health care systems. Other work developed algorithms to improve fairness for subgroups in health care spending formulas.

Subjective decision-making pervades research. Who decides what the research question will be, which methods will be applied to answering it, and how the techniques will be assessed all matter. Diverse teams are needed, and not just because they yield better results. As Rediet Abebe, a junior fellow of Harvard's Society of Fellows, has written, "In both private enterprise and the public sector, research must be reflective of the society we're serving."

The influx of so-called digital data available through search engines and social media may be one resource for understanding the health of individuals who do not have encounters with the health care system. There have, however, been notable failures with these data. But there are also promising advances using online search queries at a scale where traditional approaches like conducting surveys would be infeasible.

Increasingly granular data are now becoming available thanks to wearable technologies such as Fitbit trackers and Apple Watches. Researchers are actively developing and applying techniques to summarize the information gleaned from these devices for prevention efforts.

Much of the published clinical machine learning research, however, focuses on predicting outcomes or discovering patterns. Although machine learning for causal questions in health and biomedicine is a rapidly growing area, we don't see a lot of this work yet because it is new. Recent examples include the comparative effectiveness of feeding interventions in a pediatric intensive care unit and the effectiveness of different types of drug-eluting coronary artery stents.

Understanding how the data were collected and using appropriate evaluation metrics will also be crucial for studies that incorporate novel data sources and those attempting to establish causality.

In our drive to improve health with (and without) machine learning, we must not forget to look for what is missing: What information do we not have about the underlying health care system? Why might an individual or a code be unobserved? What subgroups have not been prioritized? Who is on the research team?

Giving these questions a place at the table will be the only way to see the whole picture.

Sherri Rose, Ph.D., is associate professor of health care policy at Harvard Medical School and co-author of the first book on machine learning for causal inference, Targeted Learning (Springer, 2011).
