Archive for February, 2020

What is machine learning? – Brookings

In the summer of 1955, while planning a now famous workshop at Dartmouth College, John McCarthy coined the term artificial intelligence to describe a new field of computer science. Rather than writing programs that tell a computer how to carry out a specific task, McCarthy pledged that he and his colleagues would instead pursue algorithms that could teach themselves how to do so. The goal was to create computers that could observe the world and then make decisions based on those observations: to demonstrate, that is, an innate intelligence.

The question was how to achieve that goal. Early efforts focused primarily on what's known as symbolic AI, which tried to teach computers how to reason abstractly. But today the dominant approach by far is machine learning, which relies on statistics instead. Although the approach dates back to the 1950s (one of the attendees at Dartmouth, Arthur Samuel, was the first to describe his work as machine learning), it wasn't until the past few decades that computers had enough storage and processing power for the approach to work well. The rise of cloud computing and customized chips has powered breakthrough after breakthrough, with research centers like OpenAI and DeepMind announcing stunning new advances seemingly every week.

The extraordinary success of machine learning has made it the default method of choice for AI researchers and experts. Indeed, machine learning is now so popular that it has effectively become synonymous with artificial intelligence itself. As a result, it's not possible to tease out the implications of AI without understanding how machine learning works, as well as how it doesn't.

The core insight of machine learning is that much of what we recognize as intelligence hinges on probability rather than reason or logic. If you think about it long enough, this makes sense. When we look at a picture of someone, our brains unconsciously estimate how likely it is that we have seen their face before. When we drive to the store, we estimate which route is most likely to get us there the fastest. When we play a board game, we estimate which move is most likely to lead to victory. Recognizing someone, planning a trip, plotting a strategy: each of these tasks demonstrates intelligence. But rather than hinging primarily on our ability to reason abstractly or think grand thoughts, they depend first and foremost on our ability to accurately assess how likely something is. We just don't always realize that that's what we're doing.

Back in the 1950s, though, McCarthy and his colleagues did realize it. And they understood something else too: Computers should be very good at computing probabilities. Transistors had only just been invented, and had yet to fully supplant vacuum tube technology. But it was clear even then that with enough data, digital computers would be ideal for estimating a given probability. Unfortunately for the first AI researchers, their timing was a bit off. But their intuition was spot on, and much of what we now know as AI is owed to it. When Facebook recognizes your face in a photo, or Amazon Echo understands your question, they're relying on an insight that is over sixty years old.

The machine learning algorithm that Facebook, Google, and others all use is something called a deep neural network. Building on the prior work of Warren McCulloch and Walter Pitts, Frank Rosenblatt coded one of the first working neural networks in the late 1950s. Although today's neural networks are a bit more complex, the main idea is still the same: The best way to estimate a given probability is to break the problem down into discrete, bite-sized chunks of information, or what McCulloch and Pitts termed a neuron. Their hunch was that if you linked a bunch of neurons together in the right way, loosely akin to how neurons are linked in the brain, then you should be able to build models that can learn a variety of tasks.

To get a feel for how neural networks work, imagine you wanted to build an algorithm to detect whether an image contained a human face. A basic deep neural network would have several layers of thousands of neurons each. In the first layer, each neuron might learn to look for one basic shape, like a curve or a line. In the second layer, each neuron would look at the first layer, and learn to see whether the lines and curves it detects ever make up more advanced shapes, like a corner or a circle. In the third layer, neurons would look for even more advanced patterns, like a dark circle inside a white circle, as happens in the human eye. In the final layer, each neuron would learn to look for still more advanced shapes, such as two eyes and a nose. Based on what the neurons in the final layer say, the algorithm will then estimate how likely it is that an image contains a face.

The magic of deep learning is that the algorithm learns to do all this on its own. The only thing a researcher does is feed the algorithm a bunch of images and specify a few key parameters, like how many layers to use and how many neurons should be in each layer, and the algorithm does the rest. At each pass through the data, the algorithm makes an educated guess about what type of information each neuron should look for, and then updates each guess based on how well it works. As the algorithm does this over and over, eventually it learns what information to look for, and in what order, to best estimate, say, how likely an image is to contain a face.
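
To make that setup concrete, here is a minimal sketch of how a researcher might specify those few key parameters, the number of layers and the neurons in each, using the Keras API. The layer sizes, image dimensions, and training data below are placeholder assumptions for illustration, not details from the article.

```python
# A minimal sketch of a small deep neural network for face / no-face
# classification. Layer counts, sizes, and data are illustrative only.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(64, 64)),   # 64x64 grayscale images (assumed)
    keras.layers.Dense(512, activation="relu"),   # first hidden layer (simpler patterns)
    keras.layers.Dense(256, activation="relu"),   # second hidden layer
    keras.layers.Dense(128, activation="relu"),   # third hidden layer (more complex patterns)
    keras.layers.Dense(1, activation="sigmoid"),  # output: estimated probability of a face
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder data standing in for a labeled image set.
images = np.random.rand(100, 64, 64)
labels = np.random.randint(0, 2, size=100)

model.fit(images, labels, epochs=5)         # repeated passes over the data
print(model.predict(images[:1]))            # estimated probability the first image is a face
```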

What's remarkable about deep learning is just how flexible it is. Although there are other prominent machine learning algorithms too, albeit with clunkier names like gradient boosting machines, none are nearly so effective across nearly so many domains. With enough data, deep neural networks will almost always do the best job at estimating how likely something is. As a result, they're often the best at mimicking intelligence, too.

Yet as with machine learning more generally, deep neural networks are not without limitations. To build their models, machine learning algorithms rely entirely on training data, which means both that they will reproduce the biases in that data, and that they will struggle with cases that are not found in that data. Further, machine learning algorithms can also be gamed. If an algorithm is reverse engineered, it can be deliberately tricked into thinking that, say, a stop sign is actually a person. Some of these limitations may be resolved with better data and algorithms, but others may be endemic to statistical modeling.

To glimpse how the strengths and weaknesses of AI will play out in the real world, it is necessary to describe the current state of the art across a variety of intelligent tasks. Below, I look at the situation in regard to speech recognition, image recognition, robotics, and reasoning in general.

Ever since digital computers were invented, linguists and computer scientists have sought to use them to recognize speech and text. Known as natural language processing, or NLP, the field once focused on hardwiring syntax and grammar into code. However, over the past several decades, machine learning has largely surpassed rule-based systems, thanks to everything from support vector machines to hidden Markov models to, most recently, deep learning. Apple's Siri, Amazon's Alexa, and Google's Duplex all rely heavily on deep learning to recognize speech or text, and represent the cutting edge of the field.

The specific deep learning algorithms at play have varied somewhat. Recurrent neural networks powered many of the initial deep learning breakthroughs, while hierarchical attention networks are responsible for more recent ones. What they all have in common, though, is that the higher levels of a deep learning network effectively learn grammar and syntax on their own. In fact, when several leading researchers recently set a deep learning algorithm loose on Amazon reviews, they were surprised to learn that the algorithm had not only taught itself grammar and syntax, but a sentiment classifier too.

Yet for all the success of deep learning at speech recognition, key limitations remain. The most important is that because deep neural networks only ever build probabilistic models, they don't understand language in the way humans do; they can recognize that the sequences of letters k-i-n-g and q-u-e-e-n are statistically related, but they have no innate understanding of what either word means, much less the broader concepts of royalty and gender. As a result, there is likely to be a ceiling to how intelligent speech recognition systems based on deep learning and other probabilistic models can ever be. If we ever build an AI like the one in the movie Her, which is capable of genuine human relationships, it will almost certainly take a breakthrough well beyond what a deep neural network can deliver.
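
To make "statistically related" concrete, here is a toy sketch. The vectors below are invented placeholder values rather than the output of any real model, but they show the kind of numeric relationship such systems learn for words like king and queen, with no notion of what the words actually mean.

```python
# Toy word vectors (hypothetical values, not from a trained model).
import numpy as np

embeddings = {
    "king":  np.array([0.8, 0.3, 0.9]),
    "queen": np.array([0.7, 0.9, 0.8]),
    "apple": np.array([0.9, -0.4, 0.0]),
}

def cosine_similarity(a, b):
    """How statistically 'close' two word vectors are (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # relatively high
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # noticeably lower
```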

When Rosenblatt first implemented his neural network in 1958, he initially set it loose on images of dogs and cats. AI researchers have been focused on tackling image recognition ever since. By necessity, much of that time was spent devising algorithms that could detect pre-specified shapes in an image, like edges and polyhedrons, using the limited processing power of early computers. Thanks to modern hardware, however, the field of computer vision is now dominated by deep learning instead. When a Tesla drives safely in autopilot mode, or when Google's new augmented-reality microscope detects cancer in real time, it's because of a deep learning algorithm.

Convolutional neural networks, or CNNs, are the variant of deep learning most responsible for recent advances in computer vision. Developed by Yann LeCun and others, CNNs don't try to understand an entire image all at once, but instead scan it in localized regions, much the way a visual cortex does. LeCun's early CNNs were used to recognize handwritten numbers, but today the most advanced CNNs, such as capsule networks, can recognize complex three-dimensional objects from multiple angles, even those not represented in training data. Meanwhile, generative adversarial networks, the algorithm behind deepfake videos, typically use CNNs not to recognize specific objects in an image, but instead to generate them.
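
A minimal sketch of that "scan it in localized regions" idea, using a single hand-written filter rather than anything from LeCun's actual networks; the image and kernel values are placeholders.

```python
# One convolutional filter sliding over local patches of an image: the core
# operation inside a CNN. Toy values only.
import numpy as np

image = np.random.rand(8, 8)                 # stand-in for a grayscale image
kernel = np.array([[-1.0, 0.0, 1.0],         # a simple vertical-edge detector
                   [-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0]])

feature_map = np.zeros((6, 6))
for i in range(6):
    for j in range(6):
        patch = image[i:i+3, j:j+3]          # look at one 3x3 local region
        feature_map[i, j] = np.sum(patch * kernel)

print(feature_map.shape)                     # each output cell summarizes one region
```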

As with speech recognition, cutting-edge image recognition algorithms are not without drawbacks. Most importantly, just as all that NLP algorithms learn are statistical relationships between words, all that computer vision algorithms learn are statistical relationships between pixels. As a result, they can be relatively brittle. A few stickers on a stop sign can be enough to prevent a deep learning model from recognizing it as such. For image recognition algorithms to reach their full potential, they'll need to become much more robust.

What makes our intelligence so powerful is not just that we can understand the world, but that we can interact with it. The same will be true for machines. Computers that can learn to recognize sights and sounds are one thing; those that can learn to identify an object as well as how to manipulate it are another altogether. Yet if image and speech recognition are difficult challenges, touch and motor control are far more so. For all their processing power, computers are still remarkably poor at something as simple as picking up a shirt.

The reason: Picking up an object like a shirt isn't just one task, but several. First you need to recognize a shirt as a shirt. Then you need to estimate how heavy it is, how its mass is distributed, and how much friction its surface has. Based on those guesses, you then need to estimate where to grasp the shirt and how much force to apply at each point of your grip, a task made all the more challenging because the shirt's shape and distribution of mass will change as you lift it up. A human does this easily. But for a computer, the uncertainty in any of those calculations compounds across all of them, making it an exceedingly difficult task.
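
A back-of-the-envelope illustration of how that uncertainty compounds; the per-step success rates below are assumptions invented for the example, not measured values.

```python
# If each sub-task estimate is right only some of the time, the chance that
# every estimate in the chain is right shrinks quickly. Illustrative numbers only.
steps = {
    "recognize the shirt": 0.95,
    "estimate its weight": 0.90,
    "estimate mass distribution": 0.85,
    "estimate surface friction": 0.85,
    "choose grasp points and force": 0.80,
}

overall = 1.0
for task, p in steps.items():
    overall *= p

print(f"Chance every estimate is good enough: {overall:.2f}")  # about 0.49
```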

Initially, programmers tried to solve the problem by writing programs that instructed robotic arms how to carry out each task step by step. However, just as rule-based NLP can't account for all possible permutations of language, there is also no way for rule-based robotics to run through all the possible permutations of how an object might be grasped. By the 1980s, it became increasingly clear that robots would need to learn about the world on their own and develop their own intuitions about how to interact with it. Otherwise, there was no way they would be able to reliably complete basic maneuvers like identifying an object, moving toward it, and picking it up.

The current state of the art is something called deep reinforcement learning. As a crude shorthand, you can think of reinforcement learning as trial and error. If a robotic arm tries a new way of picking up an object and succeeds, it rewards itself; if it drops the object, it punishes itself. The more the arm attempts its task, the better it gets at learning good rules of thumb for how to complete it. Coupled with modern computing, deep reinforcement learning has shown enormous promise. For instance, by simulating a variety of robotic hands across thousands of servers, OpenAI recently taught a real robotic hand how to manipulate a cube marked with letters.
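
Here is a minimal sketch of that trial-and-error loop, using simple tabular Q-learning on an invented two-action "grasp" task rather than the deep reinforcement learning systems OpenAI actually uses; the action names, reward values, and learning rate are all assumptions for illustration.

```python
# Tabular Q-learning on a made-up one-step "grasp" task: the agent repeatedly
# tries an action, gets a reward (+1 for a successful grasp, -1 for a drop),
# and nudges its value estimates toward whatever worked. Illustrative only.
import random

actions = ["pinch_grip", "palm_grip"]
success_prob = {"pinch_grip": 0.3, "palm_grip": 0.7}   # hidden from the agent
q_values = {a: 0.0 for a in actions}
learning_rate, epsilon = 0.1, 0.2

for trial in range(2000):
    # Mostly pick the best-looking action, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(q_values, key=q_values.get)

    reward = 1.0 if random.random() < success_prob[action] else -1.0
    # Move the estimate a little toward the observed outcome.
    q_values[action] += learning_rate * (reward - q_values[action])

print(q_values)   # the agent learns that palm_grip earns the higher average reward
```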

Compared with prior research, OpenAI's breakthrough is tremendously impressive. Yet it also shows the limitations of the field. The hand OpenAI built didn't actually feel the cube at all, but instead relied on a camera. For an object like a cube, which doesn't change shape and can be easily simulated in virtual environments, such an approach can work well. But ultimately, robots will need to rely on more than just eyes. Machines with the dexterity and fine motor skills of a human are still a ways away.

When Arthur Samuel coined the term machine learning, he wasn't researching image or speech recognition, nor was he working on robots. Instead, Samuel was tackling one of his favorite pastimes: checkers. Since the game had far too many potential board moves for a rule-based algorithm to encode them all, Samuel devised an algorithm that could teach itself to efficiently look several moves ahead. The algorithm was noteworthy for working at all, much less being competitive with human players. But it also anticipated the astonishing breakthroughs of more recent algorithms like AlphaGo and AlphaGo Zero, which have surpassed all human players at Go, widely regarded as the most intellectually demanding board game in the world.

As with robotics, the best strategic AI relies on deep reinforcement learning. In fact, the algorithm that OpenAI used to power its robotic hand also formed the core of its algorithm for playing Dota 2, a multi-player video game. Although motor control and gameplay may seem very different, both involve the same process: making a sequence of moves over time, and then evaluating whether they led to success or failure. Trial and error, it turns out, is as useful for learning to reason about a game as it is for manipulating a cube.

From Samuel on, the success of computers at board games has posed a puzzle to AI optimists and pessimists alike. If a computer can beat a human at a strategic game like chess, how much can we infer about its ability to reason strategically in other environments? For a long time, the answer was: very little. After all, most board games involve a single player on each side, each with full information about the game, and a clearly preferred outcome. Yet most strategic thinking involves cases where there are multiple players on each side, most or all players have only limited information about what is happening, and the preferred outcome is not clear. For all of AlphaGo's brilliance, you'll note that Google didn't then promote it to CEO, a role that is inherently collaborative and requires a knack for making decisions with incomplete information.

Fortunately, reinforcement learning researchers have recently made progress on both of those fronts. One team outperformed human players at Texas Hold 'Em, a poker game where making the most of limited information is key. Meanwhile, OpenAI's Dota 2 player, which coupled reinforcement learning with what's called a Long Short-Term Memory (LSTM) algorithm, has made headlines for learning how to coordinate the behavior of five separate bots so well that they were able to beat a team of professional Dota 2 players. As the algorithms improve, humans will likely have a lot to learn about optimal strategies for cooperation, especially in information-poor environments. This kind of knowledge would be especially valuable for commanders in military settings, who sometimes have to make decisions without comprehensive information.

Yet there's still one challenge no reinforcement learning algorithm can ever solve. Since the algorithm works only by learning from outcome data, it needs a human to define what the outcome should be. As a result, reinforcement learning is of little use in the many strategic contexts in which the outcome is not always clear. Should corporate strategy prioritize growth or sustainability? Should U.S. foreign policy prioritize security or economic development? No AI will ever be able to settle such higher-order strategic questions, because, ultimately, those are moral or political questions rather than empirical ones. The Pentagon may lean more heavily on AI in the years to come, but it won't be taking over the situation room and automating complex tradeoffs any time soon.

From autonomous cars to multiplayer games, machine learning algorithms can now approach or exceed human intelligence across a remarkable number of tasks. The breakout success of deep learning in particular has led to breathless speculation about both the imminent doom of humanity and its impending techno-liberation. Not surprisingly, all the hype has led several luminaries in the field, such as Gary Marcus and Judea Pearl, to caution that machine learning is nowhere near as intelligent as it is being presented, or that perhaps we should defer our deepest hopes and fears about AI until it is based on more than mere statistical correlations. Even Geoffrey Hinton, a researcher at Google and one of the godfathers of modern neural networks, has suggested that deep learning alone is unlikely to deliver the level of competence many AI evangelists envision.

Where the long-term implications of AI are concerned, the key question about machine learning is this: How much of human intelligence can be approximated with statistics? If all of it can be, then machine learning may well be all we need to get to a true artificial general intelligence. But it's very unclear whether that's the case. As far back as 1969, when Marvin Minsky and Seymour Papert famously argued that neural networks had fundamental limitations, even leading experts in AI have expressed skepticism that machine learning would be enough. Modern skeptics like Marcus and Pearl are only writing the latest chapter in a much older book. And it's hard not to find their doubts at least somewhat compelling. The path forward from the deep learning of today, which can mistake a rifle for a helicopter, is by no means obvious.

Yet the debate over machine learning's long-term ceiling is to some extent beside the point. Even if all research on machine learning were to cease, the state-of-the-art algorithms of today would still have an unprecedented impact. The advances that have already been made in computer vision, speech recognition, robotics, and reasoning will be enough to dramatically reshape our world. Just as happened in the so-called Cambrian explosion, when animals simultaneously evolved the ability to see, hear, and move, the coming decade will see an explosion in applications that combine the ability to recognize what is happening in the world with the ability to move and interact with it. Those applications will transform the global economy and politics in ways we can scarcely imagine today. Policymakers need not wring their hands just yet about how intelligent machine learning may one day become. They will have their hands full responding to how intelligent it already is.

See the original post:
What is machine learning? - Brookings

Machine Learning on AWS

Amazon SageMaker enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. It removes the complexity that gets in the way of successfully implementing machine learning across use cases and industries, from running models for real-time fraud detection, to virtually analyzing biological impacts of potential drugs, to predicting stolen-base success in baseball.

Amazon SageMaker Studio: Experience the first fully integrated development environment (IDE) for machine learning with Amazon SageMaker Studio, where you can perform all ML development steps. You can quickly upload data, create and share new notebooks, train and tune ML models, move back and forth between steps to adjust experiments, debug and compare results, and deploy and monitor ML models, all in a single visual interface, making you much more productive.

Amazon SageMaker Autopilot: Automatically build, train, and tune models with full visibility and control, using Amazon SageMaker Autopilot. It is the industry's first automated machine learning capability that gives you complete control and visibility into how your models were created and what logic was used in creating these models.
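
As a rough sketch of what launching an Autopilot job can look like from the SageMaker Python SDK: the IAM role, S3 path, and target column below are hypothetical placeholders, and the exact arguments may vary by SDK version.

```python
# Minimal Autopilot sketch using the SageMaker Python SDK.
# Role, bucket path, and the target column are hypothetical placeholders.
import sagemaker
from sagemaker.automl.automl import AutoML

session = sagemaker.Session()

automl_job = AutoML(
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder role
    target_attribute_name="is_fraud",       # column Autopilot should learn to predict
    max_candidates=10,                      # cap the number of candidate models explored
    sagemaker_session=session,
)

# Autopilot explores preprocessing, algorithms, and hyperparameters on its own.
automl_job.fit(inputs="s3://my-bucket/fraud/train.csv", wait=False)

# The best candidate it finds can later be deployed as a real-time endpoint.
```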

More:
Machine Learning on AWS

Why 2020 will be the Year of Automated Machine Learning – Gigabit Magazine – Technology News, Magazine and Website

Data is the fuel that powers ongoing digital transformation efforts, and businesses everywhere are looking for ways to derive as much insight as possible from it. The accompanying increased demand for advanced predictive and prescriptive analytics has, in turn, led to a call for more data scientists proficient with the latest artificial intelligence (AI) and machine learning (ML) tools.

But such highly skilled data scientists are expensive and in short supply. In fact, they're such a precious resource that the phenomenon of the citizen data scientist has recently arisen to help close the skills gap. Citizen data scientists fill a complementary role rather than acting as a direct replacement: they lack advanced data science expertise, but they are capable of generating models using state-of-the-art diagnostic and predictive analytics. This capability is partly due to the advent of accessible new technologies, such as automated machine learning (AutoML), that now automate many of the tasks once performed by data scientists.

Algorithms and automation

According to a recent Harvard Business Review article, "Organisations have shifted towards amplifying predictive power by coupling big data with complex automated machine learning. AutoML, which uses machine learning to generate better machine learning, is advertised as affording opportunities to democratise machine learning by allowing firms with limited data science expertise to develop analytical pipelines capable of solving sophisticated business problems."

Comprising a set of algorithms that automate the writing of other ML algorithms, AutoML automates the end-to-end process of applying ML to real-world problems. By way of illustration, a standard ML pipeline is made up of the following: data pre-processing, feature extraction, feature selection, feature engineering, algorithm selection, and hyper-parameter tuning. But the considerable expertise and time it takes to implement these steps means there's a high barrier to entry.
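
For a sense of what those steps look like when wired together by hand, here is a small scikit-learn sketch; the dataset, model choice, and parameter grid are arbitrary examples, and these are exactly the decisions AutoML tools aim to automate.

```python
# Hand-built version of the pipeline steps AutoML automates:
# preprocessing, feature selection, algorithm choice, and hyperparameter tuning.
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),                  # data pre-processing
    ("select", SelectKBest(f_classif)),           # feature selection
    ("model", GradientBoostingClassifier()),      # algorithm selection (fixed here)
])

param_grid = {                                    # hyper-parameter tuning
    "select__k": [10, 20, 30],
    "model__n_estimators": [50, 100],
    "model__learning_rate": [0.05, 0.1],
}

search = GridSearchCV(pipeline, param_grid, cv=3)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```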

AutoML removes some of these constraints. Not only does it significantly reduce the time it would typically take to implement an ML process under human supervision, it can also often improve the accuracy of the model in comparison to hand-crafted models, trained and deployed by humans. In doing so, it offers organisations a gateway into ML, as well as freeing up the time of ML engineers and data practitioners, allowing them to focus on higher-order challenges.

Overcoming scalability problems

The trend for combining ML with Big Data for advanced data analytics began back in 2012, when deep learning became the dominant approach to solving ML problems. This approach heralded the generation of a wealth of new software, tooling, and techniques that altered both the workload and the workflow associated with ML on a large scale. Entirely new ML toolsets, such as TensorFlow and PyTorch, were created, and people increasingly turned to graphics processing units (GPUs) to accelerate their work.

Until this point, companies' efforts had been hindered by the scalability problems associated with running ML algorithms on huge datasets. Now, though, they were able to overcome these issues. By quickly developing sophisticated internal tooling capable of building world-class AI applications, the Big Tech powerhouses soon overtook their Fortune 500 peers when it came to realising the benefits of smarter data-driven decision-making and applications.

Insight, innovation and data-driven decisions

AutoML represents the next stage in ML's evolution, promising to help non-tech companies access the capabilities they need to quickly and cheaply build ML applications.

In 2018, for example, Google launched its Cloud AutoML. Based on Neural Architecture Search (NAS) and transfer learning, it was described by Google executives as having the potential to make AI experts even more productive, advance new fields in AI, and help less-skilled engineers build powerful AI systems they previously only dreamed of.

The one downside to Google's AutoML is that it's a proprietary algorithm. There are, however, a number of alternative open-source AutoML libraries, such as AutoKeras, which was developed by researchers at Texas A&M University and is built around its own NAS algorithm.
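
A minimal sketch of what using such an open-source library can look like, based on the AutoKeras 1.x ImageClassifier interface; the dataset and trial budget are arbitrary, and the API may differ across versions.

```python
# AutoKeras searches over network architectures automatically; the user
# supplies labeled data and a trial budget. Dataset and budget are examples.
import autokeras as ak
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

clf = ak.ImageClassifier(max_trials=3)        # try up to 3 candidate architectures
clf.fit(x_train, y_train, epochs=5)           # NAS picks layers and hyperparameters

print(clf.evaluate(x_test, y_test))           # accuracy of the best model found
```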

Technological breakthroughs such as these have given companies the capability to easily build production-ready models without the need for expensive human resources. By leveraging AI, ML, and deep learning capabilities, AutoML gives businesses across all industries the opportunity to benefit from data-driven applications powered by statistical models - even when advanced data science expertise is scarce.

With organisations increasingly reliant on citizen data scientists, 2020 is likely to be the year that enterprise adoption of AutoML will start to become mainstream. Its ease of access will compel business leaders to finally open the black box of ML, thereby elevating their knowledge of its processes and capabilities. AI and ML tools and practices will become ever more ingrained in businesses' everyday thinking and operations as they become more empowered to identify those projects whose invaluable insight will drive better decision-making and innovation.

By Senthil Ravindran, EVP and global head of cloud transformation and digital innovation, Virtusa

More:
Why 2020 will be the Year of Automated Machine Learning - Gigabit Magazine - Technology News, Magazine and Website

How to Pick a Winning March Madness Bracket – Machine Learning Times – machine learning & data science news – The Predictive Analytics Times

Introduction

In 2019, over 40 million Americans wagered money on March Madness brackets, according to the American Gaming Association. Most of this money was bet in bracket pools, which consist of a group of people each entering their predictions of the NCAA tournament games along with a buy-in. The bracket that comes closest to being right wins. If you also consider the bracket pools where only pride is at stake, the number of participants is much greater. Despite all this attention, most participants do not give themselves the best chance to win because they are focused on the wrong question.

The Right Question

Mistake #3 in Dr. John Elder's Top 10 Data Science Mistakes is to ask the wrong question. A cornerstone of any successful analytics project is having the right project goal; that is, to aim at the right target. If you're like most people, when you fill out your bracket, you ask yourself, "What do I think is most likely to happen?" This is the wrong question to ask if you are competing in a pool, because the objective is to win money, NOT to make the most correct bracket. The correct question to ask is: "What bracket gives me the best chance to win $?" (This requires studying the payout formula. I used ESPN standard scoring (320 possible points per round) with all pool money given to the winner: 10 points are awarded for each correct pick in the round of 64, 20 in the round of 32, and so forth, doubling until 320 are awarded for a correct championship call.)
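
A quick sketch of that scoring scheme, using a simplified scorer that assumes the standard six-round, 63-game bracket:

```python
# ESPN-style standard scoring: 10 points per correct pick in the round of 64,
# doubling each round (10, 20, 40, 80, 160, 320 per game). With 32, 16, 8, 4,
# 2, and 1 games per round, each round is worth 320 points and a perfect
# bracket scores 1920.
POINTS_PER_GAME = [10, 20, 40, 80, 160, 320]

def bracket_score(correct_per_round):
    """correct_per_round: number of correct picks in each of the six rounds."""
    return sum(c * p for c, p in zip(correct_per_round, POINTS_PER_GAME))

# Example: a bracket that nails 25 first-round games, fades in the middle
# rounds, but correctly picks the champion.
print(bracket_score([25, 10, 4, 2, 1, 1]))   # 250 + 200 + 160 + 160 + 160 + 320 = 1250
print(bracket_score([32, 16, 8, 4, 2, 1]))   # perfect bracket: 1920
```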

While these questions seem similar, the brackets they produce will be significantly different.

If you ignore your opponents and pick the teams with the best chance to win games, you will reduce your chance of winning money. Even the strongest team is unlikely to win it all, and even if they do, plenty of your opponents likely picked them as well. The best way to optimize your chances of making money is to choose a champion that has a good chance to win but is unpopular with your opponents.

Knowing how other people in your pool are filling out their brackets is crucial, because it helps you identify teams that are less likely to be picked. One way to see how others are filling out their brackets is via ESPN's Who Picked Whom page (Figure 1). It summarizes how often each team is picked to advance in each round across all ESPN brackets and is a great first step towards identifying overlooked teams.

Figure 1. ESPN's Who Picked Whom Tournament Challenge page

For a team to be overlooked, their perceived chance to win must be lower than their actual chance to win. The Who Picked Whom page provides an estimate of perceived chance to win, but to find undervalued teams we also need estimates of actual chance to win. These can range from a complex prediction model to your own gut feeling. Two sources I trust are 538's March Madness predictions and Vegas futures betting odds. 538's predictions are based on a combination of computer rankings and have predicted performance well in past tournaments. There is also reason to pay attention to Vegas odds, because if they were too far off, the sportsbooks would lose money.

However, both sources have their flaws. 538 is based on computer ratings, so while it avoids human bias, it misses out on expert intuition. Most Vegas sportsbooks likely use both computer ratings and expert intuition to create their betting odds, but they are strongly motivated to have equal betting on all sides, so they are significantly affected by human perception. For example, if everyone were betting on Duke to win the NCAA tournament, the sportsbooks would shorten Duke's odds, reducing the payout, so that more people would bet on other teams and the books would avoid large losses. When calculating win probabilities for this article, I chose to average the 538 and Vegas predictions to obtain a balance I was comfortable with.
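
Here is a small sketch of that averaging step; the moneylines and model probabilities below are invented for illustration, not the actual 2019 numbers. A futures moneyline is first converted to an implied win probability, then averaged with the model's probability.

```python
# Blend a model probability with the implied probability from Vegas futures odds.
# The odds and model numbers below are made up for illustration.
def implied_prob_from_moneyline(moneyline):
    """Convert American moneyline odds to an implied win probability."""
    if moneyline > 0:
        return 100 / (moneyline + 100)
    return -moneyline / (-moneyline + 100)

teams = {
    # team: (hypothetical 538 probability, hypothetical Vegas moneyline)
    "Duke":     (0.19, +225),
    "Gonzaga":  (0.15, +700),
    "Virginia": (0.13, +650),
}

for team, (model_p, moneyline) in teams.items():
    vegas_p = implied_prob_from_moneyline(moneyline)
    blended = (model_p + vegas_p) / 2
    print(f"{team:9s} model={model_p:.2f} vegas={vegas_p:.2f} blended={blended:.2f}")
```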

Let's look at last year. Figure 2 compares a team's perceived chance to win (based on ESPN's Who Picked Whom) to their actual chance to win (based on 538-Vegas averaged predictions) for the leading 2019 NCAA Tournament teams. (Probabilities for all 64 teams in the tournament appear in Table 6 in the Appendix.)

Figure 2. Actual versus perceived chance to win March Madness for 8 top teams

As shown in Figure 2, participants over-picked Duke and North Carolina as champions and under-picked Gonzaga and Virginia. Many factors contributed to these selections; for example, most predictive models, avid sports fans, and bettors agreed that Duke was the best team last year. If you were picking the bracket most likely to occur, then selecting Duke as champion was the natural pick. But ignoring the selections made by others in your pool won't help you win your pool.

While this graph is interesting, how can we turn it into concrete takeaways? Gonzaga and Virginia look like good picks, but what about the rest of the teams hidden in that bottom left corner? Does it ever make sense to pick a team like Texas Tech, which had a 2.6% chance to win it all and only 0.9% of brackets picking them? How much does picking an overvalued favorite like Duke hurt your chances of winning your pool?

To answer these questions, I simulated many bracket pools and found that the teams in Gonzaga's and Virginia's spots are usually the best picks: the most undervalued of the top four to five favorites. However, as the size of your bracket pool increases, overlooked lower seeds like third-seeded Texas Tech or fourth-seeded Virginia Tech become more attractive. The logic for this is simple: the chance that one of these teams wins it all is small, but if they do, then you probably win your pool regardless of the number of participants, because it's likely no one else picked them.

Simulations Methodology

To simulate bracket pools, I first had to simulate brackets. I used an average of the Vegas and 538 predictions to run many simulations of the actual events of March Madness. As discussed above, this method isn't perfect, but it's a good approximation. Next, I used the Who Picked Whom page to simulate many human-created brackets. For each human bracket, I calculated the chance it would win a pool of size n by first finding its percentile ranking among all human brackets, assuming one of the 538-Vegas simulated brackets reflected the real events. This percentile is basically the chance it is better than one random opposing bracket. I raised the percentile to the power n - 1 (the number of opponents), and then repeated for all simulated 538-Vegas brackets, averaging the results to get a single win probability per bracket.

For example, let's say that for one 538-Vegas simulation my bracket is in the 90th percentile of all human brackets, and there are nine other people in my pool. The chance I win the pool would be 0.9^9, or about 0.39. If we assumed a different simulation, then my bracket might only be in the 20th percentile, which would make my win probability 0.2^9, or roughly 0.0000005. By averaging these probabilities over all 538-Vegas simulations, we can estimate a bracket's win probability in a pool of size n, assuming we trust our input sources.
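
A compact sketch of that calculation, with random placeholder percentiles standing in for the full 538-Vegas and Who Picked Whom simulations, which are not published with the article:

```python
# Estimate a bracket's chance of winning an n-person pool from its percentile
# rank under many simulated tournament outcomes. The percentile draws below are
# random placeholders standing in for real 538-Vegas simulations.
import random

def pool_win_probability(percentiles, pool_size):
    """Average, over simulated outcomes, the chance of beating every opponent."""
    opponents = pool_size - 1
    return sum(p ** opponents for p in percentiles) / len(percentiles)

# Placeholder: this bracket's percentile rank under 1,000 simulated tournaments.
simulated_percentiles = [random.betavariate(5, 2) for _ in range(1000)]

for n in (10, 20, 50, 100):
    print(n, round(pool_win_probability(simulated_percentiles, n), 4))
```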

Results

I used this methodology to simulate bracket pools with 10, 20, 50, 100, and 1000 participants. The detailed results of the simulations are shown in Tables 1-6 in the Appendix. Virginia and Gonzaga were the best champion picks when the pool had 50 or fewer participants. Yet, interestingly, Texas Tech and Purdue (3-seeds) and Virginia Tech (4-seed) were equally good or better champion picks when the pool had 100 or more participants.

General takeaways from the simulations:

Additional Thoughts

We have assumed that your local pool makes their selections just like the rest of America, which probably isn't true. If you live close to a team that's in the tournament, then that team will likely be over-picked. For example, I live in Charlottesville (home of the University of Virginia), and Virginia has been picked as the champion in roughly 40% of brackets in my pools over the past couple of years. If you live close to a team with a high seed, one strategy is to start with ESPN's Who Picked Whom odds, and then boost the odds of the popular local team and correspondingly drop the odds for all other teams. Another strategy I've used is to ask people in my pool who they are picking. It is mutually beneficial, since I'd be less likely to pick whoever they are picking.

As a parting thought, I want to describe a scenario from the 2019 NCAA tournament some of you may be familiar with. Auburn, a five seed, was winning by two points in the waning moments of the game, when they inexplicably fouled the other team in the act of shooting a three-point shot with one second to go. The opposing player, a 78% free throw shooter, stepped to the line and missed two out of three shots, allowing Auburn to advance. This isn't an alternate reality; this is how Auburn won their first-round game against 12-seeded New Mexico State. They proceeded to beat powerhouses Kansas, North Carolina, and Kentucky on their way to the Final Four, where they faced the exact same situation against Virginia. Virginia's Kyle Guy made all three of his free throws, and Virginia went on to win the championship.

I add this to highlight an important qualifier of this analysis: it's impossible to accurately predict March Madness. Were the people who picked Auburn to go to the Final Four geniuses? Of course not. Had Terrell Brown of New Mexico State made his free throws, they would have looked silly. There is no perfect model that can predict the future, and those who do well in the pools are not basketball gurus, they are just lucky. Implementing the strategies talked about here won't guarantee a victory; they just reduce the amount of luck you need to win. And even with the best models, you'll still need a lot of luck. It is March Madness, after all.

Appendix: Detailed Analyses by Bracket Sizes

At baseline (randomly), a bracket in a ten-person pool has a 10% chance to win. Table 1 shows how that chance changes based on the round selected for a given team to lose. For example, brackets that had Virginia losing in the Round of 64 won a ten-person pool 4.2% of the time, while brackets that picked them to win it all won 15.1% of the time. As a reminder, these simulations were done with only pre-tournament information; they had no data indicating that Virginia was the eventual champion, of course.

Table 1 Probability that a bracket wins a ten-person bracket pool given that it had a given team (row) making it to a given round (column) and no further

In ten-person pools, the best performing brackets were those that picked Virginia or Gonzaga as the champion, winning 15% of the time. Notably, early round picks did not have a big influence on the chance of winning the pool, the exception being brackets that had a one or two seed losing in the first round. Brackets that had a three seed or lower as champion performed very poorly, but having lower seeds making the Final Four did not have a significant impact on chance of winning.

Table 2 shows the same information for bracket pools with 20 people. The baseline chance is now 5%, and again the best performing brackets are those that picked Virginia or Gonzaga to win. Similarly, picks in the first few rounds do not have much influence. Michigan State has now risen to the third-best champion pick, and interestingly Purdue is the third-best runner-up pick.

Table 2 Probability that a bracket wins a 20-person bracket pool given that it had a given team (row) making it to a given round (column) and no further

When the bracket pool size increases to 50, as shown in Table 3, picking the overvalued favorites (Duke and North Carolina) as champions significantly lowers your baseline chances (2%). The slightly undervalued two and three seeds now raise your baseline chances when selected as champions, but Virginia and Gonzaga remain the best picks.

Table 3 Probability that a bracket wins a 50-person bracket pool given that it had a given team (row) making it to a given round (column) and no further

With the bracket pool size at 100 (Table 4), Virginia and Gonzaga are joined by undervalued three-seeds Texas Tech and Purdue. Picking any of these four raises your baseline chances from 1% to close to 2%. Picking Duke or North Carolina again hurts your chances.

Table 4 Probability that a bracket wins a 100-person bracket pool given that it had a given team (row) making it to a given round (column) and no further

When the bracket pool grows to 1000 people (Table 5), there is a complete changing of the guard. Virginia Tech is now the optimal champion pick, raising your baseline chance of winning your pool from 0.1% to 0.4%, followed by the three-seeds and sixth-seeded Iowa State.

Table 5 Probability that a bracket wins a 1000-person bracket pool given that it had a given team (row) making it to a given round (column) and no further

For reference, Table 6 shows the actual chance to win versus the chance of being picked to win for all teams seeded seventh or better. These chances are derived from the ESPN Who Picked Whom page and the 538-Vegas predictions. The data for the top eight teams in Table 6 is plotted in Figure 2. Notably, Duke and North Carolina are overvalued, while the rest are all at least slightly undervalued.

The teams in bold in Table 6 are examples of teams that are good champion picks in larger pools. They all have a high ratio of actual chance to win to chance of being picked to win, but a low overall actual chance to win.

Table 6 Actual odds to win Championship vs Chance Team is Picked to Win Championship.

Undervalued teams in green; over-valued in red.

About the Author

Robert Robison is an experienced engineer and data analyst who loves to challenge assumptions and think outside the box. He enjoys learning new skills and techniques to reveal value in data. Robert earned a BS in Aerospace Engineering from the University of Virginia, and is completing an MS in Analytics through Georgia Tech.

In his free time, Robert enjoys playing volleyball and basketball, watching basketball and football, reading, hiking, and doing anything with his wife, Lauren.

Go here to read the rest:
How to Pick a Winning March Madness Bracket - Machine Learning Times - machine learning & data science news - The Predictive Analytics Times

Inspur Re-Elected as Member of SPEC OSSC and Chair of SPEC Machine Learning – HPCwire

SAN JOSE, Calif., Feb. 21, 2020 - The international evaluation agency Standard Performance Evaluation Corporation (SPEC) recently finalized the election of new Open System Steering Committee (OSSC) executive members, which include Inspur, Intel, AMD, IBM, Oracle, and three other companies.

It is worth noting that Inspur, a re-elected OSSC member, was also re-elected as chair of the SPEC Machine Learning (SPEC ML) working group. The ML benchmark development plan proposed by Inspur, which aims to provide users with a standard for evaluating machine learning computing performance, has been approved by the members.

SPEC is a global, authoritative third-party application performance testing organization established in 1988. It aims to establish and maintain a series of performance, function, and energy-consumption benchmarks, and it provides important reference standards for users evaluating the performance and energy efficiency of computing systems. The organization consists of 138 well-known technology companies, universities, and research institutions, including Intel, Oracle, NVIDIA, Apple, Microsoft, Inspur, Berkeley, and Lawrence Berkeley National Laboratory, and its test standards have become important indicators for many users evaluating overall computing performance.

The OSSC executive committee is the permanent body of the SPEC OSG (short for Open System Group, the earliest and largest committee established by SPEC). It is responsible for supervising and reviewing the daily work of OSG's major technical groups, handling major issues, admitting and removing members, and setting the direction of research and decisions on testing standards. The OSSC executive committee also manages the development and maintenance of SPEC CPU, SPEC Power, SPEC Java, SPEC Virt, and other benchmarks.

Machine learning is an important direction in AI development. Different computing accelerator technologies such as GPUs, FPGAs, and ASICs, and different AI frameworks such as TensorFlow and PyTorch, provide customers with a rich marketplace of options. The next important thing for customers to consider, however, is how to evaluate the computing efficiency of the various AI computing platforms. Both enterprises and research institutions require a set of benchmarks and methods to effectively measure performance and find the right solution for their needs.

In the past year, Inspur has done much to advance the development of specific components of the SPEC ML standard, contributing test models, architectures, use cases, methods, and more, which has been duly acknowledged by the SPEC organization and its members.

Joe Qiao, General Manager of the Inspur Solution and Evaluation Department, believes that SPEC ML can provide an objective comparison standard for AI/ML applications, which will help users choose a computing system that best meets their application needs. It also provides a unified measurement standard for manufacturers to improve their technologies and solution capabilities, advancing the development of the AI industry.

About Inspur

Inspur is a leading provider of data center infrastructure, cloud computing, and AI solutions, ranking among the world's top three server manufacturers. Through engineering and innovation, Inspur delivers cutting-edge computing hardware design and extensive product offerings to address important technology arenas like open computing, cloud data center, AI, and deep learning. Performance-optimized and purpose-built, our world-class solutions empower customers to tackle specific workloads and real-world challenges. To learn more, please go to www.inspursystems.com.

Source: Inspur

Read the original:
Inspur Re-Elected as Member of SPEC OSSC and Chair of SPEC Machine Learning - HPCwire