Archive for the ‘Alphazero’ Category

How AI is reinventing what computers are – MIT Technology Review

Fall 2021: the season of pumpkins, pecan pies, and peachy new phones. Every year, right on cue, Apple, Samsung, Google, and others drop their latest releases. These fixtures in the consumer tech calendar no longer inspire the surprise and wonder of those heady early days. But behind all the marketing glitz, there's something remarkable going on.

Google's latest offering, the Pixel 6, is the first phone to have a separate chip dedicated to AI that sits alongside its standard processor. And the chip that runs the iPhone has for the last couple of years contained what Apple calls a neural engine, also dedicated to AI. Both chips are better suited to the types of computations involved in training and running machine-learning models on our devices, such as the AI that powers your camera. Almost without our noticing, AI has become part of our day-to-day lives. And it's changing how we think about computing.

What does that mean? Well, computers haven't changed much in 40 or 50 years. They're smaller and faster, but they're still boxes with processors that run instructions from humans. AI changes that on at least three fronts: how computers are made, how they're programmed, and how they're used. Ultimately, it will change what they are for.

"The core of computing is changing from number-crunching to decision-making," says Pradeep Dubey, director of the parallel computing lab at Intel. Or, as MIT CSAIL director Daniela Rus puts it, AI is "freeing computers from their boxes."

The first change concerns how computers, and the chips that control them, are made. Traditional computing gains came as machines got faster at carrying out one calculation after another. For decades the world benefited from chip speed-ups that came with metronomic regularity as chipmakers kept up with Moore's Law.

But the deep-learning models that make current AI applications work require a different approach: they need vast numbers of less precise calculations to be carried out all at the same time. That means a new type of chip is required: one that can move data around as quickly as possible, making sure it's available when and where it's needed. When deep learning exploded onto the scene a decade or so ago, there were already specialty computer chips available that were pretty good at this: graphics processing units, or GPUs, which were designed to display an entire screenful of pixels dozens of times a second.

Anything can become a computer. Indeed, most household objects, from toothbrushes to light switches to doorbells, already come in a smart version.

Now chipmakers like Intel and Arm, as well as Nvidia, which supplied many of the first GPUs, are pivoting to make hardware tailored specifically for AI. Google and Facebook are also forcing their way into this industry for the first time, in a race to find an AI edge through hardware.

For example, the chip inside the Pixel 6 is a new mobile version of Google's tensor processing unit, or TPU. Unlike traditional chips, which are geared toward ultrafast, precise calculations, TPUs are designed for the high-volume but low-precision calculations required by neural networks. Google has used these chips in-house since 2015: they process people's photos and natural-language search queries. Google's sister company DeepMind uses them to train its AIs.

In the last couple of years, Google has made TPUs available to other companies, and these chips, as well as similar ones being developed by others, are becoming the default inside the world's data centers.

AI is even helping to design its own computing infrastructure. In 2020, Google used a reinforcement-learning algorithm, a type of AI that learns how to solve a task through trial and error, to design the layout of a new TPU. The AI eventually came up with strange new designs that no human would think of, but they worked. This kind of AI could one day develop better, more efficient chips.

The second change concerns how computers are told what to do. "For the past 40 years we have been programming computers; for the next 40 we will be training them," says Chris Bishop, head of Microsoft Research in the UK.

Traditionally, to get a computer to do something like recognize speech or identify objects in an image, programmers first had to come up with rules for the computer.

With machine learning, programmers no longer write rules. Instead, they create a neural network that learns those rules for itself. It's a fundamentally different way of thinking.
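As a toy illustration of that shift (not from the article), the snippet below trains a small Keras network on synthetic data instead of hand-coding the labeling rule; all names and numbers are made up for the example.

    # Toy example: let a network learn a rule from labeled examples instead of
    # hand-writing the rule. The data and sizes here are purely illustrative.
    import numpy as np
    import tensorflow as tf

    # 1,000 synthetic "images": the label is whether mean brightness exceeds 0.5,
    # a rule the model is never told explicitly.
    x = np.random.rand(1000, 64).astype("float32")
    y = (x.mean(axis=1) > 0.5).astype("float32")

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(64,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x, y, epochs=5, verbose=0)  # the network infers the rule from examples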

More here:
How AI is reinventing what computers are - MIT Technology Review

graphneural.network – Spektral

Spektral is a Python library for graph deep learning, based on the Keras API and TensorFlow 2. The main goal of this project is to provide a simple but flexible framework for creating graph neural networks (GNNs).

You can use Spektral for classifying the users of a social network, predicting molecular properties, generating new graphs with GANs, clustering nodes, predicting links, and any other task where data is described by graphs.

Spektral implements some of the most popular layers for graph deep learning, including:

and many others (see convolutional layers).

You can also find pooling layers, including:

Spektral also includes lots of utilities for representing, manipulating, and transforming graphs in your graph deep learning projects.
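To make that workflow concrete, here is a minimal sketch of a two-layer node-classification GNN built with Spektral's Keras-style layers. GCNConv and its [node features, adjacency] call convention follow the library's documented API; the layer sizes and input shapes are illustrative assumptions.

    # Minimal sketch of a node-classification GNN with Spektral (illustrative sizes).
    import tensorflow as tf
    from spektral.layers import GCNConv

    n_nodes, n_features, n_classes = 100, 16, 4

    x_in = tf.keras.Input(shape=(n_features,))            # node features
    a_in = tf.keras.Input(shape=(n_nodes,), sparse=True)  # (normalized) adjacency matrix

    h = GCNConv(32, activation="relu")([x_in, a_in])
    out = GCNConv(n_classes, activation="softmax")([h, a_in])

    model = tf.keras.Model(inputs=[x_in, a_in], outputs=out)
    model.compile(optimizer="adam", loss="categorical_crossentropy")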

See how to get started with Spektral and have a look at the examples for some templates.

The source code of the project is available on GitHub. Read the documentation here.

If you want to cite Spektral in your work, refer to our paper:

Graph Neural Networks in TensorFlow and Keras with Spektral, by Daniele Grattarola and Cesare Alippi.

Spektral is compatible with Python 3.6 and above, and is tested on the latest versions of Ubuntu, macOS, and Windows. Other Linux distros should work as well.

The simplest way to install Spektral is from PyPi:
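Assuming the package is published under the library's own name on PyPI, that is:

    pip install spektral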

To install Spektral from source, run this in a terminal:
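A typical from-source install (the repository path below is assumed from the project's GitHub page) looks like:

    git clone https://github.com/danielegrattarola/spektral.git
    cd spektral
    pip install .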

To install Spektral on Google Colab:
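In a Colab notebook cell, the same PyPI install (assumed package name as above) is prefixed with an exclamation mark:

    !pip install spektral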

The 1.0 release of Spektral is an important milestone for the library and brings many new features and improvements.

If you have already used Spektral in your projects, the only major change that you need to be aware of is the new datasets API.

This is a summary of the new features and changes:

Spektral is an open-source project available on Github, and contributions of all types are welcome. Feel free to open a pull request if you have something interesting that you want to add to the framework.

The contribution guidelines are available here and a list of feature requests is available here.

See original here:
graphneural.network - Spektral

MuZero – Wikipedia

Game-playing artificial intelligence

MuZero is a computer program developed by artificial intelligence research company DeepMind to master games without knowing their rules.[1][2][3] Its release in 2019 included benchmarks of its performance in go, chess, shogi, and a standard suite of Atari games. The algorithm uses an approach similar to AlphaZero. It matched AlphaZero's performance in chess and shogi, improved on its performance in Go (setting a new world record), and improved on the state of the art in mastering a suite of 57 Atari games (the Arcade Learning Environment), a visually-complex domain.

MuZero was trained via self-play, with no access to rules, opening books, or endgame tablebases. The trained algorithm used the same convolutional and residual architecture as AlphaZero, but with 20% fewer computation steps per node in the search tree.[4]

MuZero really is discovering for itself how to build a model and understand it just from first principles.

On November 19, 2019, the DeepMind team released a preprint introducing MuZero.

MuZero (MZ) is a combination of the high-performance planning of the AlphaZero (AZ) algorithm with approaches to model-free reinforcement learning. The combination allows for more efficient training in classical planning regimes, such as Go, while also handling domains with much more complex inputs at each stage, such as visual video games.

MuZero was derived directly from AZ code, sharing its rules for setting hyperparameters. Differences between the approaches include:[6]

The previous state of the art technique for learning to play the suite of Atari games was R2D2, the Recurrent Replay Distributed DQN.[7]

MuZero surpassed both R2D2's mean and median performance across the suite of games, though it did not do better in every game.

For board games, MuZero used 16 third-generation tensor processing units (TPUs) for training and 1,000 TPUs for self-play, with 800 simulations per step; for Atari games, it used 8 TPUs for training and 32 TPUs for self-play, with 50 simulations per step.

AlphaZero used 64 first-generation TPUs for training, and 5,000 second-generation TPUs for self-play. As TPU design has improved (third-generation chips are individually twice as powerful as second-generation chips, with further advances in bandwidth and networking across chips in a pod), these are comparable training setups.

R2D2 was trained for 5 days through 2M training steps.

MuZero matched AlphaZero's performance in chess and shogi after roughly 1 million training steps. It matched AZ's performance in Go after 500 thousand training steps and surpassed it by 1 million steps. It matched R2D2's mean and median performance across the Atari game suite after 500 thousand training steps and surpassed it by 1 million steps, though it never performed well on 6 games in the suite.

MuZero was viewed as a significant advancement over AlphaZero,[8] and a generalizable step forward in unsupervised learning techniques.[9][10] The work was seen as advancing understanding of how to compose systems from smaller components, a systems-level development more than a pure machine-learning development.[11]

While only pseudocode was released by the development team, Werner Duvaud produced an open source implementation based on that.[12]

MuZero has been used as a reference implementation in other work, for instance as a way to generate model-based behavior.[13]

View original post here:
MuZero - Wikipedia

Bin Yu

I'm Bin Yu, the head of the Yu Group at Berkeley, which consists of 15-20 students and postdocs from Statistics and EECS. I was formally trained as a statistician, but my research interests and achievements extend beyond the realm of statistics. Together with my group, I have leveraged new computational developments to solve important scientific problems by combining novel statistical machine learning approaches with the domain expertise of my many collaborators in neuroscience, genomics, and precision medicine. We also develop relevant theory to understand random forests and deep learning, for insight into and guidance for practice.

We have developed the PCS framework for veridical data science (or responsible, reliable, and transparent data analysis and decision-making). PCS stands for predictability, computability and stability, and it unifies, streamlines, and expands on ideas and best practices of machine learning and statistics.

In order to augment empirical evidence for decision-making, we are investigating statistical machine learning methods/algorithms (and associated statistical inference problems) such as dictionary learning, non-negative matrix factorization (NMF), EM and deep learning (CNNs and LSTMs), and heterogeneous effect estimation in randomized experiments (X-learner). Our recent algorithms include staNMF for unsupervised learning, iterative Random Forests (iRF) and signed iRF (s-iRF) for discovering predictive and stable high-order interactions in supervised learning, contextual decomposition (CD) and aggregated contextual decomposition (ACD) for interpretation of Deep Neural Networks (DNNs).

Stability expanded, in reality. Harvard Data Science Review (HDSR), 2020.

Data science process: one culture. JASA, 2020.

Minimum information about clinical artificial intelligence modeling: the MI-CLAIM checklist, Nature Medicine, 2020.

Veridical data science (PCS framework), PNAS, 2020 (QnAs with Bin Yu)

Breiman Lecture (video) at NeurIPS "Veridical data Science" (PCS framework and iRF), 2019; updated slides, 2020

Definitions, methods and applications in interpretable machine learning, PNAS, 2019

Data wisdom for data science (blog), 2015

IMS Presidential Address "Let us own data science", IMS Bulletin, 2014

Stability, Bernoulli, 2013

Embracing statistical challenges in the IT age, Technometrics, 2007

Honorary Doctorate, University of Lausanne (UNIL) (Faculty of Business and Economics), June 4, 2021 (Interview of Bin Yu by journalist Nathalie Randin, with an introduction by Dean Jean-Philippe Bonardi of UNIL in French (English translation))

CDSS news on our PCS framework: "A better framework for more robust, trustworthy data science", Oct. 2020

UC Berkeley to lead $10M NSF/Simons Foundation program to investigate theoretical underpinnings of deep learning, Aug. 25, 2020

Curating COVID-19 data repository and forecasting county-level death counts in the US, 2020

Interviewed by PBS Nova about AlphaZero, 2018

Mapping a cell's destiny, 2016

Seeking Data Wisdom, 2015

Member, National Academy of Sciences, 2014

Fellow, American Academy of Arts and Sciences, 2013

One of the 50 best inventions of 2011 by Time Magazine, 2011

The Economist Article, 2011

ScienceMatters @ Berkeley. Dealing with Cloudy Data, 2004

See original here:
Bin Yu

A general reinforcement learning algorithm that masters …

One program to rule them all

Computers can beat humans at increasingly complex games, including chess and Go. However, these programs are typically constructed for a particular game, exploiting its properties, such as the symmetries of the board on which it is played. Silver et al. developed a program called AlphaZero, which taught itself to play Go, chess, and shogi (a Japanese version of chess) (see the Editorial, and the Perspective by Campbell). AlphaZero managed to beat state-of-the-art programs specializing in these three games. The ability of AlphaZero to adapt to various game rules is a notable step toward achieving a general game-playing system.

Science, this issue p. 1140; see also pp. 1087 and 1118

The game of chess is the longest-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. By contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go by reinforcement learning from self-play. In this paper, we generalize this approach into a single AlphaZero algorithm that can achieve superhuman performance in many challenging games. Starting from random play and given no domain knowledge except the game rules, AlphaZero convincingly defeated a world champion program in the games of chess and shogi (Japanese chess), as well as Go.

The study of computer chess is as old as computer science itself. Charles Babbage, Alan Turing, Claude Shannon, and John von Neumann devised hardware, algorithms, and theory to analyze and play the game of chess. Chess subsequently became a grand challenge task for a generation of artificial intelligence researchers, culminating in high-performance computer chess programs that play at a superhuman level (1, 2). However, these systems are highly tuned to their domain and cannot be generalized to other games without substantial human effort, whereas general game-playing systems (3, 4) remain comparatively weak.

A long-standing ambition of artificial intelligence has been to create programs that can instead learn for themselves from first principles (5, 6). Recently, the AlphaGo Zero algorithm achieved superhuman performance in the game of Go by representing Go knowledge with the use of deep convolutional neural networks (7, 8), trained solely by reinforcement learning from games of self-play (9). In this paper, we introduce AlphaZero, a more generic version of the AlphaGo Zero algorithm that accommodates, without special casing, a broader class of game rules. We apply AlphaZero to the games of chess and shogi, as well as Go, by using the same algorithm and network architecture for all three games. Our results demonstrate that a general-purpose reinforcement learning algorithm can learn, tabula rasa (without domain-specific human knowledge or data, as evidenced by the same algorithm succeeding in multiple domains), superhuman performance across multiple challenging games.

A landmark for artificial intelligence was achieved in 1997 when Deep Blue defeated the human world chess champion (1). Computer chess programs continued to progress steadily beyond human level in the following two decades. These programs evaluate positions by using handcrafted features and carefully tuned weights, constructed by strong human players and programmers, combined with a high-performance alpha-beta search that expands a vast search tree by using a large number of clever heuristics and domain-specific adaptations. In (10) we describe these augmentations, focusing on the 2016 Top Chess Engine Championship (TCEC) season 9 world champion Stockfish (11); other strong chess programs, including Deep Blue, use very similar architectures (1, 12).

In terms of game tree complexity, shogi is a substantially harder game than chess (13, 14): It is played on a larger board with a wider variety of pieces; any captured opponent piece switches sides and may subsequently be dropped anywhere on the board. The strongest shogi programs, such as the 2017 Computer Shogi Association (CSA) world champion Elmo, have only recently defeated human champions (15). These programs use an algorithm similar to those used by computer chess programs, again based on a highly optimized alpha-beta search engine with many domain-specific adaptations.

AlphaZero replaces the handcrafted knowledge and domain-specific augmentations used in traditional game-playing programs with deep neural networks, a general-purpose reinforcement learning algorithm, and a general-purpose tree search algorithm.

Instead of a handcrafted evaluation function and move-ordering heuristics, AlphaZero uses a deep neural network (p, v) = f_θ(s) with parameters θ. This neural network f_θ(s) takes the board position s as an input and outputs a vector of move probabilities p with components p_a = Pr(a|s) for each action a, and a scalar value v estimating the expected outcome z of the game from position s, v ≈ E[z|s]. AlphaZero learns these move probabilities and value estimates entirely from self-play; these are then used to guide its search in future games.
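The sketch below is a simplified, hypothetical Keras network with the (p, v) = f_θ(s) interface described above: a shared trunk with a policy head and a value head. The layer sizes, number of input planes, and action count are placeholders, not the paper's configuration.

    # Hypothetical policy-value network with the (p, v) = f_theta(s) interface.
    # Shapes and sizes are placeholders, not the values used in the paper.
    import tensorflow as tf

    def build_policy_value_net(board_shape=(8, 8, 119), n_actions=4672):
        s = tf.keras.Input(shape=board_shape)  # board position s, encoded as planes
        h = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(s)
        h = tf.keras.layers.Flatten()(h)
        p = tf.keras.layers.Dense(n_actions, activation="softmax", name="policy")(h)  # move probabilities p
        v = tf.keras.layers.Dense(1, activation="tanh", name="value")(h)              # expected outcome v in [-1, 1]
        return tf.keras.Model(inputs=s, outputs=[p, v])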

Instead of an alpha-beta search with domain-specific enhancements, AlphaZero uses a general-purpose Monte Carlo tree search (MCTS) algorithm. Each search consists of a series of simulated games of self-play that traverse a tree from root state s_root until a leaf state is reached. Each simulation proceeds by selecting in each state s a move a with low visit count (not previously frequently explored), high move probability, and high value (averaged over the leaf states of simulations that selected a from s) according to the current neural network f_θ. The search returns a vector π representing a probability distribution over moves, π_a = Pr(a|s_root).
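As a rough sketch of the selection rule this paragraph describes (prefer moves with high prior and high average value, discounted by visit count), one common PUCT-style formulation is shown below; the constant c_puct and the data structure are assumptions for illustration, not DeepMind's code.

    # PUCT-style move selection sketch: score = Q(s, a) + U(s, a).
    import math
    from dataclasses import dataclass

    @dataclass
    class Edge:
        prior: float            # move probability p_a from the network
        visit_count: int = 0
        value_sum: float = 0.0  # sum of backed-up values; Q = value_sum / visit_count

    def select_move(edges: dict, c_puct: float = 1.5):
        total = sum(e.visit_count for e in edges.values())
        def score(e: Edge) -> float:
            q = e.value_sum / e.visit_count if e.visit_count else 0.0
            u = c_puct * e.prior * math.sqrt(total) / (1 + e.visit_count)
            return q + u
        return max(edges, key=lambda a: score(edges[a]))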

The parameters θ of the deep neural network in AlphaZero are trained by reinforcement learning from self-play games, starting from randomly initialized parameters θ. Each game is played by running an MCTS from the current position s_root = s_t at turn t and then selecting a move, a_t ∼ π_t, either proportionally (for exploration) or greedily (for exploitation) with respect to the visit counts at the root state. At the end of the game, the terminal position s_T is scored according to the rules of the game to compute the game outcome z: −1 for a loss, 0 for a draw, and +1 for a win. The neural network parameters θ are updated to minimize the error between the predicted outcome v_t and the game outcome z and to maximize the similarity of the policy vector p_t to the search probabilities π_t. Specifically, the parameters θ are adjusted by gradient descent on a loss function l that sums over mean-squared error and cross-entropy losses,

l = (z − v)² − π⊤ log p + c ||θ||²,     (1)

where c is a parameter controlling the level of L2 weight regularization. The updated parameters are used in subsequent games of self-play.
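A minimal sketch of loss (1) in TensorFlow, matching the mean-squared error, cross-entropy, and L2 terms described above; the variable names and the regularization constant are illustrative, not the paper's training code.

    # Illustrative version of l = (z - v)^2 - pi^T log p + c * ||theta||^2.
    import tensorflow as tf

    def alphazero_loss(z, v, pi, p, weights, c=1e-4):
        value_loss = tf.square(z - v)                                      # (z - v)^2
        policy_loss = -tf.reduce_sum(pi * tf.math.log(p + 1e-8), axis=-1)  # -pi^T log p
        l2 = c * tf.add_n([tf.reduce_sum(tf.square(w)) for w in weights])  # c * ||theta||^2
        return tf.reduce_mean(value_loss + policy_loss) + l2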

The AlphaZero algorithm described in this paper [see (10) for the pseudocode] differs from the original AlphaGo Zero algorithm in several respects.

AlphaGo Zero estimated and optimized the probability of winning, exploiting the fact that Go games have a binary win or loss outcome. However, both chess and shogi may end in drawn outcomes; it is believed that the optimal solution to chess is a draw (16–18). AlphaZero instead estimates and optimizes the expected outcome.

The rules of Go are invariant to rotation and reflection. This fact was exploited in AlphaGo and AlphaGo Zero in two ways. First, training data were augmented by generating eight symmetries for each position. Second, during MCTS, board positions were transformed by using a randomly selected rotation or reflection before being evaluated by the neural network, so that the Monte Carlo evaluation was averaged over different biases. To accommodate a broader class of games, AlphaZero does not assume symmetry; the rules of chess and shogi are asymmetric (e.g., pawns only move forward, and castling is different on kingside and queenside). AlphaZero does not augment the training data and does not transform the board position during MCTS.

In AlphaGo Zero, self-play games were generated by the best player from all previous iterations. After each iteration of training, the performance of the new player was measured against the best player; if the new player won by a margin of 55%, then it replaced the best player. By contrast, AlphaZero simply maintains a single neural network that is updated continually rather than waiting for an iteration to complete. Self-play games are always generated by using the latest parameters for this neural network.

As in AlphaGo Zero, the board state is encoded by spatial planes based only on the basic rules for each game. The actions are encoded by either spatial planes or a flat vector, again based only on the basic rules for each game (10).

AlphaGo Zero used a convolutional neural network architecture that is particularly well-suited to Go: The rules of the game are translationally invariant (matching the weight-sharing structure of convolutional networks) and are defined in terms of liberties corresponding to the adjacencies between points on the board (matching the local structure of convolutional networks). By contrast, the rules of chess and shogi are position dependent (e.g., pawns may move two steps forward from the second rank and promote on the eighth rank) and include long-range interactions (e.g., the queen may traverse the board in one move). Despite these differences, AlphaZero uses the same convolutional network architecture as AlphaGo Zero for chess, shogi, and Go.

The hyperparameters of AlphaGo Zero were tuned by Bayesian optimization. In AlphaZero, we reuse the same hyperparameters, algorithm settings, and network architecture for all games without game-specific tuning. The only exceptions are the exploration noise and the learning rate schedule [see (10) for further details].

We trained separate instances of AlphaZero for chess, shogi, and Go. Training proceeded for 700,000 steps (in mini-batches of 4096 training positions) starting from randomly initialized parameters. During training only, 5000 first-generation tensor processing units (TPUs) (19) were used to generate self-play games, and 16 second-generation TPUs were used to train the neural networks. Training lasted for approximately 9 hours in chess, 12 hours in shogi, and 13 days in Go (see table S3) (20). Further details of the training procedure are provided in (10).

Figure 1 shows the performance of AlphaZero during self-play reinforcement learning, as a function of training steps, on an Elo (21) scale (22). In chess, AlphaZero first outperformed Stockfish after just 4 hours (300,000 steps); in shogi, AlphaZero first outperformed Elmo after 2 hours (110,000 steps); and in Go, AlphaZero first outperformed AlphaGo Lee (9) after 30 hours (74,000 steps). The training algorithm achieved similar performance in all independent runs (see fig. S3), suggesting that the high performance of AlphaZero's training algorithm is repeatable.

Elo ratings were computed from games between different players where each player was given 1 s per move. (A) Performance of AlphaZero in chess compared with the 2016 TCEC world champion program Stockfish. (B) Performance of AlphaZero in shogi compared with the 2017 CSA world champion program Elmo. (C) Performance of AlphaZero in Go compared with AlphaGo Lee and AlphaGo Zero (20 blocks over 3 days).

We evaluated the fully trained instances of AlphaZero against Stockfish, Elmo, and the previous version of AlphaGo Zero in chess, shogi, and Go, respectively. Each program was run on the hardware for which it was designed (23): Stockfish and Elmo used 44 central processing unit (CPU) cores (as in the TCEC world championship), whereas AlphaZero and AlphaGo Zero used a single machine with four first-generation TPUs and 44 CPU cores (24). The chess match was played against the 2016 TCEC (season 9) world champion Stockfish [see (10) for details]. The shogi match was played against the 2017 CSA world champion version of Elmo (10). The Go match was played against the previously published version of AlphaGo Zero [also trained for 700,000 steps (25)]. All matches were played by using time controls of 3 hours per game, plus an additional 15 s for each move.

In Go, AlphaZero defeated AlphaGo Zero (9), winning 61% of games. This demonstrates that a general approach can recover the performance of an algorithm that exploited board symmetries to generate eight times as much data (see fig. S1).

In chess, AlphaZero defeated Stockfish, winning 155 games and losing 6 games out of 1000 (Fig. 2). To verify the robustness of AlphaZero, we played additional matches that started from common human openings (Fig. 3). AlphaZero defeated Stockfish in each opening, suggesting that AlphaZero has mastered a wide spectrum of chess play. The frequency plots in Fig. 3 and the time line in fig. S2 show that common human openings were independently discovered and played frequently by AlphaZero during self-play training. We also played a match that started from the set of opening positions used in the 2016 TCEC world championship; AlphaZero won convincingly in this match, too (26) (fig. S4). We played additional matches against the most recent development version of Stockfish (27) and a variant of Stockfish that uses a strong opening book (28). AlphaZero won all matches by a large margin (Fig. 2).

(A) Tournament evaluation of AlphaZero in chess, shogi, and Go in matches against, respectively, Stockfish, Elmo, and the previously published version of AlphaGo Zero (AG0) that was trained for 3 days. In the top bar, AlphaZero plays white; in the bottom bar, AlphaZero plays black. Each bar shows the results from AlphaZero's perspective: win (W; green), draw (D; gray), or loss (L; red). (B) Scalability of AlphaZero with thinking time compared with Stockfish and Elmo. Stockfish and Elmo always receive full time (3 hours per game plus 15 s per move); time for AlphaZero is scaled down as indicated. (C) Extra evaluations of AlphaZero in chess against the most recent version of Stockfish at the time of writing (27) and against Stockfish with a strong opening book (28). Extra evaluations of AlphaZero in shogi were carried out against another strong shogi program, Aperyqhapaq (29), at full time controls and against Elmo under 2017 CSA world championship time controls (10 min per game and 10 s per move). (D) Average result of chess matches starting from different opening positions, either common human positions (see also Fig. 3) or the 2016 TCEC world championship opening positions (see also fig. S4), and average result of shogi matches starting from common human positions (see also Fig. 3). CSA world championship games start from the initial board position. Match conditions are summarized in tables S8 and S9.

AlphaZero plays against (A) Stockfish in chess and (B) Elmo in shogi. In the left bar, AlphaZero plays white, starting from the given position; in the right bar, AlphaZero plays black. Each bar shows the results from AlphaZero's perspective: win (green), draw (gray), or loss (red). The percentage frequency of self-play training games in which this opening was selected by AlphaZero is plotted against the duration of training, in hours.

Table S6 shows 20 chess games played by AlphaZero in its matches against Stockfish. In several games, AlphaZero sacrificed pieces for long-term strategic advantage, suggesting that it has a more fluid, context-dependent positional evaluation than the rule-based evaluations used by previous chess programs.

In shogi, AlphaZero defeated Elmo, winning 98.2% of games when playing black and 91.2% overall. We also played a match under the faster time controls used in the 2017 CSA world championship and against another state-of-the-art shogi program (29); AlphaZero again won both matches by a wide margin (Fig. 2).

Table S7 shows 10 shogi games played by AlphaZero in its matches against Elmo. The frequency plots in Fig. 3 and the time line in fig. S2 show that AlphaZero frequently plays one of the two most common human openings but rarely plays the second, deviating on the very first move.

AlphaZero searches just 60,000 positions per second in chess and shogi, compared with 60 million for Stockfish and 25 million for Elmo (table S4). AlphaZero may compensate for the lower number of evaluations by using its deep neural network to focus much more selectively on the most promising variations (Fig. 4 provides an example from the match against Stockfish), arguably a more humanlike approach to searching, as originally proposed by Shannon (30). AlphaZero also defeated Stockfish when given 1/10 as much thinking time as its opponent (i.e., searching 1/10 as many positions) and won 46% of games against Elmo when given 1/10 as much time (i.e., searching 1/10 as many positions) (Fig. 2). The high performance of AlphaZero with the use of MCTS calls into question the widely held belief (31, 32) that alpha-beta search is inherently superior in these domains.

The search is illustrated for a position (inset) from game 1 (table S6) between AlphaZero (white) and Stockfish (black) after 29. ... Qf8. The internal state of AlphaZero's MCTS is summarized after 10^2, ..., 10^6 simulations. Each summary shows the 10 most visited states. The estimated value is shown in each state, from white's perspective, scaled to the range [0, 100]. The visit count of each state, relative to the root state of that tree, is proportional to the thickness of the border circle. AlphaZero considers 30. c6 but eventually plays 30. d5.

The game of chess represented the pinnacle of artificial intelligence research over several decades. State-of-the-art programs are based on powerful engines that search many millions of positions, leveraging handcrafted domain expertise and sophisticated domain adaptations. AlphaZero is a generic reinforcement learning and search algorithm, originally devised for the game of Go, that achieved superior results within a few hours, searching 1/1000 as many positions, given no domain knowledge except the rules of chess. Furthermore, the same algorithm was applied without modification to the more challenging game of shogi, again outperforming state-of-the-art programs within a few hours. These results bring us a step closer to fulfilling a longstanding ambition of artificial intelligence (3): a general game-playing system that can learn to master any game.

F.-H. Hsu, Behind Deep Blue: Building the Computer That Defeated the World Chess Champion (Princeton Univ., 2002).

C. J. Maddison, A. Huang, I. Sutskever, D. Silver, paper presented at the International Conference on Learning Representations 2015, San Diego, CA, 7 to 9 May 2015.

D. N. L. Levy, M. Newborn, How Computers Play Chess (Ishi Press, 2009).

V. Allis, Searching for solutions in games and artificial intelligence, Ph.D. thesis, Transnational University Limburg, Maastricht, Netherlands (1994).

W. Steinitz, The Modern Chess Instructor (Edition Olms, 1990).

E. Lasker, Common Sense in Chess (Dover Publications, 1965).

J. Knudsen, Essential Chess Quotations (iUniverse, 2000).

N. P. Jouppi, C. Young, N. Patil, D. Patterson, G. Agrawal, R. Bajwa, S. Bates, S. Bhatia, N. Boden, A. Borchers, R. Boyle, P. Cantin, C. Chao, C. Clark, J. Coriell, M. Daley, M. Dau, J. Dean, B. Gelb, T. V. Ghaemmaghami, R. Gottipati, W. Gulland, R. Hagmann, C. R. Ho, D. Hogberg, J. Hu, R. Hundt, D. Hurt, J. Ibarz, A. Jaffey, A. Jaworski, A. Kaplan, H. Khaitan, D. Killebrew, A. Koch, N. Kumar, S. Lacy, J. Laudon, J. Law, D. Le, C. Leary, Z. Liu, K. Lucke, A. Lundin, G. MacKean, A. Maggiore, M. Mahony, K. Miller, R. Nagarajan, R. Narayanaswami, R. Ni, K. Nix, T. Norrie, M. Omernick, N. Penukonda, A. Phelps, J. Ross, M. Ross, A. Salek, E. Samadiani, C. Severn, G. Sizikov, M. Snelham, J. Souter, D. Steinberg, A. Swing, M. Tan, G. Thorson, B. Tian, H. Toma, E. Tuttle, V. Vasudevan, R. Walter, W. Wang, E. Wilcox, D. H. Yoon, in Proceedings of the 44th Annual International Symposium on Computer Architecture, Toronto, Canada, 24 to 28 June 2017 (Association for Computing Machinery, 2017), pp. 1–12.

R. Coulom, in Proceedings of the Sixth International Conference on Computers and Games, Beijing, China, 29 September to 1 October 2008 (Springer, 2008), pp. 113–124.

O. Arenz, Monte Carlo chess, master's thesis, Technische Universität Darmstadt (2012).

O. E. David, N. S. Netanyahu, L. Wolf, in Artificial Neural Networks and Machine Learning – ICANN 2016, Part II, Barcelona, Spain, 6 to 9 September 2016 (Springer, 2016), pp. 88–96.

T. Marsland, Encyclopedia of Artificial Intelligence, S. Shapiro, Ed. (Wiley, 1987).

T. Kaneko, K. Hoki, in Advances in Computer Games: 13th International Conference, ACG 2011, Revised Selected Papers, Tilburg, Netherlands, 20 to 22 November 2011 (Springer, 2012), pp. 158–169.

M. Lai, Giraffe: Using deep reinforcement learning to play chess, master's thesis, Imperial College London (2015).

R. Ramanujan, A. Sabharwal, B. Selman, in Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence (UAI 2010), Catalina Island, CA, 8 to 11 July (AUAI Press, 2010).

K. He, X. Zhang, S. Ren, J. Sun, in Computer Vision – ECCV 2016, 14th European Conference, Part IV, Amsterdam, Netherlands, 11 to 14 October 2016 (Springer, 2016), pp. 630–645.

Acknowledgments: We thank M. Sadler for analyzing chess games; Y. Habu for analyzing shogi games; L. Bennett for organizational assistance; B. Konrad, E. Lockhart, and G. Ostrovski for reviewing the paper; and the rest of the DeepMind team for their support. Funding: All research described in this report was funded by DeepMind and Alphabet. Author contributions: D.S., J.S., T.H., and I.A. designed the AlphaZero algorithm with advice from T.G., A.G., T.L., K.S., M.Lai, L.S., and M.Lan.; J.S., I.A., T.H., and M.Lai implemented the AlphaZero program; T.H., J.S., D.S., M.Lai, I.A., T.G., K.S., D.K., and D.H. ran experiments and/or analyzed data; D.S., T.H., J.S., and D.H. managed the project; D.S., J.S., T.H., M.Lai, I.A., and D.H. wrote the paper. Competing interests: DeepMind has filed the following patent applications related to this work: PCT/EP2018/063869, US15/280,711, and US15/280,784. Data and materials availability: A full description of the algorithm in pseudocode as well as details of additional games between AlphaZero and other programs is available in the supplementary materials.

Follow this link:
A general reinforcement learning algorithm that masters ...