Archive for the ‘Alphazero’ Category

Why asking an AI to explain itself can make things worse – MIT Technology Review

Upol Ehsan once took a test ride in an Uber self-driving car. Instead of fretting about the empty driver’s seat, anxious passengers were encouraged to watch a pacifier screen that showed a car’s-eye view of the road: hazards picked out in orange and red, safe zones in cool blue.

For Ehsan, who studies the way humans interact with AI at the Georgia Institute of Technology in Atlanta, the intended message was clear: “Don’t get freaked out; this is why the car is doing what it’s doing.” But something about the alien-looking street scene highlighted the strangeness of the experience rather than reassuring him. It got Ehsan thinking: what if the self-driving car could really explain itself?

The success of deep learning is due to tinkering: the best neural networks are tweaked and adapted to make better ones, and practical results have outpaced theoretical understanding. As a result, the details of how a trained model works are typically unknown. We have come to think of them as black boxes.

A lot of the time we’re okay with that when it comes to things like playing Go or translating text or picking the next Netflix show to binge on. But if AI is to be used to help make decisions in law enforcement, medical diagnosis, and driverless cars, then we need to understand how it reaches those decisions, and to know when they are wrong.

“People need the power to disagree with or reject an automated decision,” says Iris Howley, a computer scientist at Williams College in Williamstown, Massachusetts. Without this, people will push back against the technology. “You can see this playing out right now with the public response to facial recognition systems,” she says.

Ehsan is part of a small but growing group of researchers trying to make AIs better at explaining themselves, to help us look inside the black box. The aim of so-called interpretable or explainable AI (XAI) is to help people understand what features in the data a neural network is actually learning, and thus whether the resulting model is accurate and unbiased.

One solution is to build machine-learning systems that show their workings: so-called glassbox, as opposed to black-box, AI. Glassbox models are typically much-simplified versions of a neural network, in which it is easier to track how different pieces of data affect the model.

“There are people in the community who advocate for the use of glassbox models in any high-stakes setting,” says Jennifer Wortman Vaughan, a computer scientist at Microsoft Research. “I largely agree.” Simple glassbox models can perform as well as more complicated neural networks on certain types of structured data, such as tables of statistics. For some applications that’s all you need.

But it depends on the domain. If we want to learn from messy data like images or text, we’re stuck with deep, and thus opaque, neural networks. The ability of these networks to draw meaningful connections between very large numbers of disparate features is bound up with their complexity.

Even here, glassbox machine learning could help. One solution is to take two passes at the data, training an imperfect glassbox model as a debugging step to uncover potential errors that you might want to correct. Once the data has been cleaned up, a more accurate black-box model can be trained.
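
This two-pass workflow is easy to sketch. The snippet below is a minimal illustration under assumed model choices (a logistic regression as the glassbox, gradient boosting as the black box, and synthetic data); it is not taken from any particular paper or product.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    import numpy as np

    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

    # Pass 1: a simple, inspectable model used purely for debugging.
    glassbox = LogisticRegression(max_iter=1000).fit(X, y)
    weights = np.abs(glassbox.coef_[0])
    print("features the model leans on most:", np.argsort(weights)[::-1][:3])

    # After reviewing those features (and cleaning or dropping any that
    # encode errors or bias), train the more accurate black-box model.
    blackbox = GradientBoostingClassifier().fit(X, y)
    print("final model accuracy:", blackbox.score(X, y))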

It’s a tricky balance, however. Too much transparency can lead to information overload. In a 2018 study looking at how non-expert users interact with machine-learning tools, Vaughan found that transparent models can actually make it harder to detect and correct the model’s mistakes.

Another approach is to include visualizations that show a few key properties of the model and its underlying data. The idea is that you can see serious problems at a glance. For example, the model could be relying too much on certain features, which could signal bias.

These visualization tools have proved incredibly popular in the short time they’ve been around. But do they really help? In the first study of its kind, Vaughan and her team have tried to find out, and exposed some serious issues.

The team took two popular interpretability tools that give an overview of a model via charts and data plots, highlighting things that the machine-learning model picked up on most in training. Eleven AI professionals were recruited from within Microsoft, all different in education, job roles, and experience. They took part in a mock interaction with a machine-learning model trained on a national income data set taken from the 1994 US census. The experiment was designed specifically to mimic the way data scientists use interpretability tools in the kinds of tasks they face routinely.

What the team found was striking. Sure, the tools sometimes helped people spot missing values in the data. But this usefulness was overshadowed by a tendency to over-trust and misread the visualizations. In some cases, users couldn’t even describe what the visualizations were showing. This led to incorrect assumptions about the data set, the models, and the interpretability tools themselves. And it instilled a false confidence about the tools that made participants more gung-ho about deploying the models, even when they felt something wasn’t quite right. Worryingly, this was true even when the output had been manipulated to show explanations that made no sense.

To back up the findings from their small user study, the researchers then conducted an online survey of around 200 machine-learning professionals recruited via mailing lists and social media. They found similar confusion and misplaced confidence.

Worse, many participants were happy to use the visualizations to make decisions about deploying the model despite admitting that they did not understand the math behind them. “It was particularly surprising to see people justify oddities in the data by creating narratives that explained them,” says Harmanpreet Kaur at the University of Michigan, a coauthor on the study. “The automation bias was a very important factor that we had not considered.”

Ah, the automation bias. In other words, people are primed to trust computers. It’s not a new phenomenon. When it comes to automated systems, from aircraft autopilots to spell checkers, studies have shown that humans often accept the choices they make even when they are obviously wrong. But when this happens with tools designed to help us avoid this very phenomenon, we have an even bigger problem.

What can we do about it? For some, part of the trouble with the first wave of XAI is that it is dominated by machine-learning researchers, most of whom are expert users of AI systems. Says Tim Miller of the University of Melbourne, who studies how humans use AI systems: “The inmates are running the asylum.”

This is what Ehsan realized sitting in the back of the driverless Uber. It is easier to understand what an automated system is doing, and to see when it is making a mistake, if it gives reasons for its actions the way a human would. Ehsan and his colleague Mark Riedl are developing a machine-learning system that automatically generates such rationales in natural language. In an early prototype, the pair took a neural network that had learned how to play the classic 1980s video game Frogger and trained it to provide a reason every time it made a move.

To do this, they showed the system many examples of humans playing the game while talking out loud about what they were doing. They then took a neural network for translating between two natural languages and adapted it to translate instead between actions in the game and natural-language rationales for those actions. Now, when the neural network sees an action in the game, it translates it into an explanation. The result is a Frogger-playing AI that says things like “I’m moving left to stay behind the blue truck” every time it moves.
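
In outline this is a standard sequence-to-sequence translation model, retargeted from language pairs to action-rationale pairs. The sketch below assumes PyTorch and uses toy vocabularies invented for illustration; it shows the shape of the idea, not Ehsan and Riedl’s actual system.

    import torch
    import torch.nn as nn

    ACTIONS = ["up", "down", "left", "right", "wait"]          # source side
    RATIONALES = ["<s>", "</s>", "i", "move", "left", "to",
                  "stay", "behind", "the", "truck"]            # target side

    class RationaleTranslator(nn.Module):
        """Encode a window of recent game actions, decode a rationale."""
        def __init__(self, hidden=64):
            super().__init__()
            self.src_embed = nn.Embedding(len(ACTIONS), hidden)
            self.tgt_embed = nn.Embedding(len(RATIONALES), hidden)
            self.encoder = nn.GRU(hidden, hidden, batch_first=True)
            self.decoder = nn.GRU(hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, len(RATIONALES))

        def forward(self, actions, rationale_prefix):
            _, h = self.encoder(self.src_embed(actions))     # summarize play
            dec, _ = self.decoder(self.tgt_embed(rationale_prefix), h)
            return self.out(dec)                             # next-token logits

    # Training pairs recorded action windows with think-aloud transcripts;
    # at play time the decoder is run token by token from "<s>".
    model = RationaleTranslator()
    logits = model(torch.tensor([[2, 2, 4]]),        # left, left, wait
                   torch.tensor([[0, 2, 3]]))        # "<s> i move"
    print(logits.shape)                              # torch.Size([1, 3, 10])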

Ehsan and Riedl’s work is just a start. For one thing, it is not clear whether a machine-learning system will always be able to provide a natural-language rationale for its actions. Take DeepMind’s board-game-playing AI AlphaZero. One of the most striking features of the software is its ability to make winning moves that most human players would not think to try at that point in a game. If AlphaZero were able to explain its moves, would they always make sense?

“Reasons help whether we understand them or not,” says Ehsan. “The goal of human-centered XAI is not just to make the user agree to what the AI is saying; it is also to provoke reflection.” Riedl recalls watching the livestream of the tournament match between DeepMind’s AI and Korean Go champion Lee Sedol. The commentators were talking about what AlphaGo was seeing and thinking. “That wasn’t how AlphaGo worked,” says Riedl. “But I felt that the commentary was essential to understanding what was happening.”

What this new wave of XAI researchers agree on is that if AI systems are to be used by more people, those people must be part of the design from the start, and different people need different kinds of explanations. (This is backed up by a new study from Howley and her colleagues, in which they show that people’s ability to understand an interactive or static visualization depends on their education levels.) “Think of a cancer-diagnosing AI,” says Ehsan. “You’d want the explanation it gives to an oncologist to be very different from the explanation it gives to the patient.”

Ultimately, we want AIs to explain themselves not only to data scientists and doctors but to police officers using face recognition technology, teachers using analytics software in their classrooms, students trying to make sense of their social-media feeds, and anyone sitting in the backseat of a self-driving car. “We’ve always known that people over-trust technology, and that’s especially true with AI systems,” says Riedl. “The more you say it’s smart, the more people are convinced that it’s smarter than they are.”

Explanations that anyone can understand should help pop that bubble.

See more here:
Why asking an AI to explain itself can make things worse - MIT Technology Review

What will happen when robots have taken all the jobs? – Telegraph.co.uk

To some this will sound like a nanny-state hellscape, and Susskind does not shy from calling his proposed solution “The Big State”. He does not, however, go into detail about how exactly the community will decide which activities are worthy of payment. Perhaps we will be subject to the tyranny of a slim majority that decides dog-breeding, classical music or literary criticism are valueless activities, in which case no one will ever do them again.

But the moral objection to UBI, that it will encourage laziness and anomie, is always at bottom a puritan condescension. If one asked Susskind whether, if he never had to worry about money, he would just spend all day watching reruns of Bake Off and slumping into potato-ish ennui, he would probably deny it. So why assume it of everyone else?

As it turns out, Bertrand Russell anticipated this objection 90 years ago: “It will be said that while a little leisure is pleasant, men would not know how to fill their days if they had only four hours’ work out of the 24. Insofar as this is true in the modern world it is a condemnation of our civilisation; it would not have been true at any earlier period. There was formerly a capacity for light-heartedness and play which has been to some extent inhibited by the cult of efficiency.”

Modern sceptics might still dismiss Russell’s argument as a Fabian pipe-dream, but the cult of efficiency is still very much abroad, and it is indeed what is driving the race to automation. Susskind’s careful analysis shows that it will be an increasingly unignorable problem, even if his proposed solution will not convince everyone. At the last gasp, he even drops in the alarming recommendation that our future politicians should guide us on what it means to live a flourishing life, in the face of which prospect one might after all be happier to resign oneself to a robot apocalypse.

A World Without Work is published by Allen Lane at £20. To order your copy for £16.99, call 0844 871 1514 or visit the Telegraph Bookshop

View original post here:
What will happen when robots have taken all the jobs? - Telegraph.co.uk

AI Can Do Great Things – if It Doesn’t Burn the Planet – WIRED

Last month, researchers at OpenAI in San Francisco revealed an algorithm capable of learning, through trial and error, how to manipulate the pieces of a Rubik's Cube using a robotic hand. It was a remarkable research feat, but it required more than 1,000 desktop computers plus a dozen machines running specialized graphics chips crunching intensive calculations for several months.

The effort may have consumed about 2.8 gigawatt-hours of electricity, estimates Evan Sparks, CEO of Determined AI, a startup that provides software to help companies manage AI projects. That’s roughly equal to the output of three nuclear power plants for an hour. A spokesperson for OpenAI questioned the calculation, noting that it makes several assumptions. But OpenAI declined to disclose further details of the project or offer an estimate of the electricity it consumed.
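
The comparison is easy to sanity-check if one assumes, as a round number, that a large nuclear plant outputs about one gigawatt (our assumption, not a figure from the article):

    # Back-of-the-envelope check of the "three nuclear plants for an hour" line.
    gw_per_plant, plants, hours = 1.0, 3, 1
    print(gw_per_plant * plants * hours)   # 3.0 GWh, close to the 2.8 GWh estimate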

Artificial intelligence routinely produces startling achievements, as computers learn to recognize images, converse, beat humans at sophisticated games, and drive vehicles. But all those advances require staggering amounts of computing power, and electricity, to devise and train algorithms. And as the damage caused by climate change becomes more apparent, AI experts are increasingly troubled by those energy demands.

“The concern is that machine-learning algorithms in general are consuming more and more energy, using more data, training for longer and longer,” says Sasha Luccioni, a postdoctoral researcher at Mila, an AI research institute in Canada.

It’s not just a worry for academics. As more companies across more industries begin to use AI, there’s growing fear that the technology will only deepen the climate crisis. Sparks says that Determined AI is working with a pharmaceutical firm that’s already using huge AI models. “As an industry, it’s worth thinking about how we want to combat this,” he adds.

Some AI researchers are thinking about it. They’re using tools to track the energy demands of their algorithms, or taking steps to offset their emissions. A growing number are touting the energy efficiency of their algorithms in research papers and at conferences. As the costs of AI rise, the AI industry is developing a new appetite for algorithms that burn fewer kilowatts.

Luccioni recently helped launch a website that lets AI researchers roughly calculate the carbon footprint of their algorithms. She is also testing a more sophisticated approach: code that can be added to an AI program to track the energy use of individual computer chips. Luccioni and others are also trying to persuade companies that offer tools for tracking the performance of code to include some measure of energy or carbon footprint. “Hopefully this will go toward full transparency,” she says. “So that people will include in the footnotes ‘we emitted X tons of carbon, which we offset.’”
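
The per-chip idea can be sketched in a few lines. The code below is illustrative and is not Luccioni’s tool: it assumes an NVIDIA GPU, reads power draw through the standard nvidia-smi query interface, and applies a rough global-average grid carbon intensity that in practice varies widely by region.

    import subprocess
    import time

    KG_CO2_PER_KWH = 0.475   # rough global grid average; an assumption

    def gpu_power_watts():
        # power.draw is a standard nvidia-smi query field, reported in watts
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=power.draw",
             "--format=csv,noheader,nounits"])
        return float(out.split()[0])

    def tracked_training(train_step, steps):
        joules, last = 0.0, time.time()
        for _ in range(steps):
            train_step()                                  # one training step
            now = time.time()
            joules += gpu_power_watts() * (now - last)    # watts x seconds
            last = now
        kwh = joules / 3.6e6                              # joules to kWh
        print(f"{kwh:.4f} kWh, ~{kwh * KG_CO2_PER_KWH:.4f} kg CO2")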

The energy required to power cutting-edge AI has been on a steep upward curve for some time. Data published by OpenAI shows that the computing power required for key AI landmarks over the past few years, such as DeepMind’s Go-playing program AlphaZero, has doubled roughly every 3.4 months, increasing 300,000 times between 2012 and 2018. That’s faster than the rate at which computing power historically increased, the phenomenon known as Moore’s Law (named after Gordon Moore, cofounder of Intel).
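
The two figures are mutually consistent: a 300,000-fold increase is about 18 doublings, and 18 doublings at 3.4 months each span a little over five years, which fits the 2012-2018 window.

    import math

    doublings = math.log2(300_000)   # about 18.2 doublings
    months = doublings * 3.4         # about 62 months, just over five years
    print(round(doublings, 1), round(months, 1))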

Recent advances in natural language processing, an AI technique that helps machines parse, interpret, and generate text, have proven especially power-hungry. A research paper from a team at UMass Amherst found that training a single large NLP model may consume as much energy as a car over its entire lifetime, including the energy needed to build it.

Training a powerful machine-learning algorithm often means running huge banks of computers for days, if not weeks. The fine-tuning required to perfect an algorithm, by, for example, searching through different neural network architectures to find the best one, can be especially computationally intensive. For all the hand-wringing, though, it remains difficult to measure how much energy AI actually consumes, and even harder to predict how much of a problem it could become.
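
The reason architecture search is so expensive is simple multiplication: every candidate network is itself a training run. With invented numbers for illustration:

    # Both figures below are made up purely to show the multiplier effect.
    candidate_architectures = 100
    gpu_hours_per_training_run = 24
    print(candidate_architectures * gpu_hours_per_training_run,
          "GPU-hours spent on the search before the final model is trained")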

Read more:
AI Can Do Great Things – if It Doesn't Burn the Planet - WIRED

Chess: Magnus Carlsen to face arch rival Anish Giri in opening round at Wijk – The Guardian

The world champion, Magnus Carlsen, starts his 2020 campaign on Saturday when he meets his arch rival Anish Giri in the opening round at Tata Steel Wijk aan Zee, the traditional Dutch tournament which he has dominated ever since he won its C group aged 13. In his past eight Masters appearances there Carlsen has won seven times and placed second once.

Carlsen and Giri have had some sharp clashes on Twitter, and a highlight of the 29-year-old Norwegian’s interview with the Guardian on Thursday was his relish in recounting how he psychologically crushed the Dutch champion when they met at Zagreb last summer.

The Wijk pairings have been kind to Carlsen in his quest to set a world record streak of 111 games unbeaten, breaking Sergei Tiviakov’s mark of 110, set against lesser opponents in 2004-05. He is in the top half of the draw with an extra White, and will hope for full points from some of his next opponents: Yu Yangyi, Jeffery Xiong and Jorden van Foreest.

Carlsen is in the best form of his career after his vintage 2019, when he won 10 elite events, was unbeaten in classical play, held three global crowns, and in his spare time briefly reached No 1 in Fantasy Premier League. The fifth round at Wijk will be played in PSV Eindhoven’s Philips Stadion. Rounds start at 1.30pm and are free and live to watch online with grandmaster and computer commentaries.

Last summer, when Carlsen triumphed in Zagreb, where his game was zestful and sharp after his work with AlphaZero, he looked ready to break his own record rating of 2889 points and go for a round 2900. That proved a bridge too far, and he starts the year at 2872. He will not gain all 28 points back at Wijk, but a strong performance there would set up another shot at the record in the spring.

Dangers abound. Fabiano Caruana, the world No 2, kept a lower profile in 2019 but will aim for a good start to the year before the candidates tournament in March, where the American hopes to qualify for a world title rematch and avenge his 2018 defeat. Wesley So, the winner at Wijk in 2017, Carlsen’s only blemish there, crushed him in Oslo for the Fischer Random title.

Alireza Firouzja, the 16-year-old whose world blitz game against Carlsen sparked a huge controversy, will aim to match Bobby Fischer, Boris Spassky and Carlsen himself, who all showed their world class at that age.

Tata Steel Wijk is such a reliably classic fixture to launch the chess year that it is easy to forget that its future is not assured. Steel production is in severe decline in Europe, and 2019 was a poor year for Tata Steel Netherlands. In November the Indian multinational announced job cuts which may involve nearly 20% of its 9,000-strong Dutch workforce.

China’s Ju Wenjun, 28, took a 2.5-1.5 lead on Thursday in her women’s world title defence against Russia’s Aleksandra Goryachkina, 21. Their 12-game series has a record €500,000 prize fund for a women’s world championship, though this is still only a fraction of what Carlsen and his challenger will earn later this year. The first half is in Shanghai, with a 7.30am start, and the second half in Vladivostok from 5.30am.

For most of this century the women’s title has been decided by a 64-player knockout, leading to a rapid turnover of champions, but the format has now reverted to a candidates tournament and a title match. Nigel Short is trebling up as chairman of the appeals committee, official match commentator and Fide representative.

The women’s match that would attract the most interest from chess fans, between the two strongest female players of all time, has never happened except for a single game in the 2012 Gibraltar Open. Judit Polgar v Hou Yifan is the female version of Bobby Fischer v Anatoly Karpov, the legends match that never was. It could still happen if Rex Sinquefield, who organises many similar events at St Louis, gets involved.

3653 After 1...Rd6+? 2 Kc3 Qf3+ 3 Qe3 Black’s checks ran out and White won with his extra rook. Instead 1...Qf3+! wins after 2 Kc4 (2 Kc2 Re2 wins Q for R) Re4+! 3 Kc5 Qa3+! when 4 Rb4 a5 and 4 Kxc6 Qa6+ both win a rook, after which Black’s extra pawns decide.

Read the original here:
Chess: Magnus Carlsen to face arch rival Anish Giri in opening round at Wijk - The Guardian

Chess: Carlsen wins speed titles after controversial game with rising star – The Guardian

Magnus Carlsen ended his vintage year of 2019 as he began it: as a superb all-round player who outclasses his rivals. Carlsen won at Wijk in January last year and at Moscow in December, where he took both the world 30-minute rapid and the five-minute blitz crowns, losing only one game out of 38.

Overall the Norwegian, 29, won 10 elite tournaments over the year, with just two odd failures at speed chess in St Louis and at Fischer Random in Oslo. The standout difference between today’s champion and Bobby Fischer and Garry Kasparov is that Carlsen has been far more active than the other legends were in their peak years, taking on new challenges with hardly a break. And in his spare time he briefly became world No 1 in Fantasy Premier League. True, Kasparov was No 1 for some 21 years, while Carlsen is at eight years and counting.

Carlsen’s style has become sharper since he worked in 2018 with AlphaZero and the creative tactician Daniil Dubov: “For me it is easier to play for a win. Perhaps the others risk more if they do so. I think that’s the brutal truth. If you are a bit better you can afford to take more risks.”

It will be different in 2020, as Carlsen has already announced. “I will definitely play less. I have played a lot this year and my level of energy has become empty at the end. Not realistic to play as much in 2020,” he said.

Three major targets remain. At Tata Steel Wijk aan Zee, starting on 11 January, he can break Sergei Tiviakov’s record of 110 classical games unbeaten. Carlsen missed out on a 2900 classical rating despite getting near it in mid-year, so this can be a 2020 target. His current rating is 2872 and his all-time peak remains 2889.

Perhaps most of all, Carlsen will want to defend his title more convincingly than in 2014, when with the scores level at 2.5 each Vishy Anand missed a simple winning chance, or in 2016 and 2018, when the classical scores were tied at 6-6 before Carlsen defeated Sergey Karjakin and Fabiano Caruana in speed tie-breaks. As of now, Caruana and China’s Ding Liren are the favourites to win the candidates in March, and Carlsen respects them both as serious contenders.

Aside from Carlsen, the main talking point at Moscow was Alireza Firouzja, who quit his native Iran due to its ban on playing Israelis and will probably represent France, where he now lives.

The 16-year-old is already perceived as a potential world title challenger in the mid-2020s, so the dramatic end to his blitz game with the champion, where he missed several wins before his controversial loss on time, has become compulsive viewing.

The final position, where Carlsen had a lone bishop and a tablebase draw, was a loss for Firouzja under Fide rules, because a mating position was legally possible. The teenager often plays blitz games on websites where the rule is different: a position such as white Ka8 and Pa6 against black Kc7 and Nc8, where with White to move 1 a7 Nb6 mate is forced, may become a draw online if White loses on time and the server then decrees that Black lacks mating material.

Firouzja asked to see the Fide rule in print, an action paralleled long ago when Yuri Averbakh and Viktor Korchnoi were unsure of the rules on castling. His appeal against the result, in which he alleged he was disturbed by Carlsen speaking in Norwegian, was doomed to fail because he had not complained during the game. Carlsen was magnanimous afterwards, but such incidents can have lasting effects on relationships between players.

Hastings has its final two rounds on Saturday and Sunday afternoon (2.15pm start). Online viewing is available on two different sites and includes computer commentary.

3652 1...Bxg2+! 2 Rxg2 and now Duda fell for 2...Re1+?? 3 Rg1 Qc1 4 Rxh5+! Instead 2...Qc1+! 3 Qg1 (3 Rg1 Rxh2+) Re1 wins for Black.

Read more:
Chess: Carlsen wins speed titles after controversial game with rising star - The Guardian