Archive for the ‘Alphago’ Category

These 6 Incredible Discoveries From The Past Decade Have Changed Science Forever – ScienceAlert

From finding the building blocks for life on Mars to breakthroughs in gene editing and the rise of artificial intelligence, here are six major scientific discoveries that shaped the 2010s - and what leading experts say could come next.

We don't yet know whether there was ever life on Mars - but thanks to a small, six-wheeled robot, we do know the Red Planet was habitable.

Shortly after landing on 6 August 2012, NASA's Curiosity rover discovered rounded pebbles - new evidence that rivers flowed there billions of years ago.

The proof has since multiplied, showing there was in fact a lot of water on Mars - the surface was covered in hot springs, lakes, and maybe even oceans.

A crater on the Red Planet filled with water ice. (ESA/DLR/FU Berlin, CC BY-SA 3.0 IGO)

Curiosity also discovered complex organic molecules - what NASA calls the building blocks of life - in 2014.

And so the hunt continues for signs that Earth-based life is not (or wasn't always) alone.

Two new rovers will be launched next year - America's Mars 2020 and Europe's Rosalind Franklin - to look for signs of ancient microbes.

"Going into the coming decade, Mars research will shift from the question 'Was Mars habitable?' to 'Did (or does) Mars support life?'" said Emily Lakdawalla, a geologist at The Planetary Society.

We had long thought of the little corner of the Universe that we call home as unique, but observations made thanks to the Kepler space telescope blew apart those pretensions.

Launched in 2009, the Kepler mission helped identify more than 2,600 planets outside of our Solar System, also known as exoplanets - and astronomers now believe that, on average, every star hosts at least one planet, meaning there are billions out there.

NASA launched Kepler's successor, TESS, in 2018, continuing the search for worlds that could harbor extraterrestrial life.

Expect more detailed analysis of the chemical composition of these planets' atmospheres in the 2020s, said Tim Swindle, an astrophysicist at the University of Arizona.

We also got our first glimpse of a black hole this year thanks to the groundbreaking work of the Event Horizon Telescope collaboration.

(Event Horizon Telescope Collaboration)

"What I predict is that by the end of the next decade, we will be making high quality real-time movies of black holes that reveal not just how they look, but how they act on the cosmic stage," Shep Doeleman, the project's director, told AFP.

But one event from the decade undoubtedly stood above the rest: the first direct detection, on 14 September 2015, of gravitational waves - ripples in the fabric of the universe.

The collision of two black holes 1.3 billion years earlier was so powerful that it sent waves across the cosmos - waves that bend space and travel at the speed of light. That morning, they finally reached Earth.

The phenomenon had been predicted by Albert Einstein a century earlier in his theory of general relativity, and here was proof that he was right all along.

Three Americans won the Nobel prize in physics in 2017 for their work on the project, and there have been many more gravitational waves detected since.

Cosmologists, meanwhile, continue to debate the origin and composition of the universe. The invisible dark matter that makes up most of the universe's matter remains one of the greatest puzzles to solve.

"We're dying to know what it might be," said cosmologist James Peebles, who won this year's Nobel prize in physics.

Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) - a family of DNA sequences - is a phrase that doesn't exactly roll off the tongue.

(Meletios Verras/iStock)

But the field of biomedicine can now be divided into two eras, one defined during the past decade: before and after CRISPR-Cas9 (or CRISPR for short), the basis for a gene editing technology.

"CRISPR-based gene editing stands above all the others," William Kaelin, a 2019 Nobel prize winner for medicine, told AFP.

In 2012, Emmanuelle Charpentier and Jennifer Doudna reported that they had developed a new tool that exploits the immune defense system of bacteria to edit the genes of other organisms.

It is much simpler, cheaper, and easier to use in small labs than preceding technologies.

Charpentier and Doudna were showered in awards, but the technique is also far from perfect and can create unintended mutations.

Experts believe this may have happened to Chinese twins born in 2018 as a result of edits performed by a researcher who was widely criticized for ignoring scientific and ethical norms.

Still, CRISPR remains one of the biggest science stories of recent years, with Kaelin predicting an "explosion" in its use to combat human disease.

For decades, doctors had three main weapons to fight cancer: surgery, chemotherapy drugs, and radiation.

The 2010s saw the rise of a fourth, one that was long doubted: immunotherapy, or leveraging the body's own immune system to target tumor cells.

(Design Cells/iStock)

One of the most advanced techniques is known as CAR T-cell therapy, in which a patient's T-cells - part of their immune system - are collected from their blood, modified and reinfused into the body.

A wave of drugs has hit the market since the mid-2010s for more and more types of cancer, including melanomas, lymphomas, leukemias, and lung cancers - heralding what some oncologists hope could be a golden era.

For William Cance, scientific director of the American Cancer Society, the next decade could bring new immunotherapies that are "better and cheaper" than what we have now.

The decade began with a major new addition to the human family tree: Denisovans, named after the Denisova Cave in the Altai Mountains of Siberia.

Scientists sequenced DNA from a juvenile female's finger bone in 2010, finding it was genetically distinct from both modern humans and Neanderthals - our most famous ancient cousins, who lived alongside us until around 40,000 years ago.

The mysterious hominin species is thought to have ranged from Siberia to Indonesia, but the only remains have been found in the Altai region and Tibet.

We also learned that, contrary to earlier assumptions, Homo sapiens interbred extensively with Neanderthals - and that our relatives were not brutish simpletons but were capable of creating artworks, such as the handprints in a Spanish cave that were attributed to them in 2018.

They also wore jewelry, and buried their dead with flowers - just like we do.

Next came Homo naledi, remains of which were discovered in South Africa in 2015, while this year, paleontologists classified yet another species found in the Philippines: a small-sized hominin called Homo luzonensis.

Advances in DNA testing have led to a revolution in our ability to sequence genetic material tens of thousands of years old, helping unravel ancient migrations, like that of the Bronze Age herders who left the steppes 5,000 years ago, spreading Indo-European languages to Europe and Asia.

"This discovery has led to a revolution in our ability to study human evolution and how we came to be in a way never possible before," said Vagheesh Narasimhan, a geneticist at Harvard Medical School.

One exciting new avenue for the next decade is paleoproteomics, which analyzes ancient proteins and so allows scientists to study bones millions of years old - far beyond the reach of DNA.

"Using this technique, it will be possible to sort out many fossils whose evolutionary position is unclear," said Aida Gomez-Robles, an anthropologist at University College London.

"Neo" skull of Homo naledi from the Lesedi Chamber. (John Hawks/University of the Witwatersrand)

Machine learning - what we most commonly mean when talking about "artificial intelligence" - came into its own in the 2010s.

Using statistics to identify patterns in vast datasets, machine learning today powers everything from voice assistants to recommendations on Netflix and Facebook.

So-called "deep learning" takes this process even further and begins to mimic some of the complexity of a human brain.

It is the technology behind some of the most eye-catching breakthroughs of the decade: from Google's AlphaGo, which beat the world champion of the fiendishly difficult game Go in 2017, to the advent of real-time voice translations and advanced facial recognition on Facebook.

In 2016, for example, Google Translate - launched a decade earlier - transformed from a service that provided results that were stilted at best, nonsensical at worst, to one that offered translations that were far more natural and accurate.

At times, the results even seemed polished.

"Certainly the biggest breakthrough in the 2010s was deep learning - the discovery that artificial neural networks could be scaled up to many real-world tasks," said Henry Kautz, a computer science professor at the University of Rochester.

"In applied research, I think AI has the potential to power new methods for scientific discovery," from enhancing the strength of materials to discovering new drugs and even making breakthroughs in physics, Kautz said.

For Max Jaderberg, a research scientist at DeepMind, owned by Google's parent company Alphabet, the next big leap will come via "algorithms that can learn to discover information, and rapidly adapt and internalize and act on this new knowledge," as opposed to depending on humans to feed them the correct data.

That could eventually pave the way to "artificial general intelligence", or a machine capable of performing any task humans can, rather than excelling at a single function.

Agence France-Presse

Read more here:

These 6 Incredible Discoveries From The Past Decade Have Changed Science Forever - ScienceAlert

AI has bested chess and Go, but it struggles to find a diamond in Minecraft – The Verge

Whether we're learning to cook an omelet or drive a car, the path to mastering new skills often begins by watching others. But can artificial intelligence learn the same way? A new challenge teaching AI agents to play Minecraft suggests it's much trickier for computers.

Announced earlier this year, the MineRL competition asked teams of researchers to create AI bots that could successfully mine a diamond in Minecraft. This isn't an impossible task, but it does require a mastery of the game's basics. Players need to know how to cut down trees, craft pickaxes, and explore underground caves while dodging monsters and lava. These are the sorts of skills that most adults could pick up after a few hours of experimentation or learn much faster by watching tutorials on YouTube.

But of the 660 entries in the MineRL competition, none were able to complete the challenge, according to results that will be announced at the AI conference NeurIPS and that were first reported by BBC News. Although bots were able to learn intermediary steps, like constructing a furnace to make durable pickaxes, none successfully found a diamond.

"The task we posed is very hard," Katja Hofmann, a principal researcher at Microsoft Research, which helped organize the challenge, told BBC News. "While no submitted agent has fully solved the task, they have made a lot of progress and learned to make many of the tools needed along the way."

This may be a surprise, especially when you think that AI has managed to best humans at games like chess, Go, and Dota 2. But it reflects important limitations of the technology, as well as restrictions put in place by MineRL's judges to really challenge the teams.

The bots in MineRL had to learn using a combination of methods known as imitation learning and reinforcement learning. In imitation learning, agents are shown data of the task ahead of them, and they try to imitate it. In reinforcement learning, they're simply dumped into a virtual world and left to work things out for themselves using trial and error.
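
As a deliberately tiny sketch of the difference - a made-up five-square corridor in Python standing in for the game world, not anything from MineRL - the first agent below copies an expert's demonstrations, while the second learns the same behavior from reward alone:

```python
import random

# Made-up stand-in world: an agent at position 0..4 must reach position 4.
# Actions: 0 = step left, 1 = step right. Reward is paid only at the goal.
GOAL, ACTIONS = 4, (0, 1)

def step(pos, action):
    pos = max(0, min(GOAL, pos + (1 if action == 1 else -1)))
    return pos, (1.0 if pos == GOAL else 0.0)

# Imitation learning: copy the action an "expert" demonstrated in each state.
demos = [(pos, 1) for pos in range(GOAL)]  # the expert always moved right
imitation = {p: max(ACTIONS, key=lambda a: sum(1 for s, d in demos if (s, d) == (p, a)))
             for p in range(GOAL)}

# Reinforcement learning: tabular Q-learning from trial and error alone.
Q = {(p, a): 0.0 for p in range(GOAL + 1) for a in ACTIONS}
for episode in range(200):
    pos = 0
    while pos != GOAL:
        if random.random() < 0.2:                     # explore sometimes
            a = random.choice(ACTIONS)
        else:                                         # otherwise act greedily
            a = max(ACTIONS, key=lambda x: Q[(pos, x)])
        nxt, r = step(pos, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(pos, a)] += 0.5 * (r + 0.9 * best_next - Q[(pos, a)])
        pos = nxt

print("imitation policy:", imitation)
print("Q-learned policy:", {p: max(ACTIONS, key=lambda a: Q[(p, a)]) for p in range(GOAL)})
```

Both end up always walking right; the difference is that the imitator needed a demonstrator to copy, while the Q-learner needed a couple of hundred episodes of trial and error.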

Often, AI is only able to take on big challenges by combining these two methods. The famous AlphaGo system, for example, first learned to play Go by being fed data of old games. It then honed its skills and surpassed all humans by playing itself over and over.

The MineRL bots took a similar approach, but the resources available to them were comparatively limited. While AI agents like AlphaGo are created with huge datasets, powerful computer hardware, and the equivalent of decades of training time, the MineRL bots had to make do with just 1,000 hours of recorded gameplay to learn from, a single Nvidia graphics processor to train with, and just four days to get up to speed.

It's the difference between the resources available to an MLB team - coaches, nutritionists, the finest equipment money can buy - and what a Little League squad has to make do with.

It may seem unfair to hamstring the MineRL bots in this way, but these constraints reflect the challenges of integrating AI into the real world. While bots like AlphaGo certainly push the boundary of what AI can achieve, very few companies and research labs can match the resources of Google-owned DeepMind.

The competition's lead organizer, Carnegie Mellon University PhD student William Guss, told BBC News that the challenge was meant to show that not every AI problem should be solved by throwing computing power at it. This mindset, said Guss, "works directly against democratizing access to these reinforcement learning systems, and leaves the ability to train agents in complex environments to corporations with swathes of compute."

So while AI may be struggling in Minecraft now, when it cracks this challenge, it'll hopefully deliver benefits to a wider audience. Just don't think about those poor Minecraft YouTubers who might be out of a job.

Read more from the original source:

AI has bested chess and Go, but it struggles to find a diamond in Minecraft - The Verge

MuZero figures out chess, rules and all – Chessbase News

12/12/2019 - Just imagine you had a chess computer - the auto-sensor kind. Would someone who had no knowledge of the game be able to work it out, just by moving pieces? Or imagine you are a very powerful computer. By looking at millions of images of chess games, would you be able to figure out the rules and learn to play the game proficiently? The answer is yes, because that has just been done by Google's DeepMind team - for chess, shogi, Go, and 57 Atari games. It is interesting, and slightly disturbing. | Graphic: DeepMind

In 1980, the first chess computer with an auto response board, the Chafitz ARB Sargon 2.5, was released. It was programmed by Dan and Kathe Spracklen and had a sensory board and magnetic pieces. The magnets embedded in the pieces were all of the same kind, so the board could only detect whether or not there was a piece on a square. It would signal its moves with LEDs located at the corner of each square.

Chafitz ARB Sargon 2.5 | Photo: My Chess Computers

Some years after the release of this computer I visited the Spracklens in their home in San Diego, and one evening had an interesting discussion, especially with Kathe. What would happen, we wondered, if we set up a Sargon 2.5 in a jungle village where nobody knew chess? If we left the people alone with the permanently switched-on board and pieces, would they be able to figure out the game? If they lifted a piece, the LED on that square would light up; if they put it on another square, that LED would light up briefly. If the move was legal, there would be a reassuring beep; the square of a piece of the opposite colour would light up, and if they picked up that piece, another LED would light up. If the original move wasn't legal, the board would make an unpleasant sound.

Our question was: could they figure out, by trial and error, how chess was played? Kathe and I discussed it at length, over the Sargon board, and in the end came to the conclusion that it was impossible - they could never figure out the game without human instruction. Chess is far too complex.

Now, three decades later, I have to modify our conclusion somewhat: maybe humans indeed cannot learn chess by pure trial and error, but computers can...

You remember how AlphaGo and AlphaZero were created by Google's DeepMind division. The programs Leela and Fat Fritz were generated using the same principle: tell an AI program the rules of the game - how the pieces move - and then let it play millions of games against itself. The program draws its own conclusions about the game and starts to play master-level chess. In fact, it can be argued that these programs are the strongest entities ever to have played chess - human or computer.

Now DeepMind has come up with a fairly atrocious (but scientifically fascinating) idea: instead of telling the AI software the rules of the game, just let it play, using trial and error. Let it teach itself the rules of the game, and in the process learn to play it professionally. DeepMind combined a tree-based search (where a tree is a data structure used for locating information from within a set) with a learning model. They called the project MuZero. The program must predict the quantities most relevant to game planning - not just for chess, but also for shogi, Go, and 57 different Atari games. The result: MuZero, we are told, matches the performance of AlphaZero in Go, chess, and shogi.

And this is how MuZero works (description from VentureBeat):

"Fundamentally, MuZero receives observations - images of a Go board or Atari screen - and transforms them into a hidden state. This hidden state is updated iteratively by a process that receives the previous state and a hypothetical next action, and at every step the model predicts the policy (e.g., the move to play), value function (e.g., the predicted winner), and immediate reward (e.g., the points scored by playing a move)."
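
That description maps onto three learned functions - representation, dynamics, and prediction. The Python sketch below wires up untrained, randomly initialized stand-ins (every size and weight here is an arbitrary assumption, not DeepMind's architecture) purely to make the data flow concrete; the real MuZero trains deep networks for each role and runs a tree search over them rather than a fixed action sequence:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary stand-in sizes: a 64-number "observation", a 16-number hidden
# state, and 4 possible actions. Real MuZero uses deep networks throughout.
OBS, H, A = 64, 16, 4

W_repr = rng.normal(scale=0.1, size=(OBS, H))   # representation: observation -> hidden state
W_dyn = rng.normal(scale=0.1, size=(H + A, H))  # dynamics: (state, action) -> next state...
w_rew = rng.normal(scale=0.1, size=H + A)       # ...plus the immediate reward
W_pol = rng.normal(scale=0.1, size=(H, A))      # prediction: state -> policy over moves
w_val = rng.normal(scale=0.1, size=H)           # prediction: state -> value (predicted winner)

def represent(observation):
    return np.tanh(observation @ W_repr)

def dynamics(state, action):
    x = np.concatenate([state, np.eye(A)[action]])  # state plus one-hot action
    return np.tanh(x @ W_dyn), float(x @ w_rew)

def predict(state):
    logits = state @ W_pol
    policy = np.exp(logits) / np.exp(logits).sum()  # softmax over moves
    return policy, float(state @ w_val)

# Encode the observation once, then "plan" entirely inside the learned model:
# the hidden state is updated iteratively from hypothetical next actions.
state = represent(rng.normal(size=OBS))             # e.g. a flattened board image
for action in (2, 0, 1):                            # a hypothetical action sequence
    policy, value = predict(state)
    state, reward = dynamics(state, action)
    print(f"action={action} reward={reward:+.3f} value={value:+.3f}")
```

Note that after the single represent() call on the real observation, everything happens inside the learned model - the rules of the game are never consulted, which is exactly the point.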

Evaluation of MuZero throughout training in chess, shogi, Go, and Atari - the y-axis shows Elo rating | Image: DeepMind

As the DeepMind researchers explain, one form of reinforcement learning - the technique in which rewards drive an AI agent toward goals - involves models. This form models a given environment as an intermediate step, using a state transition model that predicts the next step and a reward model that anticipates the reward. If you are interested in this subject, you can read the article on VentureBeat, or visit the DeepMind site. There you can read this paper on the general reinforcement learning algorithm that masters chess, shogi and Go through self-play. Here's the abstract:

The game of chess is the longest-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. By contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go by reinforcement learning from self-play. In this paper, we generalize this approach into a single AlphaZero algorithm that can achieve superhuman performance in many challenging games. Starting from random play and given no domain knowledge except the game rules, AlphaZero convincingly defeated a world champion program in the games of chess and shogi (Japanese chess), as well as Go.

That refers to the original AlphaZero development, which has now been extended to MuZero. It turns out that it is possible not just to become highly proficient at a game by playing it a million times against yourself, but in fact to work out the rules of the game by trial and error.

I have just now learned about this development and need to think about the consequences - and discuss it with experts. My first, somewhat flippant, reaction to a member of the DeepMind team: "What next? Show it a single chess piece and it figures out the whole game?"

Link:

MuZero figures out chess, rules and all - Chessbase News