Archive for the ‘AlphaGo’ Category

Why the buzz around DeepMind is dissipating as it transitions from games to science – CNBC

Google DeepMind head Demis Hassabis speaks during a press conference ahead of the Google DeepMind Challenge Match in Seoul on March 8, 2016.

Jung Yeon-Je | AFP | Getty Images

In 2016, DeepMind, an Alphabet-owned AI unit headquartered in London, was riding a wave of publicity thanks to AlphaGo, its computer program that took on the best player in the world at the ancient Asian board game Go and won.

Photos of DeepMind's leader, Demis Hassabis, were splashed across the front pages of newspapers and websites, and Netflix even went on to make a documentary about the five-game Go match between AlphaGo and world champion Lee Sedol. Fast-forward four years, and things have gone surprisingly quiet at DeepMind.

"DeepMind has done some of the most exciting things in AI in recent years. It would be virtually impossible for any company to sustain that level of excitement indefinitely," said William Tunstall-Pedoe, a British entrepreneur who sold his AI start-up Evi to Amazon for a reported $26 million. "I expect them to do further very exciting things."

AI pioneer Stuart Russell, a professor at the University of California, Berkeley, agreed it was inevitable that excitement around DeepMind would tail off after AlphaGo.

"Go was a recognized milestone in AI, something that some commentators said would take another 100 years," he said. "In Asia in particular, top-level Go is considered the pinnacle of human intellectual powers. It's hard to see what else DeepMind could do in the near term to match that."

DeepMind's army of 1,000-plus people, which includes hundreds of highly paid PhD graduates, continues to pump out academic paper after academic paper, but only a smattering of the work gets picked up by the mainstream media. The research lab has churned out over 1,000 papers, 13 of which have been published in Nature or Science, widely seen as the world's most prestigious academic journals. Nick Bostrom, the author of Superintelligence and the director of the University of Oxford's Future of Humanity Institute, described DeepMind's team as world-class, large, and diverse.

"Their protein folding work was super impressive," said Neil Lawrence, a professor of machine learning at the University of Cambridge, whose role is funded by DeepMind. He's referring to a competition-winning DeepMind algorithm that can predict the structure of a protein based on its genetic makeup. Understanding the structure of proteins is important as it could make it easier to understand diseases and create new drugs in the future.

The world's top human Go player, 19-year-old Ke Jie (L), competes against the AI program AlphaGo, developed by DeepMind, the artificial intelligence arm of Google's parent Alphabet. The machine won the three-game match against man in 2017, without losing a single game.

VCG | Visual China Group | Getty Images

DeepMind is keen to move away from developing relatively narrow "AI agents" that can do one thing well, such as mastering a game. Instead, the company is trying to develop more general AI systems that can do multiple things well and have real-world impact.

It's particularly keen to use its AI to drive breakthroughs in other areas of science, including healthcare, physics and climate change.

But the company's scientific work seems to be of less interest to the media. In 2016, DeepMind was mentioned in 1,842 articles, according to media tracker LexisNexis. By 2019, that number had fallen to 1,363.

One ex-DeepMinder said the buzz around the company is now more in line with what it should be. "The whole AlphaGo period was nuts," they said. "I think they've probably got another few milestones ahead, but progress should be more low key. It's a marathon, not a sprint, so to speak."

DeepMind denied that excitement surrounding the company has tailed off since AlphaGo, pointing to the fact that it has had more papers in Nature and Science in recent years.

"We have created a unique environment where ambitious AI research can flourish. Our unusually interdisciplinary approach has been core to our progress, with 13 major papers in Nature and Science including 3 so far this year," a DeepMind spokesperson said. "Our scientists and engineers have built agents that can learn to cooperate, devise new strategies to play world-class chess and Go, diagnose eye disease, generate realistic speech now used in Google products around the world, and much more."

"More recently, we've been excited to see early signs of how we could use our progress in fundamental AI research to understand the world around us in a much deeper way. Our protein folding work is our first significant milestone applying artificial intelligence to a core question in science, and this is just the start of the exciting advances we hope to see more of over the next decade, creating systems that could provide extraordinary benefits to society."

The company, which competes with Facebook AI Research and OpenAI, did a good job of building up hype around what it was doing in the early days.

Hassabis and Mustafa Suleyman, the intellectual co-founders who have been friends since school, gave inspiring speeches where they would explain how they were on a mission to "solve intelligence" and use that to solve everything else.

There was also plenty of talk of developing "artificial general intelligence" or AGI, which has been referred to as the holy grail in AI and is widely viewed as the point when machine intelligence passes human intelligence.

But the speeches have become less frequent (partly because Suleyman left DeepMind and now works for Google), and AGI doesn't get mentioned anywhere near as much as it used to.

Larry Page, left, and Sergey Brin, co-founders of Google Inc.

JB Reed | Bloomberg | Getty Images

Google co-founders Larry Page and Sergey Brin were huge proponents of DeepMind and its lofty ambitions, but they left the company last year, and it's less obvious how Google CEO Sundar Pichai feels about DeepMind and AGI.

It's also unclear how much free rein Pichai will give the company, which cost Alphabet $571 million in 2018. Just one year earlier, the company had losses of $368 million.

"As far as I know, DeepMind is still working on the AGI problem and believes it is making progress," Russell said. "I suspect the parent company (Google/Alphabet) got tired of the media turning every story about Google and AI into the Terminator scenario, complete with scary pictures."

One academic who is particularly skeptical about DeepMind's achievements is AI entrepreneur Gary Marcus, who sold a machine-learning start-up to Uber in 2016 for an undisclosed sum.

"I think they realize the gulf between what they're doing and what they aspire to do," he said. "In their early years they thought that the techniques they were using would carry us all the way to AGI. And some of us saw immediately that that wasn't going to work. It took them longer to realize but I think they've realized it now."

Marcus said he's heard that DeepMind employees refer to him as the "anti-Christ" because he has questioned how far the "deep learning" AI technique that DeepMind has focused on can go.

"There are major figures now that recognize that the current techniques are not enough," he said. "It's very different from two years ago. It's a radical shift."

He added that while DeepMind's work on games and biology had been impressive, it's had relatively little impact.

"They haven't used their stuff much in the real world," he said. "The work that they're doing requires an enormous amount of data and an enormous amount of compute, and a very stable world. The techniques that they're using are very, very data greedy and real-world problems often don't supply that level of data."

Excerpt from:
Why the buzz around DeepMind is dissipating as it transitions from games to science - CNBC

The Hardware in Microsoft's OpenAI Supercomputer Is Insane – ENGINEERING.com

Andrew Wheeler posted on June 02, 2020 | The benefit to Elon Musk's organization is not yet clear.

(Image courtesy of Microsoft.)

OpenAI, the San Francisco-based research laboratory co-founded by serial entrepreneur Elon Musk, is dedicated to ensuring that artificial general intelligence benefits all of humanity. Microsoft invested $1 billion in OpenAI in June 2019 to build a platform of unprecedented scale. Recently, Microsoft pulled back the curtain on this project to reveal that its OpenAI supercomputer is up and running. It's powered by an astonishing 285,000 CPU cores and 10,000 GPUs.

The announcement was made at Microsoft's Build 2020 developer conference. The OpenAI supercomputer is hosted on Microsoft's Azure cloud and will be used to test massive artificial intelligence (AI) models.

Many AI supercomputing research projects focus on perfecting single tasks using deep learning or deep reinforcement learning, as is the case with Google's various DeepMind projects like AlphaGo Zero. But a new wave of AI research focuses on how these supercomputers can perfect multiple tasks simultaneously. At the conference, Microsoft mentioned a few of the tasks its AI supercomputer could tackle. These include having the supercomputer examine huge datasets of code from GitHub (which Microsoft acquired in 2018 for $7.5 billion worth of stock) to generate code of its own. Another multitasking AI function could be the moderation of game-streaming services, according to Microsoft.

But is OpenAI going to benefit from this development? How would these services use Microsoft's OpenAI supercomputer?

Users of Microsoft Teams benefit from real-time captioning via Microsoft's development of Turing models for natural language processing and generation, so maybe OpenAI will pursue more natural language processing projects. But the answer is unknown at this point.

(Video courtesy of Microsoft.)

Bottom Line

Large-scale AI implementations from powerful, ultra-wealthy tech giants like Microsoft, with access to tremendous datasets (the key ingredient for advanced AI beyond powerful software), could lead to the development of an AI programmer trained on the vast repositories of code on GitHub.

Microsoft's Turing models for natural language processing use over 17 billion parameters for deciphering language. The number of CPUs and GPUs in Microsoft's AI supercomputer is almost as staggering as the potential applications the company could create with access to such vast computing power. On that note, Microsoft announced that its Turing models for natural language generation will become open source for developers to use in the near future, but no exact date has been given.

Read the original post:
The Hardware in Microsoft's OpenAI Supercomputer Is Insane - ENGINEERING.com

Latest Tech News This Week: Zoom Hit With Security Woes, Cyber Attacks on Healthcare Ramp Up | Weekly Rundown – Toolbox

Here Are This Week's Top Stories:
1. Collaboration: Zoom's Stock Is Skyrocketing But Is It Secure?
2. Security: Healthcare Hit By COVID-19 Cyber Attack
3. IT Strategy: Washington Signs Facial Tech Into Law

Zoom's Stock Is Skyrocketing But Is It Secure?

Even as Zoom's active user count scales every day, with U.S. volumes touching 4.84 million on Monday, concerns around the solution's security credentials have risen significantly. Elon Musk-founded SpaceX shunned the videoconferencing app, citing significant privacy and security concerns. New York's Attorney General wrote to Zoom about its ability to secure massive workloads.

Big Picture: While Zoom's stock is soaring amid a global meltdown, the lack of end-to-end encryption will impact user growth. "It is not possible to enable E2E encryption for Zoom video meetings," a Zoom spokesperson reportedly told The Intercept. Even though Zoom secures audio and video meetings using TCP and UDP connections, it can access the unencrypted video and audio content of meetings. Another downside: Zoom sells user data to advertisers for targeted marketing.

Our Take: Considering that NASA has prohibited its employees from using Zoom and the FBI has observed instances of people invading school sessions on the service, organizations need to prevent employees from sharing links to team meetings publicly. Alternatively, organizations can also try other video conferencing services that boast better security features.

The coronavirus epidemic is weighing heavily on the security sector, with a record spike in COVID-19-themed cyberattacks. In fact, the healthcare industry on the frontlines of the epidemic is facing a record surge of cyber attacks. As per reports, hackers targeted U.K.-based Hammersmith Medicines Research, the test center preparing to perform medical trials on prospective COVID-19 vaccines. The test center was hit by a cyber attack on March 14, when hackers attempted to breach the system. Reports indicate some data was stolen and posted online for ransom. Additionally, security researchers from Nokia's Threat Intelligence Lab uncovered a powerful malware that infects Windows computers, disguised as a "coronavirus map" application from Johns Hopkins University.

Big Picture: The coronavirus epidemic has become the new attack vector for cyber criminals, who have jumped on the opportunity. The coronavirus map app is one such malicious app, secretly stealing credit card numbers, browser history, cookies, usernames and passwords from the browser's cache without users noticing.

Our Take: Cybercriminals are exploiting global concerns around COVID-19, targeting people and organizations on the front lines of the pandemic. The record increase in hacking attempts has prompted cybersecurity professionals to step up to the plate and form a response group called Cyber Volunteers 19.

Both individuals and corporations need to put up more guardrails against these cyber threats and ensure appropriate security frameworks and policies are in place to keep threat actors at bay.

On Tuesday, the Washington state legislature passed a bill into law to regulate the use of facial recognition by government agencies. As per the new law, facial recognition technologies need to be regularly tested for fairness and accuracy and can only be used under warrant. However, another bill to regulate the commercial use of facial recognition was tabled but not passed.

Big Picture: Microsoft, the Washington-based tech giant that has been lobbying for regulations around the use of facial recognition tech, welcomed the move. Microsoft President Brad Smith hailed the new law as a significant breakthrough and an early and important model for serving the public interest without infringing on people's fundamental rights.

Our Take: Facial recognition offers many benefits but also poses a serious threat to privacy and security. Ethical use of such technologies should be enforced through legislation and should apply to both public and private entities. The scope and purposes of facial recognition tech should also be reviewed regularly to prevent misuse.

AlphaGo Developer Nabs ACM Prize!

What did you think of this week's tech news roundup? Let us know on Twitter, Facebook, and LinkedIn. We'd love to hear from you!

Read the original:
Latest Tech News This Week: Zoom Hit With Security Woes, Cyber Attacks on Healthcare Ramp Up | Weekly Rundown - Toolbox

Quant Investing: Welcome to the Revolution – Investment U

Investment Opportunities

By Nicholas Vardy

Originally posted April 2, 2020 on Liberty Through Wealth

Editor's Note: We know things are changing rapidly as the number of COVID-19 cases increases and Mr. Market reacts. Our strategists are here for you to keep you up to date with all the information that you need to make smart investment choices. Take a look at Nicholas Vardy's latest video update here: How to Manage Financial Risks During Pandemic.

Christina Grieves, Senior Managing Editor

Machines are taking over Wall Street.

Today, the biggest quant investing firms, like Renaissance Technologies, Two Sigma Investments and D.E. Shaw, manage tens of billions of dollars.

In total, quant-focused hedge funds manage almost $1 trillion in assets.

The rise of quant investing has Wall Street's army of human financial analysts rightfully worried about their jobs.

Picture a room full of financial analysts spending their days (and nights) sifting through company balance sheets, income statements, news stories and regulatory filings, all to unearth an as-yet-undiscovered investment opportunity.

Compare that image with lightning-fast computers sifting through millions of patent filings, academic journal articles and social media posts every single day.

We humans don't have a prayer.

But thanks to the democratization of computing power, the rise of quant investing is terrific news for you, the small investor.

When I started my investment career in the 1990s, quant investing was about identifying momentum in stocks, riding trending prices like a surfer rides a wave.
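
For readers who have never seen one, here is a minimal sketch of what such a momentum system boils down to: rank stocks by trailing return and favor the recent winners. The tickers and prices are invented; no data feed or broker API is assumed.

```python
# Hypothetical momentum ranking -- illustrative prices, not investment advice.
import numpy as np

def momentum(prices: np.ndarray, lookback: int = 60) -> float:
    """Trailing return over the lookback window; positive means an uptrend."""
    return prices[-1] / prices[-lookback] - 1.0

prices = {
    "AAA": np.linspace(100, 130, 120),  # steady uptrend
    "BBB": np.linspace(100, 90, 120),   # steady downtrend
}
ranked = sorted(prices, key=lambda t: momentum(prices[t]), reverse=True)
print(ranked)  # a trend-follower would buy from the top of this ranking
```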

I developed my first quant-based trading system in 1994 using a now-defunct computer program named Windows on Wall Street.

Today, cutting-edge quant hedge funds use computers and algorithms unimaginable two decades ago.

This kind of trading requires the skills of astrophysics PhDs more than those of traditional financial analysts.

Over the past decade, this quant-driven approach to trading has exploded. That's partially because any edge stemming from fundamental research has all but disappeared.

It's said that in 1815, Nathan Mayer Rothschild used carrier pigeons to learn about the outcome of the Battle of Waterloo ahead of other investors. That edge made him a fortune.

George Soros attributed his early success investing in European companies in the 1960s to being a one-eyed king among the blind.

Today, financial traders have more information on their smartphones than the world's top hedge funds did 20 years ago.

Being a one-eyed king just doesnt cut it anymore.

Trading is not the only arena in which humans have lost out to machines.

The battle between man and machine had a watershed moment in 1997. That's when Garry Kasparov, the world's top-ranked chess player at the time, lost to IBM supercomputer Deep Blue.

There have been many other such moments since. In 2011, IBM's Watson beat two Jeopardy champions. In 2017, Google's AlphaGo computer defeated the world's top player in Go, humankind's most complicated board game.

In his book Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins, Kasparov concedes that human players have no chance against today's powerful computers.

The reason?

Computers follow the rules without fail. They can process vast swaths of information at the speed of light. They don't get tired. They are never off their game.

A human chess player has to screw up only once to lose a match.

The same applies to human decision making versus quant algorithms in the world of investing.

Fatigue, emotion and limited capacity to process information are all enemies to traders. In contrast, quant algorithms never tire, never get exasperated, and are immune to both a trader's and Mr. Market's mood swings.

That's why investing against machines is like playing chess against a computer.

Yes, you may beat the computer occasionally. But in the long term, it's a loser's game.

Quant investing may scare you.

It shouldnt.

As with all disruptive technologies, quant investing democratizes investing in unimaginable ways.

Twenty years ago, only the world's top hedge funds had the computer power to generate consistent market-beating returns.

Today, I have access to computer programs that can develop similar quant strategies without the need for an army of PhDs. I can harness these computers to develop a wide range of quant strategies.

These strategies can unearth value, growth and high-quality companies. They can focus on short-, medium- and long-term trading strategies. They can identify technical factors like relative strength, momentum and reversion to the mean.
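
As a hedged illustration of one factor from that list, reversion to the mean is often expressed as a z-score of the latest price against its recent average; a large negative value flags a stock trading unusually far below trend. All numbers here are invented.

```python
# Illustrative mean-reversion signal -- not a production trading system.
import numpy as np

def zscore(prices: np.ndarray, window: int = 20) -> float:
    """How many standard deviations the last price sits from its recent mean."""
    recent = prices[-window:]
    return (prices[-1] - recent.mean()) / recent.std()

prices = np.array([100.0] * 19 + [92.0])  # a sudden drop below a flat average
print(round(zscore(prices), 2))  # strongly negative: a mean-reversion long candidate
```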

I have spent the last six months developing just such quant strategies. Specifically, I developed a short-term swing trading system.


Look for more information on my new trading service, Oxford Swing Trader, in the weeks ahead.

Good investing,

Nicholas

Stay informed with the latest news from Nicholas, including video updates where he shares his views on the current state of the markets. Simply like his Facebook page and follow @NickVardy on Twitter.

An accomplished investment advisor and widely recognized expert on quantitative investing, global investing and exchange-traded funds, Nicholas has been a regular commentator on CNN International and Fox Business Network. He has also been cited in The Wall Street Journal, Financial Times, Newsweek, Fox Business News, CBS, MarketWatch, Yahoo Finance and MSN Money Central. Nicholas holds a bachelor's and a master's from Stanford University and a J.D. from Harvard Law School. It's no wonder his groundbreaking content is published regularly in the free daily e-letter Liberty Through Wealth.

Read more here:
Quant Investing: Welcome to the Revolution - Investment U

AI on steroids: Much bigger neural nets to come with new hardware, say Bengio, Hinton, and LeCun – ZDNet

Geoffrey Hinton, center, talks about what future deep learning neural nets may look like, flanked by Yann LeCun of Facebook, left, and Yoshua Bengio of Montreal's MILA institute for AI, during a press conference at the 34th annual AAAI conference on artificial intelligence.

The rise of dedicated chips and systems for artificial intelligence will "make possible a lot of stuff that's not possible now," said Geoffrey Hinton, the University of Toronto professor who is one of the godfathers of the "deep learning" school of artificial intelligence, during a press conference on Monday.

Hinton joined his compatriots, Yann LeCun of Facebook and Yoshua Bengio of Canada's MILA institute, fellow deep learning pioneers, in an upstairs meeting room of the Hilton Hotel on the sidelines of the 34th annual conference on AI by the Association for the Advancement of Artificial Intelligence. They spoke for 45 minutes to a small group of reporters on a variety of topics, including AI ethics and what "common sense" might mean in AI. The night before, all three had presented their latest research directions.

Regarding hardware, Hinton went into an extended explanation of the technical constraints on today's neural networks. The weights of a neural network, for example, have to be reused hundreds of times, he pointed out, with frequent, temporary updates along the way. The fact that graphics processing units (GPUs) have limited memory for weights and have to constantly store and retrieve them from external DRAM is a limiting factor, he said.

Much larger on-chip memory capacity "will help with things like Transformer, for soft attention," said Hinton, referring to the wildly popular autoregressive neural network developed at Google in 2017. Transformers, which use "key/value" pairs to store and retrieve from memory, could be much larger with a chip that has substantial embedded memory, he said.
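
For readers wondering what those key/value pairs look like, here is a minimal sketch of the Transformer's scaled dot-product attention, softmax(QK^T / sqrt(d)) V, the standard formulation from the 2017 paper; the toy shapes below are arbitrary.

```python
# Scaled dot-product attention: queries soft-match keys, then mix values.
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    d_k = K.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))  # (n_queries, n_keys) soft lookup
    return weights @ V                         # weighted blend of stored values

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
print(attention(Q, K, V).shape)  # (4, 8): one retrieved value per query
```

Keeping those keys and values resident in fast on-chip memory, rather than shuttling them through DRAM, is exactly the pressure Hinton describes.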

Also: Deep learning godfathers Bengio, Hinton, and LeCun say the field can fix its flaws

LeCun and Bengio agreed, with LeCun noting that GPUs "force us to do batching," where data samples are combined in groups as they pass through a neural network, "which isn't efficient." Another problem is that GPUs assume neural networks are built out of matrix products, which forces constraints on the kind of transformations scientists can build into such networks.
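
A small sketch of what "batching" means here: instead of pushing one sample through a layer at a time, the samples are stacked so a single matrix product does the work of many, which is the access pattern GPUs are built for. Shapes are illustrative.

```python
# Batching illustrated: one big matrix product replaces many small ones.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))           # one layer's weight matrix
samples = rng.normal(size=(128, 64))    # a batch of 128 input samples

one_at_a_time = np.stack([x @ W for x in samples])  # 128 small products
batched = samples @ W                               # a single (128, 32) product
assert np.allclose(one_at_a_time, batched)          # same math, friendlier to GPUs
```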

"Also sparse computation, which isn't convenient to run on GPUs ...," said Bengio, referring to instances where most of the data, such as pixel values, may be empty, with only a few significant bits to work on.

LeCun predicted that new hardware would lead to "much bigger neural nets with sparse activations," and he and Bengio both emphasized that there is an interest in doing the same amount of work with less energy. LeCun defended AI against claims it is an energy hog, however. "This idea that AI is eating the atmosphere, it's just wrong," he said. "I mean, just compare it to something like raising cows," he continued. "The energy consumed by Facebook annually for each Facebook user is 1,500 watt-hours," he said. Not a lot, in his view, compared with other energy-hogging technologies.

The biggest problem with hardware, mused LeCun, is that on the training side of things, it is a duopoly between Nvidia, for GPUs, and Google's Tensor Processing Unit (TPU), repeating a point he had made last year at the International Solid-State Circuits Conference.

Even more interesting than hardware for training, LeCun said, is hardware design for inference. "You now want to run on an augmented reality device, say, and you need a chip that consumes milliwatts of power and runs for an entire day on a battery." LeCun reiterated a statement made a year ago that Facebook is working on various internal hardware projects for AI, including for inference, but he declined to go into details.

Also: Facebook's Yann LeCun says 'internal activity' proceeds on AI chips

Today's neural networks are tiny, Hinton noted, with really big ones having perhaps just ten billion parameters. Progress on hardware might advance AI just by making much bigger nets with an order of magnitude more weights. "There are one trillion synapses in a cubic centimeter of the brain," he noted. "If there is such a thing as General AI, it would probably require one trillion synapses."

As for what "common sense" might look like in a machine, nobody really knows, Bengio maintained. Hinton complained that people keep moving the goalposts, such as with natural language models. "We finally did it, and then they said it's not really understanding, and can you figure out the pronoun references in the Winograd Schema Challenge," a question-answering task used as a language-understanding benchmark. "Now we are doing pretty well at that, and they want to find something else" to judge machine learning, he said. "It's like trying to argue with a religious person, there's no way you can win."

But, one reporter asked, what's concerning to the public is not so much the lack of evidence of human understanding, but evidence that machines are operating in alien ways, such as the "adversarial examples." Hinton replied that adversarial examples show the behavior of classifiers is not quite right yet. "Although we are able to classify things correctly, the networks are doing it absolutely for the wrong reasons," he said. "Adversarial examples show us that machines are doing things in ways that are different from us."

LeCun pointed out animals can also be fooled just like machines. "You can design a test so it would be right for a human, but it wouldn't work for this other creature," he mused. Hinton concurred, observing "house cats have this same limitation."

Also: LeCun, Hinton, Bengio: AI conspirators awarded prestigious Turing prize

"You have a cat lying on a staircase, and if you bounce a soccer ball down the stairs toward a care, the cat will just sort of watch the ball bounce until it hits the cat in the face."

Another thing that could prove a giant advance for AI, all three agreed, is robotics. "We are at the beginning of a revolution," said Hinton. "It's going to be a big deal" to many applications such as vision. Rather than analyzing the entire contents of a static image or video frame, a robot creates a new "model of perception," he said.

"You're going to look somewhere, and then look somewhere else, so it now becomes a sequential process that involves acts of attention," he explained.

Hinton suggested last year's work by OpenAI in manipulating a Rubik's Cube was a watershed moment for robotics, or, rather, an "AlphaGo moment," as he put it, referring to DeepMind's Go computer.

LeCun concurred, saying that Facebook is running robotics projects not because Facebook has an extreme interest in robotics per se, but because robotics is seen as an "important substrate for advances in AI research."

It wasn't all gee-whiz: the three scientists offered skepticism on some points. While most research in deep learning that matters is done out in the open, some companies boast of AI while keeping the details a secret.

"It's hidden because it's making it seem important," said Bengio, when in fact, a lot of work in the depths of companies may not be groundbreaking. "Sometimes companies make it look a lot more sophisticated than it is."

Bengio continued his role among the three of being much more outspoken on societal issues of AI, such as building ethical systems.

When LeCun was asked about the use of facial recognition algorithms, he noted technology can be used for good and bad purposes, and that a lot depends on the democratic institutions of society. But Bengio pushed back slightly, saying, "What Yann is saying is clearly true, but prominent scientists have a responsibility to speak out." LeCun mused that it's not the job of science to "decide for society," prompting Bengio to respond, "I'm not saying decide, I'm saying we should weigh in because governments in some countries are open to that involvement."

Hinton, who frequently punctuates things with a humorous aside, noted toward the end of the gathering his biggest mistake with respect to Nvidia. "I made a big mistake back in 2009 with Nvidia," he said. "In 2009, I told an audience of 1,000 grad students they should go and buy Nvidia GPUs to speed up their neural nets. I called Nvidia and said I just recommended your GPUs to 1,000 researchers, can you give me a free one, and they said, No.

"What I should have done, if I was really smart, was take all my savings and put it into Nvidia stock. The stock was at $20 then, now it's, like, $250."

View original post here:
AI on steroids: Much bigger neural nets to come with new hardware, say Bengio, Hinton, and LeCun - ZDNet