Archive for the ‘Alphazero’ Category

Top 5 stories of the week: DeepMind and OpenAI advancements, Intel's plan for GPUs, Microsoft's zero-day flaws – VentureBeat


This week, the Google-owned AI lab DeepMind unveiled its first AI capable of creating its own algorithms to speed up matrix multiplication. Though it's taught in high school math, matrix multiplication is fundamental to computational tasks and remains a core operation in neural networks.
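For readers unfamiliar with the operation, the schoolbook algorithm that faster methods (such as the ones AlphaTensor discovers) try to beat can be sketched in a few lines. This is an illustrative sketch only, not DeepMind's code; the `matmul` helper is hypothetical.

```python
# Naive ("schoolbook") matrix multiplication: for two n x n matrices it
# performs n**3 scalar multiplications. Discovered algorithms trade some
# of those multiplications for extra additions, which is cheaper at scale.
def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A), "inner dimensions must match"
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            for j in range(p):
                C[i][j] += A[i][k] * B[k][j]  # one scalar multiply per step
    return C

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Each entry of the result is a dot product of a row of A with a column of B, which is why the multiplication count grows cubically with matrix size.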

In the same vein, OpenAI this week announced the release of Whisper, its open-source deep learning model for speech recognition. The company claims the technology already shows promising results transcribing audio in several languages.

Joining the innovation sprint this week, Intel detailed a plan to make developers' lives a bit easier, with the goal of making it possible to build an application once and run it on any operating system. Historically, this was a goal of the Java programming language, but even today the process is not uniform across the computing landscape, something Intel hopes to change.

On the security front, enterprise leaders had several new announcements to take note of this week, including the exploitation of zero-day flaws in Microsoft's Exchange Server. The company confirmed that a suspected state-sponsored threat actor was able to successfully exfiltrate data from fewer than 10 organizations using its staple platform.


While it's no secret that attacks like these continue to expand in both volume and intensity, the methods for preventing them are also evolving. Vulnerability management provider Tenable has evolved too: this week, the company announced it's shifting its focus from vulnerability management to attack surface management, and released a new tool for enterprises built around that focus.

Here's more from our top five tech stories of the week:

AlphaTensor, according to a DeepMind blog post, builds upon AlphaZero, an agent that has shown superhuman performance on board games like chess and Go. This new work takes the AlphaZero journey further, moving from playing games to tackling unsolved mathematical problems.

This research delves into how AI could be used to improve computer science itself.

The ability to build once and run anywhere, however, is not uniform across the computing landscape in 2022. It's a situation that Intel is looking to help change, at least when it comes to accelerated computing and the use of GPUs.

Intel is contributing heavily to the open-source SYCL specification (SYCL is pronounced like sickle) that aims to do for GPU and accelerated computing what Java did decades ago for application development.

Exposure management gives security teams a broader view of the attack surface, offering the ability to conduct attack path analysis from externally identified points to internal assets. It also allows organizations to create a centralized inventory of all IT, cloud, Active Directory and web assets.

While information is limited, Microsoft has confirmed in a blog post that these exploits have been used by a suspected state-sponsored threat actor to target fewer than 10 organizations and successfully exfiltrate data.

Developers and researchers who have experimented with Whisper are also impressed with what the model can do. However, what is perhaps equally important is what Whisper's release tells us about the shifting culture in artificial intelligence (AI) research and the kind of applications we can expect in the future.



Taxing times (open thread) – The Poll Bludger

A new poll finds respondents nearly twice as likely to support than oppose repealing stage three tax cuts.

The Australia Institute has a poll out which offers the interesting finding that 41% favour the repeal of the stage three tax cuts, with only 22% opposed and the remainder unsure. Forty-six per cent understood the cuts to most favour high income earners, compared with 18% for middle income earners and 8% for low income earners. Asked to choose between "adapting economic policy to suit the changing circumstances, even if that means breaking an election promise" and "keeping an election promise regardless of how economic circumstances have changed", 61% favoured the former and 27% the latter. The poll was conducted September 6 to 9 from a sample of 1409.

The Guardian reports on the fortnightly poll from Essential Research, which continues to hold off from voting intention and does not include leadership ratings on this occasion, and is mostly devoted to questions of incidental political relevance concerning the Optus security breach. Fifty-one per cent would support stronger curbs on information collected by private companies, and 47% expressed concern about governments collecting their personal information. The full report should be along later today.

UPDATE: Full Essential Research report here.

William Bowe is a Perth-based election analyst and occasional teacher of political science. His blog, The Poll Bludger, has existed in one form or another since 2004, and is one of the most heavily trafficked websites on Australian politics.


AlphaGo Zero Explained In One Diagram | by David Foster – Medium

The AlphaGo Zero Cheat Sheet (high-res link below)

Download the AlphaGo Zero cheat sheet

Recently Google DeepMind announced AlphaGo Zero, an extraordinary achievement that has shown how it is possible to train an agent to a superhuman level in the highly complex and challenging domain of Go, tabula rasa, that is, from a blank slate, with no human expert play used as training data.

It thrashed the previous incarnation 100-0, using only 4 TPUs instead of 48 TPUs and a single neural network instead of two.

The paper that the cheat sheet is based on was published in Nature and is available here. I highly recommend you read it, as it explains in detail how deep learning and Monte Carlo Tree Search are combined to produce a powerful reinforcement learning algorithm.

Hopefully you find the AlphaGo Zero cheat sheet useful; let me know if you find any typos or have questions about anything in the document.

If you would like to learn more about how our company, Applied Data Science, develops innovative data science solutions for businesses, feel free to get in touch through our website or directly through LinkedIn.

and if you like this, feel free to leave a few hearty claps 🙂

Applied Data Science is a London-based consultancy that implements end-to-end data science solutions for businesses, delivering measurable value. If you're looking to do more with your data, let's talk.


A chess scandal brings fresh attention to computers' role in the game – The Record by Recorded Future

When the world's top-rated chess player, Magnus Carlsen, lost in the third round of the Sinquefield Cup earlier this month, it rocked the elite chess world.

The tournament was held in St. Louis, and Carlsen, one of the biggest names in chess since Bobby Fischer, faced 19-year-old Hans Niemann, a confident, shaggy-haired American. Over the course of 57 moves, Niemann whittled his Norwegian opponent down to just his king and a bishop, before the five-time world champion resigned the match.

But what followed was even more shocking: Carlsen quit the whole tournament, then released a statement this week outright accusing Niemann of cheating. "I had the impression that he wasn't tense or even fully concentrating on the game in critical positions, while outplaying me as black in a way I think only a handful of players can do," he wrote.

Neither Carlsen nor Niemann's critics have brought forth actual evidence of cheating, though Niemann did admit he had cheated at online chess in the past. Tongues started wagging soon after: some chess players and commentators accused Niemann of stealing Carlsen's opening moves, or of getting outside help.

Others accused him of using a chess engine, a computer program built not just to beat humans at chess, but to destroy them.

"I wouldn't quite say that it's like a car driving, you know, compared to a person running, but it's not that far off," former world champion Susan Polgar said of chess engines.

The world's most famous chess engine is Stockfish, a free, open-source program that helps train the masses. It analyzes games, then generates the strongest possible moves. And there are dozens of other engines, with all sorts of names: Houdini, Leela Chess Zero, AlphaZero. (Carlsen even has a chess engine, called Sesse, modeled after his own game.)

How a player could use an engine to cheat online is obvious: open the chess match in one tab while plugging your opponent's moves into Stockfish on the side.

But Niemann and Carlsen played in person, sitting across from each other. Is it even possible to cheat that way? On this week's episode of the Click Here podcast, Polgar explained that it's not unprecedented.

"It sadly does happen from time to time," Polgar said. "And the most famous case was at the Chess Olympiad in 2010, when the French team colluded."

The 2010 Chess Olympiad took place in Russia. Months after the tournament, it came out that three French teammates had devised an elaborate system to cheat at in-person chess. Polgar was there, none the wiser.

"It obviously requires multiple people," Polgar said.

The first teammate was remote, watching the tournament live stream and typing each of the opponents' moves into a free, open-source chess engine called Firebird. He'd then text the second teammate, who was at the match, with the suggested moves.

The third teammate, the actual player, watched for his teammates' predetermined signals. They worked out a way to communicate not with obvious hand signs or facial cues, but by where in the room the second man was standing.

Polgar said she was obviously shocked and disappointed when news of the 2010 cheating broke. But this time around, the accusations against Niemann have yet to convince her. She analyzed the Sinquefield Cup match, and based on the technical moves of the game itself, "I cannot say, or even suspect, cheating." (After a TSA-style security check in the following match, tournament organizers found no evidence Niemann cheated; he would eventually finish sixth in the tournament.)

The 2010 Olympiad was a three-man operation. But this August, a month before the Sinquefield Cup, a British computer programmer laid out an elaborate scheme to cheat at in-person chess solo.

"I definitely wouldn't call myself a good chess player," said James Stanley, who published the guide on his blog, Incoherency.

He started by loading the chess engine Stockfish onto a tiny computer, which he could fit in his pocket.

"Connected to the computer are some cables that run down my trouser legs," he told The Record. "So there's a hole in the inside of my cargo pocket. The cables run through the hole, down the trouser legs, into these 3D-printed inserts that go in my shoes."

Those inserts have buttons for his toes, buttons that allow him to tap the opponent's chess moves, Morse code-style, and send them to the Stockfish-loaded computer in his pocket.

"Stockfish would work out what response it wants to play, and the computer would then send the vibrations to my feet down the cables," Stanley said.

He interprets the vibrations, plays the suggested move, "and then we just repeat every turn."
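Stanley's exact encoding isn't described in the article, so here is a hypothetical Python sketch of the input side of such a scheme: turning a move in coordinate notation into Morse-style tap sequences. The `MORSE` table and `encode_move` function are assumptions for illustration, not Stanley's code.

```python
# Hypothetical sketch: encode a chess move like "e2e4" (file a-h, rank 1-8)
# as Morse-style dot/dash tap sequences, one sequence per character.
MORSE = {
    "a": ".-", "b": "-...", "c": "-.-.", "d": "-..", "e": ".",
    "f": "..-.", "g": "--.", "h": "....",
    "1": ".----", "2": "..---", "3": "...--", "4": "....-",
    "5": ".....", "6": "-....", "7": "--...", "8": "---..",
}

def encode_move(move):
    """Turn a coordinate-notation move string into per-character tap patterns."""
    return [MORSE[ch] for ch in move.lower()]

print(encode_move("e2e4"))  # ['.', '..---', '.', '....-']
```

Only 16 symbols (8 files, 8 ranks) are needed, which is why a two-button, toe-tap interface like the one Stanley describes is enough to spell out any move.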

Stanley, a former cybersecurity professional, calls his invention Sockfish. His friend, whom he played against in a pub, was none the wiser.

"I told him I was planning to use the shoes to find a player who's plausibly good enough to win the world championship, have him use the shoes, win the world championship, win the money, but as a joke, obviously," Stanley said. "So it's quite funny to me that there's now a massive controversy at the Sinquefield Cup where someone is accused of having cheated."

That massive controversy has not died down. Carlsen and Niemann played each other again last week, albeit virtually. In the Julius Baer Generation Cup, an online tournament, Carlsen made just one move before shutting off his camera and resigning the match. He ultimately won the tournament.

"Unfortunately, at this time I am limited in what I can say without explicit permission from Niemann to speak openly," he wrote in a statement this week. "So far I have only been able to speak with my actions, and those actions have stated clearly that I am not willing to play chess with Niemann. I hope that the truth on this matter comes out, whatever it may be."

Listen to this story and others like it on Click Here.

Will Jarvis is a producer for the Click Here podcast. Before joining Click Here and The Record, he produced podcasts and worked on national news magazines at National Public Radio, including Weekend Edition, All Things Considered, The National Conversation and Pop Culture Happy Hour. His work has also been published in The Chronicle of Higher Education, Ad Age and ESPN.


Meta AI Boss: current AI methods will never lead to true intelligence – Gizchina.com

Meta is one of the leading companies in AI development globally, yet it appears to lack confidence in current AI methods. According to Yann LeCun, chief AI scientist at Meta, reaching true intelligence will require something more: he claims that most current AI methods will never lead to it, and he is skeptical of many of today's most successful deep learning approaches.

The Turing Award winner said that the pursuits of his peers are necessary, but not sufficient. These include research on large language models such as the Transformer-based GPT-3. As LeCun describes it, Transformer proponents believe: "We tokenize everything and train giant models to make discrete predictions, and somehow AI will emerge from this."

"They're not wrong. In that sense, this could be an important part of future intelligent systems, but I think it's missing necessary parts," explained LeCun, who pioneered the use of convolutional neural networks, which have been incredibly productive in deep learning projects.

LeCun also sees flaws and limitations in many other highly successful areas of the discipline. Reinforcement learning is not enough either, he insists. Researchers like DeepMind's David Silver, who developed the AlphaZero program that mastered chess and Go, have focused on very action-oriented programs, LeCun observed, whereas he claims that most of our learning is done not by taking actions but by observation.

LeCun, 62, has a strong sense of urgency about confronting the dead ends he believes many may be heading toward, and he will try to steer his field in the direction he thinks it should go. "We've seen a lot of claims about what we should be doing to push AI to human-level intelligence. I think some of those ideas are wrong," LeCun said. "Our intelligent machines aren't even at the level of cat intelligence. So why don't we start there?"

LeCun believes that not only academia but also the AI industry needs profound reflection. Self-driving car groups, such as startups like Wayve, "think they can learn just about anything by throwing data at large neural networks," which seems a little too optimistic, he said.

"You know, I think it's entirely possible for us to have Level 5 autonomous vehicles without common sense, but you have to work on the design," LeCun said. He believes that such over-engineered self-driving technology will, like all the computer vision programs made obsolete by deep learning, become fragile. "At the end of the day, there will be a more satisfying and possibly better solution that involves systems that better understand how the world works," he said.

LeCun hopes to prompt a rethinking of the fundamental concepts of AI, saying: "You have to take a step back and say, 'Okay, we built the ladder, but we want to go to the moon, and this ladder can't possibly get us there.' I would say it's like making a rocket: I can't tell you the details of how we make a rocket, but I can give the basics."

According to LeCun, AI systems need to be able to reason, and the process he advocates is minimizing certain underlying variables, which enables a system to plan and reason. He further argues that the probabilistic framework should be abandoned, because it is difficult to apply when capturing dependencies between high-dimensional continuous variables. LeCun also advocates forgoing generative models; otherwise, a system will have to devote too many resources to predicting things that are hard to predict, and will ultimately consume too much.

In a recent interview with business technology outlet ZDNet, LeCun revealed some of the thinking behind a paper he wrote exploring the future of AI, in which he discloses his research direction for the next ten years. Currently, Transformer advocates, pointing to models like GPT-3, believe that as long as everything is tokenized and huge models are trained to make discrete predictions, AI will somehow emerge. But he believes this is only one of the components of future intelligent systems, not the key part.

And even reinforcement learning can't solve the above problem, he explained: although such systems are good chess players, they are still only programs that focus on actions. LeCun also adds that many people claim to be advancing AI in some way, but these ideas mislead us. He further believes that the common sense of current intelligent machines is not even as good as a cat's, which in his view is the root of AI's limited development: the methods have serious flaws.

As a result, LeCun confessed that he had given up on research using generative networks to predict the next frame of a video from the current frame.

"It was a complete failure," he adds.

LeCun summed up the reasons for the failure: the probability-based models he was using limited him. At the same time, he denounced what he sees as a superstitious belief among those who hold probability theory to be the only framework for explaining machine learning; in fact, a world model built on exact probabilities is difficult to achieve. He has not yet been able to solve this underlying problem very well, but he hopes to rethink the approach, drawing on his rocket analogy.

It is worth mentioning that LeCun spoke bluntly about his critics in the interview. He specifically took a jab at Gary Marcus, a professor at New York University who, he claims, has never made any contribution to AI.
