Archive for February, 2021

Quick Scripts AlphaZero

The AlphaZero.Scripts module provides a quick way to execute common tasks with a single line of code. For example, starting or resuming a training session for the connect-four example becomes as simple as executing the following command line:
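
Assuming AlphaZero.jl is installed and its project environment is active, the command looks like this:

    julia --project -e 'using AlphaZero; Scripts.train("connect-four")'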

The first argument of every script specifies which experiment to load. It can be passed as an object of type Experiment or as a string from keys(Examples.experiments).

Perform some sanity checks regarding the compliance of a game with the AlphaZero.jl Game Interface.

Launch a training session where hyperparameters are altered so that training finishes as quickly as possible.

This is useful to ensure the absence of runtime errors before a real training session is started.

Start or resume a training session.

The optional keyword arguments are passed directly to the Session constructor.

Play an interactive game against the current agent.

Use the interactive explorer to visualize the current agent.
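
In code, each of these tasks is a one-line call. The sketch below assumes the entry points test_game, dummy_run, train, play, and explore in AlphaZero.Scripts, matching the descriptions above, with "connect-four" as the experiment name; consult the package documentation for the exact signatures:

    using AlphaZero

    Scripts.test_game("connect-four")   # sanity-check Game Interface compliance
    Scripts.dummy_run("connect-four")   # fast dry run to catch runtime errors
    Scripts.train("connect-four")       # start or resume training; keyword
                                        # arguments go to the Session constructor
    Scripts.play("connect-four")        # play an interactive game against the agent
    Scripts.explore("connect-four")     # inspect the agent in the interactive explorer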

How to Kickstart an AI Venture Without Proprietary Data – Medium

AI startups have a chicken & egg problem. Here's how to solve it.

A few years ago, I learned about the billions of dollars banks lose to credit card fraud on an annual basis. Better detection or prediction of fraud would be incredibly valuable. And so I considered the possibility of convincing a bank to share its transactional data in the hope of building a better fraud detection algorithm. The catch, unsurprisingly, was that no major bank is willing to share such data. They feel they're better off hiring a team of data scientists to work on the problem internally. My startup idea died a quick death.

Despite the tremendous innovation and entrepreneurial opportunities around AI, breaking into AI can be a daunting task for entrepreneurs as they face a chicken-and-egg problem before they even begin, something existing companies are less likely to contend with. I believe specific strategies can help entrepreneurs overcome this challenge and create successful AI-driven ventures.

Today's AI systems need to be trained on large datasets, which poses a challenge for entrepreneurs. Established companies with a sizable customer base already have a stream of data with which they can train AI systems, build new products and enhance existing ones, generate additional data, and rinse and repeat (for example, Google Maps has over 1B monthly active users and over 20 petabytes of data). But for entrepreneurs, the need for data poses a chicken-and-egg problem: because their company hasn't yet been built, they don't have data, which means they can't create an AI product as easily.

Additionally, data is not only necessary to get started with AI; it is actually key to AI performance. Research has shown that while algorithms matter, data matters more. Among modern machine learning methods, the differences in performance between various algorithms are relatively small compared to the performance differences between the same algorithms trained with more or less data (Banko and Brill 2001).

There are several strategies that can help entrepreneurs navigate this chicken-and-egg problem and access the data they need to break into the AI space.

1. Start with a non-AI service that generates data

While data does need to come before an AI product, it does not need to come before all products. Entrepreneurs can begin by creating a service that is not AI-based, but that solves customer problems and generates data in the process. This data can later be used to train an AI system that enhances the existing service or creates a related service.

For example, Facebook didn't use AI in its early days, but it still provided a social networking platform that customers wanted to join. In the process, Facebook generated a large amount of data, which was in turn used to train AI systems that helped personalize the news feed and also made it possible to run extremely targeted ads. Despite not being an AI-driven service at the outset, Facebook has become a heavy user of AI.

Similarly, the InsurTech startup Lemonade didn't have the data to build sophisticated AI capabilities on day one. Over time, however, Lemonade has built AI tools to create quotes, process claims, and detect fraud. Today, its AI system handles the first notice of loss for 96% of claims and manages full claim resolution without any human involvement in a third of cases. These AI capabilities were built using the data generated over many years of operations.

2. Partner with a non-tech company that has a proprietary dataset

Entrepreneurs can partner with a company or organization that has a proprietary dataset but lacks in-house AI expertise. This approach is particularly useful in contexts where it would be very difficult to create a product that in turn generates the kind of data your AI application needs, such as medical data about patient tests and diagnoses. In this case, you could partner with a hospital or insurance company in order to obtain anonymized data.

A related point is that training data for your AI product can come from a potential customer. While this is harder in regulated industries like healthcare and finance, customers in other industries like manufacturing may be more open to it. All you might need to offer in return is exclusive access to the AI product for a few months or early access to future product features.

A pitfall of this approach is that potential partners may prefer working with established companies rather than smaller players who are less known and trusted (especially in a post-GDPR, post-Cambridge Analytica world). Business development will therefore be tricky, but this strategy is nonetheless feasible, especially when well-known tech companies are not already chasing after your desired partner.

Entrepreneurs who are part of a family business may already have access to a potentially large amount of data from their existing business. That's a great option too.

3. Crowdsource the (labeled) data you need

Depending on the kind of data needed, entrepreneurs can obtain data through crowdsourcing. When data is available but is not well labeled (e.g. images on the Internet), crowdsourcing can be a particularly well-suited method for obtaining this data, as labeling is a task that lends itself well to being completed quickly by a large number of individuals on crowdsourcing platforms. Platforms such as Amazon Mechanical Turk and Scale.ai are frequently used to help generate labeled training data.

For example, consider Google's use of CAPTCHAs. While they serve an important security purpose, Google simultaneously uses them as a crowdsourced image-labeling system. Every day, millions of users effectively serve on Google's data pre-processing team, validating machine learning algorithms for free.

Some products have workflows that allow customers to help label new data in the course of using the product. In fact, the entire subfield of active learning is focused on how to interactively query users to better label new data points. For example, consider a cybersecurity product that generates alerts about risks and a workflow in which an Ops engineer resolves those alerts, thereby generating new labeled data. Similarly, product recommendation services like Pandora use upvotes and downvotes to validate recommendation accuracy. In both cases, you can start with an MVP that continually improves as customers provide feedback.
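
To make the feedback loop concrete, here is a minimal uncertainty-sampling sketch; the model, the data, and the annotator stub are all invented for illustration:

    # Toy active-learning step: query a human for labels only on the examples
    # the current model is least confident about (uncertainty sampling).
    predict_prob(x) = 1 / (1 + exp(-x))           # stand-in for a trained classifier

    unlabeled = randn(100)                        # pool of unlabeled examples
    uncertainty(x) = -abs(predict_prob(x) - 0.5)  # probability near 0.5 = least certain

    # Send the 5 most uncertain examples to an annotator (simulated here by a
    # simple threshold rule); the returned labels extend the training set.
    queries = sort(unlabeled; by = uncertainty, rev = true)[1:5]
    labels = [x > 0 ? 1 : 0 for x in queries]
    println("queried: ", round.(queries; digits = 2), " -> labels: ", labels)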

4. Make use of public data

Before you conclude that the data you need is not available, look harder. There is more publicly available data than you might imagine, and data marketplaces are even emerging. While publicly available data (and therefore the resulting product) might be less defensible, you can build defensibility through other service or product innovations, such as creating an exceptional user experience or combining offline and digital data at scale as Zillow does (the company uses offline public municipal data at scale as part of its innovative online real estate application). One could also combine publicly available data with some proprietary data, which could be generated over time or obtained through partnerships, crowdsourcing, etc.

The Canadian company BlueDot uses a variety of data sources, including publicly available data, in order to detect outbreaks of emerging diseases before they are officially reported as well as predict where an outbreak will spread to next. BlueDot uses statements from official public health organizations, digital media, global airline ticketing data, livestock health reports, and population demographics, among other data sources. The company detected the COVID-19 outbreak on December 30th, 2019, nine days before the WHO reported on it.

5. Rethink the need for data

It is true that most of the practical AI in the business world is based on Machine Learning. And most of that ML is supervised ML (which requires large labeled training datasets). But many problems can be solved with other AI techniques that are not reliant on data, such as reinforcement learning or expert systems.

Reinforcement learning is an ML approach in which algorithms learn by testing various actions or strategies and observing the rewards from these actions. Essentially, reinforcement learning uses experimentation to compensate for a lack of labeled training data. The original iteration of Google's Go-playing software, AlphaGo, was trained on a large dataset, but the next iteration, AlphaZero, was based on reinforcement learning and had zero training data. Yet AlphaZero beat AlphaGo (which itself beat Lee Sedol, Go's world champion).

Reinforcement learning is widely used in online personalization. Online companies frequently test and evaluate multiple website designs, product descriptions, product images, and pricing. Reinforcement learning algorithms explore new design and marketing choices and rapidly learn how to personalize the user experience based on users' responses.
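
As a minimal sketch of how this works, the snippet below treats three hypothetical page designs as the arms of a multi-armed bandit, one of the simplest reinforcement learning setups, and learns which design converts best from simulated user responses (all names and numbers are invented):

    # Three candidate page designs (the bandit's "arms"); the true conversion
    # rates are unknown to the algorithm and are used only to simulate users.
    true_rates = [0.05, 0.08, 0.12]
    n_arms = length(true_rates)
    counts = zeros(Int, n_arms)   # how many times each design was shown
    values = zeros(n_arms)        # running estimate of each design's conversion rate
    explore_rate = 0.1            # fraction of traffic reserved for exploration

    for visitor in 1:10_000
        # Explore a random design with probability explore_rate; otherwise
        # exploit the design with the best estimated conversion rate.
        arm = rand() < explore_rate ? rand(1:n_arms) : argmax(values)
        reward = rand() < true_rates[arm] ? 1.0 : 0.0   # did this visitor convert?
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean
    end

    println("estimated conversion rates: ", round.(values; digits = 3))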

Another approach is to use expert systems: rule-based systems that codify the rules experts use routinely. While expert systems rarely beat well-trained ML systems at complex tasks such as medical diagnosis or image recognition, they can help break the chicken-and-egg problem and get you started. For example, the virtual healthcare company Curai used knowledge from expert systems to create clinical vignettes, and then used these vignettes as training data for ML models (alongside data from electronic health records and other sources).
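
A rule-based starting point can be only a few lines. The sketch below is a generic illustration, not Curai's actual system; the symptoms, rules, and labels are invented:

    # A domain expert writes the rules; no training data is required.
    rules = [
        (symptoms = Set(["fever", "cough", "shortness of breath"]),
         label = "possible respiratory infection"),
        (symptoms = Set(["headache", "light sensitivity"]),
         label = "possible migraine"),
    ]

    # Return the label of the first rule whose symptoms are all observed.
    function classify(observed)
        for rule in rules
            issubset(rule.symptoms, observed) && return rule.label
        end
        return "no rule matched; refer to a clinician"
    end

    println(classify(Set(["fever", "fatigue", "cough", "shortness of breath"])))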

To be clear, not every intelligence problem can be cast as a reinforcement learning problem or tackled through an expert systems approach. But these are worth considering when the lack of training data has halted the development of an interesting ML product.

Entrepreneurs are most likely to develop a consistent stream of proprietary data if they start by offering a service that has value without AI and that generates data, and then use this to train an AI system. However, this strategy does require time and may not be the best fit for all situations. Depending on the nature of the startup and the kind of data that is needed, it may work better to partner with a non-tech company that has a proprietary dataset, crowdsource (labeled) data, or make use of public data. Alternatively, entrepreneurs can rethink the need for data entirely and consider taking a reinforcement learning or expert systems approach.

Street Fighter V: What to Expect After the Winter Update | CBR – Comic Book Resources

As Street Fighter V powers up for its final season, here's what players can expect to see after the winter update.

It's been a long ride since Capcom's Street Fighter V was released in 2016. The game has overcome many challenges to get to this point, and it's become better for it. Online play has still been a struggle, as Capcom hasn't performed well on issues of netcode, but in terms of the other parts of the game, it's blown the initial, barebones release out of the water.

Players have debated many times what characters and content should be among the last group of DLC, but the developers' winter update reveals tons of new content and one big mystery.

Capcom's winter update shows early footage of a partially completed Street Fighter V Season 5 character, Rose, including a rough model and some gameplay. Many of her familiar moves are back or have been remixed in some way, such as her old Soul Satellite, and she has new ones, such as Soul Punish, to utilize too. Rose's story will look to expand upon her role as newcomer Menat's master, though her release date is still unknown. Rose's segment also gives viewers a behind-the-scenes look at how Capcom does motion capture for characters.

In the update video, Capcom also has news on Dan Hibiki, who will be released Feb. 22. Dan will work with a special set of V-Skills that suit his style and will change up how players approach the game. Both of his V-Skills are taunt cancels, which enable him to cancel regular attacks into other attacks or special moves. Dan also has a one-bar V-Trigger move, in which he performs the Haoh Gadoken, and another in which his fireballs and uppercuts are powered up.

One of the bigger changes coming to the game is the new mechanic known as V-Shift. Functioning similarly to dodges or rolls from other fighting games, this new mechanic allows Street Fighter V players to sacrifice some of their Trigger bar to escape from opponent pressure and quickly dash backwards. Doing so correctly will bathe the player character in blue light.

It will also allow them to perform the V-Shift Break. This works as a counter, as the character dashes forward and performs a forward-moving attack to knock down foes. This mechanic should really change up gameplay when introduced.

Meanwhile, the new stage, the Marina of Fortune, is a recreation of Rose's old stage from the Street Fighter Alpha/Zero games, which should bring back some nostalgia for longtime fans. The stage is set on a port near the bay, with ships in the background and fighters stationed on concrete near the docks.

While fans are still mostly in the dark about the characters of Oro, Akira and the mystery final character, there is a confirmed bonus character coming to Street Fighter V. This bonus character, Eleven, is a narrative predecessor to Street Fighter III's Twelve but has much different gameplay. Purchasing the Premium Pass for Season 5 will give players access to Eleven on Feb. 22, the same day Dan Hibiki is released. Eleven will serve as a randomizer character, transforming into a character on the roster that the player owns while also giving them a randomized V-Skill and V-Trigger to use.

Developed by Capcom, Street Fighter V is available on PlayStation 4 and PC.

Identifying COVID-19 Therapy Candidates With Machine Learning – Contagionlive.com

Study pinpoints the protein RIPK1 as a promising target for SARS-CoV-2 treatment.

Investigators from the Massachusetts Institute of Technology, in collaboration with Harvard University and ETH Zurich, have developed a machine learning-based approach that can identify therapies that are already on the market that have potential for repurposing to help fight the coronavirus disease 2019 (COVID-19). Results from the study were published in the journal Nature Communications.

As the COVID-19 pandemic continues to surge across the globe and investigators rush to find treatments, the information provided from the approach may have a significant impact.

The target population for the study is the elderly, as the virus impacts them more severely than younger populations. The approach accounts for gene expression changes in lung cells caused by COVID-19 as well as aging. The hope is that this would allow medical experts to find therapies for clinical testing faster.

"Earlier work by the Shivashankar lab showed that if you stimulate cells on a stiffer substrate with a cytokine, similar to what the virus does, they actually turn on different genes," Caroline Uhler, a computational biologist in MIT's Department of Electrical Engineering and Computer Science and the Institute for Data, Systems and Society, and an associate member of the Broad Institute of MIT and Harvard said. "So, that motivated this hypothesis. We need to look at aging together with SARS-CoV-2 -- what are the genes at the intersection of these two pathways?"

The investigators took three steps to identify the most promising candidates for repurposing. First, they generated a large list of possible candidates using machine learning. They then mapped the genes and proteins involved in the aging process and in SARS-CoV-2 infection, and employed network algorithms to pinpoint genes that cause cascading effects through the mapped network, which narrowed the list of therapies. The overlap between the two maps is where the team found the precise gene expression network of therapies that would target COVID-19.
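
To make "cascading effects" concrete, here is a toy ranking of genes by how much of a network they can perturb downstream. The edges below are invented, and the study's actual algorithm is far more sophisticated; this only illustrates the general idea (RIPK1 is the protein the study pinpointed):

    # Toy gene-interaction network (invented edges; not the study's graph).
    edges = Dict(
        "RIPK1" => ["TNF", "NFKB1"],
        "TNF"   => ["IL6"],
        "NFKB1" => ["IL6", "CXCL8"],
        "IL6"   => String[],
        "CXCL8" => String[],
    )

    # Graph traversal: all genes reachable downstream of a starting gene.
    function cascade(start)
        seen, frontier = Set{String}(), [start]
        while !isempty(frontier)
            g = pop!(frontier)
            for next in get(edges, g, String[])
                next in seen || (push!(seen, next); push!(frontier, next))
            end
        end
        return seen
    end

    # Rank genes by the size of the cascade they trigger.
    ranked = sort(collect(keys(edges)); by = g -> length(cascade(g)), rev = true)
    println(ranked[1], " triggers the largest cascade: ", cascade(ranked[1]))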

The team plans to share the findings with pharmaceutical companies to aid in finding more therapies that can be repurposed for COVID-19. However, they emphasize that any of the therapies identified must undergo clinical testing before they can be approved for use in elderly populations.

"Making new drugs takes forever," Uhler said. "Really, the only expedient option is to repurpose existing drugs."

New Machine Learning Theory Raises Questions About the Very Nature of Science – SciTechDaily

A novel computer algorithm, or set of rules, that accurately predicts the orbits of planets in the solar system could be adapted to better predict and control the behavior of the plasma that fuels fusion facilities designed to harvest on Earth the fusion energy that powers the sun and stars.

The algorithm, devised by a scientist at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL), applies machine learning, the form of artificial intelligence (AI) that learns from experience, to develop the predictions. "Usually in physics, you make observations, create a theory based on those observations, and then use that theory to predict new observations," said PPPL physicist Hong Qin, author of a paper detailing the concept in Scientific Reports. "What I'm doing is replacing this process with a type of black box that can produce accurate predictions without using a traditional theory or law."

Qin (pronounced "Chin") created a computer program into which he fed data from past observations of the orbits of Mercury, Venus, Earth, Mars, Jupiter, and the dwarf planet Ceres. This program, along with an additional program known as a "serving algorithm," then made accurate predictions of the orbits of other planets in the solar system without using Newton's laws of motion and gravitation. "Essentially, I bypassed all the fundamental ingredients of physics. I go directly from data to data," Qin said. "There is no law of physics in the middle."
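
Qin's actual method learns a structure-preserving discrete field theory, but the data-to-data workflow can be illustrated with a much simpler toy: fit a one-step state-transition map to an observed orbit and roll it forward, with no force law anywhere in the code. A minimal sketch using a synthetic circular orbit and plain least squares:

    # Toy "data to data" demo: learn a one-step predictor from observed states.
    # (A plain least-squares fit, NOT Qin's method; the workflow is the point.)
    θ = range(0, 4π, length = 400)
    states = [cos.(θ) sin.(θ)]      # observed (x, y) positions on a circular orbit

    X = states[1:end-1, :]          # state at time t
    Y = states[2:end, :]            # state at time t + 1
    A = X \ Y                       # least-squares one-step transition map

    # Roll the learned map forward from the last observation. A circular orbit
    # happens to make this linear map exact; real dynamics need richer models.
    s = reshape(states[end, :], 1, 2)
    for step in 1:5
        global s = s * A
        println("predicted state $step: ", round.(vec(s); digits = 3))
    end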

PPPL physicist Hong Qin in front of images of planetary orbits and computer code. Credit: Elle Starkman / PPPL Office of Communications

The program does not happen upon accurate predictions by accident. "Hong taught the program the underlying principle used by nature to determine the dynamics of any physical system," said Joshua Burby, a physicist at the DOE's Los Alamos National Laboratory who earned his Ph.D. at Princeton under Qin's mentorship. "The payoff is that the network learns the laws of planetary motion after witnessing very few training examples. In other words, his code really learns the laws of physics."

Machine learning is what makes computer programs like Google Translate possible. Google Translate sifts through a vast amount of information to determine how frequently one word in one language has been translated into a word in the other language. In this way, the program can make an accurate translation without actually learning either language.

The process also appears in philosophical thought experiments like John Searle's Chinese Room. In that scenario, a person who did not know Chinese could nevertheless translate a Chinese sentence into English or any other language by using a set of instructions, or rules, that would substitute for understanding. The thought experiment raises questions about what, at root, it means to understand anything at all, and whether understanding implies that something else is happening in the mind besides following rules.

Qin was inspired in part by Oxford philosopher Nick Bostrom's thought experiment that the universe is a computer simulation. If that were true, then fundamental physical laws should reveal that the universe consists of individual chunks of space-time, like pixels in a video game. "If we live in a simulation, our world has to be discrete," Qin said. The black-box technique Qin devised does not require that physicists believe the simulation conjecture literally, though it builds on this idea to create a program that makes accurate physical predictions.

The resulting pixelated view of the world, akin to what is portrayed in the movie The Matrix, is known as a discrete field theory, which views the universe as composed of individual bits and differs from the theories that people normally create. While scientists typically devise overarching concepts of how the physical world behaves, computers just assemble a collection of data points.

Qin and Eric Palmerduca, a graduate student in the Princeton University Program in Plasma Physics, are now developing ways to use discrete field theories to predict the behavior of particles of plasma in fusion experiments conducted by scientists around the world. The most widely used fusion facilities are doughnut-shaped tokamaks that confine the plasma in powerful magnetic fields.

Fusion, the power that drives the sun and stars, combines light elements in the form of plasma, the hot, charged state of matter composed of free electrons and atomic nuclei that represents 99% of the visible universe, to generate massive amounts of energy. Scientists are seeking to replicate fusion on Earth for a virtually inexhaustible supply of power to generate electricity.

"In a magnetic fusion device, the dynamics of plasmas are complex and multi-scale, and the effective governing laws or computational models for a particular physical process that we are interested in are not always clear," Qin said. "In these scenarios, we can apply the machine learning technique that I developed to create a discrete field theory and then apply this discrete field theory to understand and predict new experimental observations."

This process opens up questions about the nature of science itself. Don't scientists want to develop physics theories that explain the world, instead of simply amassing data? Aren't theories fundamental to physics and necessary to explain and understand phenomena?

"I would argue that the ultimate goal of any scientist is prediction," Qin said. "You might not necessarily need a law. For example, if I can perfectly predict a planetary orbit, I don't need to know Newton's laws of gravitation and motion. You could argue that by doing so you would understand less than if you knew Newton's laws. In a sense, that is correct. But from a practical point of view, making accurate predictions is not doing anything less."

Machine learning could also open up possibilities for more research. "It significantly broadens the scope of problems that you can tackle because all you need to get going is data," Palmerduca said.

The technique could also lead to the development of a traditional physical theory. "While in some sense this method precludes the need of such a theory, it can also be viewed as a path toward one," Palmerduca said. "When you're trying to deduce a theory, you'd like to have as much data at your disposal as possible. If you're given some data, you can use machine learning to fill in gaps in that data or otherwise expand the data set."

Reference: "Machine learning and serving of discrete field theories" by Hong Qin, 9 November 2020, Scientific Reports. DOI: 10.1038/s41598-020-76301-0
