Archive for the ‘Machine Learning’ Category

Neural networks facilitate optimization in the search for new materials – MIT News

When searching through theoretical lists of possible new materials for particular applications, such as batteries or other energy-related devices, there are often millions of potential materials that could be considered, and multiple criteria that need to be met and optimized at once. Now, researchers at MIT have found a way to dramatically streamline the discovery process, using a machine learning system.

As a demonstration, the team arrived at a set of the eight most promising materials, out of nearly 3 million candidates, for an energy storage system called a flow battery. This culling process would have taken 50 years by conventional analytical methods, they say, but they accomplished it in five weeks.

The findings are reported in the journal ACS Central Science, in a paper by MIT professor of chemical engineering Heather Kulik, Jon Paul Janet PhD '19, Sahasrajit Ramesh, and graduate student Chenru Duan.

The study looked at a set of materials called transition metal complexes. These can exist in a vast number of different forms, and Kulik says they "are really fascinating, functional materials that are unlike a lot of other material phases. The only way to understand why they work the way they do is to study them using quantum mechanics."

To predict the properties of any one of millions of these materials would require either time-consuming and resource-intensive spectroscopy and other lab work, or time-consuming, highly complex physics-based computer modeling for each possible candidate material or combination of materials. Each such study could consume hours to days of work.

Instead, Kulik and her team took a small number of different possible materials and used them to teach an advanced machine-learning neural network about the relationship between the materials' chemical compositions and their physical properties. That knowledge was then applied to generate suggestions for the next generation of possible materials to be used for the next round of training of the neural network. Through four successive iterations of this process, the neural network improved significantly each time, until reaching a point where it was clear that further iterations would not yield any further improvements.
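The loop described here, in which a model is trained, used to nominate promising candidates, and then retrained on the results, is a form of active learning. A minimal sketch of the pattern (the pool size, batch size, and toy scoring functions below are illustrative stand-ins, not details from the paper):

```python
import random

def train(labeled_data):
    """Stand-in for fitting a neural network: here we just average the labels."""
    if not labeled_data:
        return 0.0
    return sum(score for _, score in labeled_data) / len(labeled_data)

def predict(baseline, candidate):
    """Toy surrogate model: predicted score = baseline plus a cheap feature."""
    return baseline + (candidate % 10) / 10.0

def expensive_evaluation(candidate):
    """Stand-in for the slow physics-based simulation of one material."""
    return (candidate % 10) / 10.0 + random.uniform(-0.05, 0.05)

random.seed(0)
candidate_pool = list(range(1000))   # stand-in for the ~3M complexes
labeled = []                          # (candidate, measured score)

for generation in range(4):           # four iterations, as in the article
    baseline = train(labeled)
    # Rank the unlabeled pool by the surrogate model's prediction...
    ranked = sorted(candidate_pool, key=lambda c: predict(baseline, c), reverse=True)
    # ...and spend the expensive simulation budget only on the top picks.
    for c in ranked[:25]:
        labeled.append((c, expensive_evaluation(c)))
        candidate_pool.remove(c)

print(f"labeled {len(labeled)} of 1000 candidates")
```

The point of the loop is that only 100 of the 1,000 candidates ever reach the expensive evaluation step, which is how a few hundred samples can stand in for an exhaustive search.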

This iterative optimization system greatly streamlined the process of arriving at potential solutions that satisfied the two conflicting criteria being sought. The set of best available solutions in such situations, where improving one factor tends to worsen the other, is known as a Pareto front: a graph of points such that any further improvement of one factor would make the other worse. In other words, the graph represents the best possible compromise points, depending on the relative importance assigned to each factor.
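Computing a Pareto front from a set of scored candidates is simple to express directly. A small sketch using hypothetical (solubility, energy density) pairs, where higher is better for both:

```python
def pareto_front(points):
    """Return the points not dominated by any other point.
    A point dominates another if it is >= in both objectives
    and they are not the same point."""
    front = []
    for p in points:
        dominated = any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (solubility, energy density) pairs -- made-up values.
candidates = [(1.0, 9.0), (3.0, 7.0), (5.0, 5.0), (7.0, 3.0), (2.0, 2.0), (4.0, 4.0)]
print(pareto_front(candidates))
# (2.0, 2.0) and (4.0, 4.0) are dominated; the rest each trade one objective
# for the other, so they form the compromise curve described above.
```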

Training typical neural networks requires very large data sets, ranging from thousands to millions of examples, but Kulik and her team were able to use this iterative process, based on the Pareto front model, to streamline the process and provide reliable results using only a few hundred samples.

In the case of screening for the flow battery materials, the desired characteristics were in conflict, as is often the case: The optimum material would have high solubility and a high energy density (the ability to store energy for a given weight). But increasing solubility tends to decrease the energy density, and vice versa.

Not only was the neural network able to rapidly come up with promising candidates, it was also able to assign levels of confidence to its different predictions through each iteration, which helped allow the refinement of the sample selection at each step. "We developed a better than best-in-class uncertainty quantification technique for really knowing when these models were going to fail," Kulik says.
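The paper's specific uncertainty quantification technique is not described here, but one common way to attach confidence to neural-network predictions is to train an ensemble and treat the models' disagreement as uncertainty. A toy sketch of that general idea (not the Kulik group's actual method):

```python
import random

def ensemble_predict(models, x):
    """Predict with every model in the ensemble; report the mean prediction
    and the variance across models as an uncertainty estimate."""
    preds = [m(x) for m in models]
    mean = sum(preds) / len(preds)
    variance = sum((p - mean) ** 2 for p in preds) / len(preds)
    return mean, variance

random.seed(1)
# Toy "ensemble": linear models whose fitted weights differ slightly,
# as independently trained networks would.
models = [lambda x, w=random.gauss(2.0, 0.1): w * x for _ in range(10)]

mean, var = ensemble_predict(models, 3.0)
# High variance flags inputs where the model is likely to fail, so those
# candidates can be routed to the expensive simulation instead.
print(f"prediction {mean:.2f} +/- {var ** 0.5:.2f}")
```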

The challenge they chose for the proof-of-concept trial was materials for use in redox flow batteries, a type of battery that holds promise for large, grid-scale batteries that could play a significant role in enabling clean, renewable energy. Transition metal complexes are the preferred category of materials for such batteries, Kulik says, but there are too many possibilities to evaluate by conventional means. They started out with a list of 3 million such complexes before ultimately whittling that down to the eight good candidates, along with a set of design rules that should enable experimentalists to explore the potential of these candidates and their variations.

"Through that process, the neural net both gets increasingly smarter about the [design] space, but also increasingly pessimistic that anything beyond what we've already characterized can further improve on what we already know," she says.

Apart from the specific transition metal complexes suggested for further investigation using this system, she says, the method itself could have much broader applications. "We do view it as the framework that can be applied to any materials design challenge where you're really trying to address multiple objectives at once. You know, all of the most interesting materials design challenges are ones where you have one thing you're trying to improve, but improving that worsens another. And for us, the redox flow battery redox couple was just a good demonstration of where we think we can go with this machine learning and accelerated materials discovery."

For example, optimizing catalysts for various chemical and industrial processes is another kind of such complex materials search, Kulik says. Presently used catalysts often involve rare and expensive elements, so finding similarly effective compounds based on abundant and inexpensive materials could be a significant advantage.

"This paper represents, I believe, the first application of multidimensional directed improvement in the chemical sciences," she says. But the long-term significance of the work is in the methodology itself, because of things that might not be possible at all otherwise. "You start to realize that even with parallel computations, these are cases where we wouldn't have come up with a design principle in any other way. And these leads that are coming out of our work, these are not necessarily at all ideas that were already known from the literature or that an expert would have been able to point you to."

"This is a beautiful combination of concepts in statistics, applied math, and physical science that is going to be extremely useful in engineering applications," says George Schatz, a professor of chemistry and of chemical and biological engineering at Northwestern University, who was not associated with this work. He says this research addresses how to do machine learning when there are multiple objectives. Kulik's approach uses leading-edge methods to train an artificial neural network that is used to predict which combination of transition metal ions and organic ligands will be best for redox flow battery electrolytes.

Schatz says this method can be used in many different contexts, so it has the potential to transform machine learning, which is a major activity around the world.

The work was supported by the Office of Naval Research, the Defense Advanced Research Projects Agency (DARPA), the U.S. Department of Energy, the Burroughs Wellcome Fund, and the AAAS Marion Milligan Mason Award.


Deep Learning: What You Need To Know – Forbes


During the past decade, deep learning has seen groundbreaking developments in the field of AI (Artificial Intelligence). But what is this technology? And why is it so important?

Well, let's first get a definition of deep learning. Here's how Kalyan Kumar, Corporate Vice President & Chief Technology Officer of IT Services at HCL Technologies, describes it: "Have you ever wondered how our brain can recognize the face of a friend whom you met years ago, or recognize the voice of your mother among so many other voices in a crowded marketplace, or how our brain can learn, plan and execute complex day-to-day activities? The human brain has around 100 billion cells called neurons. These build massively parallel and distributed networks, through which we learn and carry out complex activities. Inspired by these biological neural networks, scientists started building artificial neural networks so that computers could eventually learn and exhibit intelligence like humans."

Think of it this way: you first start with a huge amount of unstructured data, say videos. Then you use a sophisticated model that processes this information and tries to determine underlying patterns, which are often not detectable by people.

"During training, you define the number of neurons and layers your neural network will be comprised of and expose it to labeled training data," said Brian Cha, a Product Manager and Deep Learning evangelist at FLIR Systems. "With this data, the neural network learns on its own what is good or bad. For example, if you want the neural network to grade fruits, you would show it images of fruits labeled Grade A, Grade B, Grade C, and so on. The neural network uses this training data to extract and assign weights to features that are unique to fruits labeled good, such as ideal size, shape, color, consistency of color and so on. You don't need to manually define these characteristics or even program what is too big or too small; the neural network trains itself using the training data. The process of using a neural network to evaluate new images and make decisions is called inference. When you present the trained neural network with a new image, it will provide an inference, such as Grade A with 95% confidence."
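The last step Cha describes, turning raw network outputs into a label with a confidence like "Grade A with 95% confidence", is typically done with a softmax over the network's output scores. A minimal sketch (the logit values below are made up for illustration):

```python
import math

def softmax(logits):
    """Convert raw network outputs into a probability distribution.
    Subtracting the max first keeps the exponentials numerically stable."""
    exps = [math.exp(v - max(logits)) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def infer(logits, labels):
    """Return the top label and its probability -- the 'confidence'."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

# Hypothetical raw scores from a trained fruit-grading network for one image.
labels = ["Grade A", "Grade B", "Grade C"]
grade, confidence = infer([4.2, 1.1, 0.3], labels)
print(f"{grade} with {confidence:.0%} confidence")
```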

What about the algorithms? According to Bob Friday, CTO of Mist Systems, a Juniper Networks company: "There are two kinds of popular neural network models for different use cases: the Convolutional Neural Network (CNN) model is used in image-related applications, such as autonomous driving, robots and image search. Meanwhile, the Recurrent Neural Network (RNN) model is used in most Natural Language Processing-based (NLP) text or voice applications, such as chatbots, virtual home and office assistants and simultaneous interpreters, and in networking for anomaly detection."
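The structural difference between the two model families can be shown with their core building blocks: a convolution slides one shared kernel across the input, while a recurrent cell carries a hidden state from step to step. A toy sketch of each (the weights are arbitrary illustrative values):

```python
def conv1d(signal, kernel):
    """CNN building block: the same small kernel slides over the input,
    so the model reuses weights at every position (good for images/signals)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def rnn(sequence, w_in=0.5, w_rec=0.9):
    """RNN building block: a hidden state is carried step to step,
    so earlier inputs influence later outputs (good for text/voice)."""
    h = 0.0
    for x in sequence:
        h = w_in * x + w_rec * h   # new state depends on input AND past state
    return h

print(conv1d([1, 2, 3, 4], [1, -1]))   # a simple "edge detector" kernel
print(rnn([1.0, 0.0, 0.0]))            # the early input still echoes in h
```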

Of course, deep learning requires lots of sophisticated tools. But the good news is that there are many available, and some, like TensorFlow, PyTorch and Keras, are even free.

"There are also cloud-based server computer services," said Ali Osman Örs, Director of AI Strategy and Strategic Partnerships for ADAS at NXP Semiconductors. "These are referred to as Machine Learning as a Service (MLaaS) solutions. The main providers include Amazon AWS, Microsoft Azure, and Google Cloud."

Because of the enormous data loads and complex algorithms, there is usually a need for sophisticated hardware infrastructure. Keep in mind that it can sometimes take days to train a model.

"The unpredictable process of training neural networks requires rapid on-demand scaling of virtual machine pools," said Brent Schroeder, Chief Technology Officer at SUSE. "Container-based deep learning workloads managed by Kubernetes can easily be deployed to different infrastructure depending upon the specific needs. An initial model can be developed on a small local cluster, or even an individual workstation with a Jupyter Notebook. But then as training needs to scale, the workload can be deployed to large, scalable cloud resources for the duration of the training. This makes Kubernetes clusters a flexible, cost-effective option for training different types of deep learning workloads."

Deep learning has been shown to be quite efficient and accurate. "Probably the biggest advantage of deep learning over most other machine learning approaches is that the user does not need to worry about trimming down the number of features used," said Noah Giansiracusa, an Assistant Professor of Mathematical Sciences at Bentley University. "With deep learning, since the neurons are being trained to perform conceptual tasks, such as finding edges in a photo or facial features within a face, the neural network is in essence figuring out on its own which features in the data itself should be used."

Yet there are some notable drawbacks to deep learning. One is cost. "Deep learning networks may require hundreds of thousands or millions of hand-labeled examples," said Evan Tann, CTO and co-founder of Thankful. "It is extremely expensive to train in fast timeframes, as serious players will need commercial-grade GPUs from Nvidia that easily exceed $10k each."

Deep learning is also essentially a black box. This means it can be nearly impossible to understand how the model really works.

"This can be particularly problematic in applications that require such documentation, like FDA approval of drugs and medical devices," said Dr. Ingo Mierswa, Founder of RapidMiner.

And yes, there are some ongoing complexities with deep learning models, which can create bad outcomes. "Say a neural network is used to identify cats from images," said Yuheng Chen, COO of rct studio. "It works perfectly, but when we want it to identify cats and dogs at the same time, its performance collapses."

But then again, there continues to be rapid progress, as companies continue to invest substantial amounts into deep learning. For the most part, things are still very much in the nascent stages.

"The power of deep learning is what allows seamless speech recognition, image recognition, and automation and personalization across every possible industry today, so it's safe to say that you are already experiencing the benefits of deep learning," said Sajid Sadi, VP of Research at Samsung and Head of the Think Tank Team.

Tom (@ttaulli) is the author of Artificial Intelligence Basics: A Non-Technical Introduction and The Robotic Process Automation Handbook: A Guide to Implementing RPA Systems.


What are the top AI platforms? – Gigabit Magazine – Technology News, Magazine and Website

Business Overview

Microsoft AI is a platform used to develop AI solutions in conversational AI, machine learning, data sciences, robotics, IoT, and more.

Microsoft AI prides itself on driving innovation through protecting wildlife, better brewing, feeding the world, and preserving history.

Its Cognitive Services offering is described as "a comprehensive family of AI services and cognitive APIs to help you build intelligent apps."

Executives

Tom Bernard Krake is the Azure Cloud Executive at Microsoft, responsible for leveraging and evaluating the Azure platform. Tom is joined by a team of experienced executives to optimise the Azure platform and oversee the many cognitive services that it provides.

Notable customers

Uber uses Cognitive Services to boost its security through facial recognition to ensure that the driver using the app matches the user that is on file.

KPMG helps financial institutions save millions in compliance costs through the use of Microsoft's Cognitive Services, transcribing and logging thousands of hours of calls and reducing compliance costs by as much as 80 per cent.

Jet.com uses Cognitive Services to provide answers to its customers by infusing its customer chatbot with the intelligence to communicate using natural language.

The services:

Decision - Make smarter decisions faster through anomaly detectors, content moderators and personalizers.

Language - Extract meaning from unstructured text through the immersive reader, language understanding, Q&A maker, text analytics and translator text.

Speech - Integrate speech processing into apps and services through speech-to-text, text-to-speech, speech translation and speaker recognition.

Vision - Identify and analyse content within images, videos and digital ink through computer vision, custom vision, face, form recogniser, ink recogniser and video indexer.

Web Search - Find what you are looking for across the web through autosuggest, custom search, entity search, image search, news search, spell check, video search, visual search and web search.


With Launch of COVID-19 Data Hub, The White House Issues A ‘Call To Action’ For AI Researchers – Machine Learning Times – machine learning & data…

Originally published in TechCrunch, March 16, 2020

In a briefing on Monday, research leaders across tech, academia and the government joined the White House to announce an open data set full of scientific literature on the novel coronavirus. The COVID-19 Open Research Dataset, known as CORD-19, will also add relevant new research moving forward, compiling it into one centralized hub. The new data set is machine readable, making it easily parsed for machine learning purposes, a key advantage according to researchers involved in the ambitious project.

In a press conference, U.S. CTO Michael Kratsios called the new data set the most extensive collection of machine-readable coronavirus literature to date. Kratsios characterized the project as a call to action for the AI community, which can employ machine learning techniques to surface unique insights in the body of data. To provide guidance for researchers combing through the data, the National Academies of Sciences, Engineering, and Medicine collaborated with the World Health Organization to come up with high-priority questions about the coronavirus related to genetics, incubation, treatment, symptoms and prevention.

The partnership, announced today by the White House Office of Science and Technology Policy, brings together the Chan Zuckerberg Initiative, Microsoft Research, the Allen Institute for Artificial Intelligence, the National Institutes of Health's National Library of Medicine, Georgetown University's Center for Security and Emerging Technology, Cold Spring Harbor Laboratory and the Kaggle AI platform, owned by Google.

The database brings together nearly 30,000 scientific articles about the virus known as SARS-CoV-2, as well as related viruses in the broader coronavirus group. Around half of those articles make the full text available. Critically, the database will include pre-publication research from resources like medRxiv and bioRxiv, open access archives for pre-print health sciences and biology research.
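"Machine readable" in practice means the collection can be filtered and parsed with a few lines of code. A sketch of the idea using a tiny in-memory stand-in for the dataset's metadata file (the column names here are illustrative; consult the actual CORD-19 release for its schema):

```python
import csv
import io

# A tiny stand-in for the dataset's metadata file. Column names and rows
# are invented for illustration, not taken from the real CORD-19 release.
metadata_csv = """cord_uid,title,abstract,has_full_text
a1,Coronavirus incubation periods,We estimate incubation windows,True
a2,Bat coronavirus genetics,Genomic analysis of related strains,False
a3,SARS-CoV-2 treatment options,Review of candidate drugs,True
"""

def full_text_articles(csv_text):
    """Keep only articles whose full text is available -- roughly half
    of the collection, per the article above."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["title"] for row in reader if row["has_full_text"] == "True"]

print(full_text_articles(metadata_csv))
```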



AI Is Changing Work and Leaders Need to Adapt – Harvard Business Review

Executive Summary

Recent empirical research by the MIT-IBM Watson AI Lab provides new insight into how work is changing in the face of AI. Based on this research, the author provides a roadmap for leaders intent on adapting their workforces and reallocating capital while also delivering profitability, arguing that the key to unlocking the productivity potential while delivering on business objectives lies in three key strategies: rebalancing resources, investing in workforce reskilling and, on a larger scale, advancing new models of education and lifelong learning.

As AI is increasingly incorporated into our workplaces and daily lives, it is poised to fundamentally upend the way we live and work. Concern over this looming shift is widespread. A recent survey of 5,700 Harvard Business School alumni found that 52% of even this elite group believe the typical company will employ fewer workers three years from now.

The advent of AI poses new and unique challenges for business leaders. They must continue to deliver financial performance, while simultaneously making significant investments in hiring, workforce training, and new technologies that support productivity and growth. These seemingly competing business objectives can make for difficult, often agonizing, leadership decisions.

Against this backdrop, recent empirical research by our team at the MIT-IBM Watson AI Lab provides new insight into how work is changing in the face of AI. By examining these findings, we can create a roadmap for leaders intent on adapting their workforces and reallocating capital, while also delivering profitability.

The stakes are high. AI is an entirely new kind of technology, one that has the ability to anticipate future needs and provide recommendations to its users. For business leaders, that unique capability has the potential to increase employee productivity by taking on administrative tasks, providing better pricing recommendations to sellers, and streamlining recruitment, to name a few examples.

For business leaders navigating the AI workforce transition, the key to unlocking the productivity potential while delivering on business objectives lies in three key strategies: rebalancing resources, investing in workforce reskilling and, on a larger scale, advancing new models of education and lifelong learning.

Our research report offers a window into how AI will change workplaces through the rebalancing and restructuring of occupations. Using AI and machine learning techniques, our MIT-IBM Watson AI Lab team analyzed 170 million online job posts between 2010 and 2017. The study's first implication: while occupations change slowly, over years and even decades, tasks become reorganized at a much faster pace.

Jobs are a collection of tasks. As workers take on jobs in various professions and industries, it is the tasks they perform that create value. With the advancement of technology, some existing tasks will be replaced by AI and machine learning. But our research shows that only 2.5% of jobs include a high proportion of tasks suitable for machine learning. These include positions like usher, lobby attendant, and ticket taker, where the main tasks involve verifying credentials and allowing only authorized people to enter a restricted space.

Most tasks will still be best performed by humans whether craft workers like plumbers, electricians and carpenters, or those who do design or analysis requiring industry knowledge. And new tasks will emerge that require workers to exercise new skills.

As this shift occurs, business leaders will need to reallocate capital accordingly. Broad adoption of AI may require additional research and development spending. Training and reskilling employees will very likely require temporarily removing workers from revenue-generating activities.

More broadly, salaries and other forms of employee compensation will need to reflect the shifting value of tasks all along the organization chart. Our research shows that as technology reduces the cost of some tasks because they can be done in part by AI, the value workers bring to the remaining tasks increases. Those tasks tend to require grounding in intellectual skill and insight, something AI isn't as good at as people.

In high-wage business and finance occupations, for example, compensation for tasks requiring industry knowledge increased by more than $6,000, on average, between 2010 and 2017. By contrast, average compensation for manufacturing and production tasks fell by more than $5,000 during that period. As AI continues to reshape the workplace, business leaders who are mindful of this shifting calculus will come out ahead.

Companies today are held accountable not only for delivering shareholder value, but for positively impacting stakeholders such as customers, suppliers, communities and employees. Moreover, investment in talent and other stakeholders is increasingly considered essential to delivering long-term financial results. These new expectations are reflected in the Business Roundtable's recently revised statement on corporate governance, which underscores corporations' obligation to support employees through training and education that help develop new skills for a rapidly changing world.

Millions of workers will need to be retrained or reskilled as a result of AI over the next three years, according to a recent IBM Institute for Business Value study. Technical training will certainly be a necessary component. As tasks requiring intellectual skill, insight and other uniquely human attributes rise in value, executives and managers will also need to focus on preparing workers for the future by fostering and growing people skills such as judgement, creativity and the ability to communicate effectively. Through such efforts, leaders can help their employees make the shift to partnering with intelligent machines as tasks transform and change in value.

As AI continues to scale within businesses and across industries, it is incumbent upon innovators and business leaders to understand not only the business process implications, but also the societal impact. Beyond the need for investment in reskilling within organizations today, executives should work alongside policymakers and other public and private stakeholders to provide support for education and job training, encouraging investment in training and reskilling programs for all workers.

Our research shows that technology can disproportionately impact the demand and earning potential for mid-wage workers, causing a squeeze on the middle class. For every five tasks that shifted out of mid-wage jobs, we found, four tasks moved to low-wage jobs and one moved to a high-wage job. As a result, wages are rising faster in the low- and high-wage tiers than in the mid-wage tier.

New models of education and pathways to continuous learning can help address the growing skills gap, providing members of the middle class, as well as students and a broad array of mid-career professionals, with opportunities to build in-demand skills. Investment in all forms of education is key: community college, online learning, apprenticeships, or programs like P-TECH, a public-private partnership designed to prepare high school students for new collar technical jobs like cloud computing and cybersecurity.

Whether it is workers who are asked to transform their skills and ways of working, or leaders who must rethink everything from resource allocation to workforce training, fundamental economic shifts are never easy. But if AI is to fulfill its promise of improving our work lives and raising living standards, senior leaders must be ready to embrace the challenges ahead.
