Archive for the ‘Machine Learning’ Category

Machine Learning Will be one of the Best Ways to Identify Habitable Exoplanets – Universe Today

The field of extrasolar planet studies is undergoing a seismic shift. To date, 4,940 exoplanets have been confirmed in 3,711 planetary systems, with another 8,709 candidates awaiting confirmation. With so many planets available for study and improvements in telescope sensitivity and data analysis, the focus is transitioning from discovery to characterization. Instead of simply looking for more planets, astrobiologists will examine potentially habitable worlds for biosignatures.

This refers to the chemical signatures associated with life and biological processes, one of the most important of which is water. As the only known solvent without which life (as we know it) cannot exist, water is considered the divining rod for finding life. In a recent study, astrophysicists Dang Pham and Lisa Kaltenegger explain how future surveys (when combined with machine learning) could discern the presence of water, snow, and clouds on distant exoplanets.

Dang Pham is a graduate student with the David A. Dunlap Department of Astronomy & Astrophysics at the University of Toronto, where he specializes in planetary dynamics research. Lisa Kaltenegger is an Associate Professor in Astronomy at Cornell University, the Director of the Carl Sagan Institute, and a world-leading expert in modeling potentially habitable worlds and characterizing their atmospheres.

Water is something that all life on Earth depends on, hence its importance for exoplanet and astrobiological surveys. As Lisa Kaltenegger told Universe Today via email, this importance is reflected in NASA's slogan, "just follow the water," which also inspired the title of their paper:

"Liquid water on a planet's surface is one of the smoking guns for potential life. I say 'potential' here because we don't know what else we need to get life started. But liquid water is a great start. So we used NASA's slogan of 'just follow the water' and asked: how can we find water on the surface of rocky exoplanets in the Habitable Zone? Doing spectroscopy is time-intensive, thus we are searching for a faster way to initially identify promising planets, those with liquid water on them."

Currently, astronomers have been limited to looking for Lyman-alpha line absorption, which indicates the presence of hydrogen gas in an exoplanet's atmosphere. This is a byproduct of atmospheric water vapor that has been exposed to solar ultraviolet radiation, causing it to become chemically dissociated into hydrogen and molecular oxygen (O2), the former of which is lost to space while the latter is retained.

This is about to change, thanks to next-generation telescopes like the James Webb (JWST) and Nancy Grace Roman Space Telescopes (RST), as well as next-next-generation observatories like the Origins Space Telescope, the Habitable Exoplanet Observatory (HabEx), and the Large UV/Optical/IR Surveyor (LUVOIR). There are also ground-based telescopes like the Extremely Large Telescope (ELT), the Giant Magellan Telescope (GMT), and the Thirty Meter Telescope (TMT).

Thanks to their large primary mirrors and advanced suites of spectrographs, coronagraphs, and adaptive optics, these instruments will be able to conduct Direct Imaging studies of exoplanets. This consists of studying light reflected directly from an exoplanet's atmosphere or surface to obtain spectra, allowing astronomers to see what chemical elements are present. But as they indicate in their paper, this is a time-intensive process.

Astronomers start by observing thousands of stars for periodic dips in brightness, then analyzing the light curves for signs of chemical signatures. Currently, exoplanet researchers and astrobiologists rely on amateur astronomers and machine algorithms to sort through the volumes of data their telescopes obtain. Looking ahead, Pham and Kaltenegger show how more advanced machine learning will be crucial.
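
As a toy illustration of that discovery step, the sketch below phase-folds a light curve at a trial period and flags a statistically significant dip. All numbers are invented, and real pipelines use far more careful methods (for example, box-least-squares fitting):

```python
import numpy as np

def fold_and_check(times, flux, period, n_bins=100, threshold=5.0):
    """Phase-fold a light curve at a trial period and report whether the
    deepest phase bin dips well below the scatter of the binned curve."""
    phase = (times % period) / period
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.digitize(phase, edges) - 1          # bin index per sample
    binned = np.array([flux[idx == i].mean()
                       for i in range(n_bins) if np.any(idx == i)])
    depth = np.median(binned) - binned.min()
    return depth > threshold * binned.std()

# Invented light curve: a flat star with a 1% dip every 3.2 days.
t = np.linspace(0, 64, 20000)
f = 1.0 + np.random.default_rng(0).normal(0, 0.001, t.size)
f[(t % 3.2) < 0.1] -= 0.01
print(fold_and_check(t, f, period=3.2))          # True: dip recovered
```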

As they indicate, ML techniques will allow astronomers to conduct the initial characterization of exoplanets more rapidly, letting them prioritize targets for follow-up observations. By "following the water," astronomers will be able to dedicate more of an observatory's valuable survey time to exoplanets that are more likely to provide significant returns.

"Next-generation telescopes will look for water vapor in the atmosphere of planets and water on the surface of planets," said Kaltenegger. "Of course, to find water on the surface of planets, you should look [for water in its] liquid, solid, and gaseous forms, as we did in our paper."

"Machine learning allows us to quickly identify optimal filters, as well as the trade-off in accuracy at various signal-to-noise ratios," added Pham. "In the first task, using [the open-source algorithm] XGBoost, we get a ranking of which filters are most helpful for the algorithm in its tasks of detecting water, snow, or clouds. In the second task, we can observe how much better the algorithm performs with less noise. With that, we can draw a line where getting more signal would not correspond to much better accuracy."
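
As a rough sketch of how such a filter ranking could be produced with XGBoost (the data below are random stand-ins, not the authors' simulated spectra):

```python
import numpy as np
import xgboost as xgb

# Stand-in data: one row of broadband "colors" per simulated spectrum,
# one binary label per row (surface water present or not).
rng = np.random.default_rng(0)
X = rng.normal(size=(53130, 10))                 # 10 candidate filters
y = (X[:, 2] + 0.1 * rng.normal(size=53130) > 0).astype(int)

model = xgb.XGBClassifier(n_estimators=200, max_depth=4)
model.fit(X, y)

# Feature importances rank the filters by how much each one helps the
# trees separate the classes; this is the basis for picking a small
# subset of optimal filters.
ranking = sorted(zip(range(10), model.feature_importances_),
                 key=lambda pair: pair[1], reverse=True)
print("most informative filters:", [f"filter_{i}" for i, _ in ranking[:5]])

# Repeating the fit at several injected noise levels would trace out the
# accuracy-versus-signal-to-noise trade-off Pham describes.
```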

To make sure their algorithm was up to the task, Pham and Kaltenegger did some considerable calibrating. This consisted of creating 53,130 spectral profiles of a cold Earth with various surface components, including snow, water, and water clouds. They then simulated the spectra for this water in terms of atmosphere and surface reflectivity and assigned color profiles. As Pham explained:

"The atmosphere was modeled using Exo-Prime2; Exo-Prime2 has been validated by comparison to Earth in various missions. The reflectivity of surfaces like snow and water is measured on Earth by the USGS. We then create colors from these spectra. We train XGBoost on these colors to perform three separate goals: detecting the existence of water, the existence of clouds, and the existence of snow."
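
A minimal sketch of that color step, assuming idealized top-hat filters (real filter response curves and the paper's actual bandpasses will differ):

```python
import numpy as np

def band_average(wavelengths_um, reflectance, center_um, width_um=0.2):
    """Average reflectance inside an idealized top-hat filter."""
    lo, hi = center_um - width_um / 2, center_um + width_um / 2
    mask = (wavelengths_um >= lo) & (wavelengths_um <= hi)
    return float(reflectance[mask].mean())

# Invented spectrum: 0.4-1.0 micrometers with a mild red slope.
wl = np.linspace(0.4, 1.0, 600)
refl = 0.25 + 0.1 * (wl - 0.4)

# Five 0.2-micrometer-wide visible bands, echoing the filter width the
# study found optimal; the centers here are illustrative only.
centers = [0.45, 0.55, 0.65, 0.75, 0.85]
colors = [band_average(wl, refl, c) for c in centers]
print(colors)   # the feature vector for one simulated planet
```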

This trained XGBoost showed that clouds and snow are easier to identify than water, which is expected since clouds and snow have a much higher albedo (greater reflectivity of sunlight) than water. They further identified five optimal filters that worked extremely well for the algorithm, all of which were 0.2 micrometers wide and in the visible light range. The final step was a mock probability assessment that evaluated their planet models for liquid water, snow, and clouds using the set of five optimal filters they had identified.

"Finally, we [performed] a brief Bayesian analysis using Markov Chain Monte Carlo (MCMC) to do the same task on the five optimal filters, as a non-machine-learning method to validate our finding," said Pham. "Our findings there are similar: water is harder to detect, but identifying water, snow, and clouds through photometry is feasible."
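
For readers unfamiliar with MCMC, a bare-bones Metropolis-Hastings loop over a single hypothetical parameter (a surface-water fraction, with an invented linear forward model) looks like this; the paper's actual analysis is far richer:

```python
import numpy as np

rng = np.random.default_rng(1)

def predict_colors(water_fraction):
    """Invented forward model: five broadband colors that dim linearly
    as the liquid-water fraction of the surface grows."""
    base = np.array([0.30, 0.28, 0.26, 0.24, 0.22])
    return base - 0.1 * water_fraction

def log_likelihood(x, observed, sigma):
    return -0.5 * np.sum((observed - predict_colors(x)) ** 2 / sigma ** 2)

# Simulated observation of a planet whose surface is 60% water.
observed = predict_colors(0.6) + rng.normal(0, 0.005, 5)

x = 0.5                                    # initial guess
logp = log_likelihood(x, observed, 0.005)
chain = []
for _ in range(20000):                     # Metropolis-Hastings updates
    proposal = np.clip(x + rng.normal(0, 0.05), 0.0, 1.0)
    logp_new = log_likelihood(proposal, observed, 0.005)
    if np.log(rng.uniform()) < logp_new - logp:    # accept or reject
        x, logp = proposal, logp_new
    chain.append(x)
print("posterior mean water fraction:",
      round(float(np.mean(chain[5000:])), 2))      # roughly 0.6
```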

Similarly, they were surprised to see how well the trained XGBoost could identify water on the surface of rocky planets based on color alone. According to Kaltenegger, this is what filters really are: a means for separating light into discrete bins. "Imagine a bin for all red light (the red filter), then a bin for all the green light, from light to dark green (the green filter)," she said.

Their proposed method does not identify water in exoplanet atmospheres but on an exoplanet's surface via photometry. In addition, it will not work with the Transit Method (aka Transit Photometry), which is currently the most widely used and effective means of exoplanet detection. This method consists of observing distant stars for periodic dips in luminosity attributed to exoplanets passing in front of the star (aka transiting) relative to the observer.

On occasion, astronomers can obtain spectra from an exoplanet's atmosphere as it makes a transit, a process known as transit spectroscopy. As the star's light passes through the exoplanet's atmosphere relative to the observer, astronomers can analyze it with spectrometers to determine what chemicals are there. Using its sensitive optics and suite of spectrometers, the JWST will rely on this method to characterize exoplanet atmospheres.

But as Pham and Kaltenegger indicate, their algorithm will only work with reflected light from the direct imaging of exoplanets. This is especially good news considering that spectroscopy obtained through Direct Imaging studies is likely to reveal more about exoplanets, not just the chemical composition of their atmospheres. According to Kaltenegger, this creates all kinds of opportunities for next-generation missions:

"This is opening up the opportunity for smaller space missions like the Nancy Roman telescope to help identify worlds that could host life. And for larger upcoming telescopes, as recommended by the decadal survey, it allows them to scan the rocky planets in the Habitable Zone for the most promising candidates, those with water on their surface, so we spend the time characterizing the most interesting ones and effectively search for life on planets that have great conditions for it to get started."

The paper that describes their findings was recently published in the Monthly Notices of the Royal Astronomical Society (MNRAS).

Further Reading: arXiv


Ames Lab, Texas A&M team develop AI tool for discovery and prediction of new rare-earth compounds – Green Car Congress

Researchers from Ames Laboratory and Texas A&M University have trained a machine-learning (ML) model to assess the stability of new rare-earth compounds. The framework they developed builds on current state-of-the-art methods for experimenting with compounds and understanding chemical instabilities. A paper on their work is published in Acta Materialia.

"Machine learning is really important here because when we are talking about new compositions, ordered materials are all very well known to everyone in the rare-earth community. However, when you add disorder to known materials, it's very different. The number of compositions becomes significantly larger, often thousands or millions, and you cannot investigate all the possible combinations using theory or experiments."

Ames Laboratory Scientist Prashant Singh, corresponding author

The approach is based on machine learning (ML), a form of artificial intelligence (AI), which is driven by computer algorithms that improve through data usage and experience. Researchers used the upgraded Ames Laboratory Rare Earth database (RIC 2.0) and high-throughput density-functional theory (DFT) to build the foundation for their ML model.

High-throughput screening is a computational scheme that allows researchers to test hundreds of models quickly. DFT is a quantum mechanical method used to investigate the thermodynamic and electronic properties of many-body systems. Based on this collection of information, the developed ML model uses regression learning to assess the phase stability of compounds.
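
As a hedged illustration of regression learning for phase stability (the descriptors, targets, and model choice below are invented stand-ins, not the team's actual RIC 2.0 pipeline):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Invented stand-in data: rows are composition descriptors of candidate
# rare-earth compounds; targets mimic DFT formation energies (eV/atom),
# where more negative values suggest a more stable phase.
rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 20))
y = -0.1 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(0, 0.02, 5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
reg = GradientBoostingRegressor().fit(X_tr, y_tr)
print("held-out R^2:", round(reg.score(X_te, y_te), 3))

# Screening step: keep only candidates predicted to be stable.
candidates = rng.normal(size=(1000, 20))
stable = candidates[reg.predict(candidates) < -0.05]
print(len(stable), "of 1000 candidates pass the stability screen")
```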

Singh explained that the material analysis is based on a discrete feedback loop in which the AI/ML model is updated using a new DFT database built from real-time structural and phase information obtained from experiments. This process ensures that information is carried from one step to the next and reduces the chance of making mistakes.
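
Schematically, such a feedback loop might look like the following sketch, in which every callable is a hypothetical placeholder:

```python
def discovery_loop(model, database, propose, run_experiment, run_dft,
                   n_rounds=5):
    """Schematic feedback loop. `model` is any regressor with fit();
    `propose`, `run_experiment`, and `run_dft` are hypothetical
    placeholders for candidate generation, lab characterization, and
    first-principles calculations respectively."""
    for _ in range(n_rounds):
        # 1. Retrain on every verified entry gathered so far.
        model.fit(database["X"], database["y"])
        # 2. Let the model nominate promising compositions.
        for compound in propose(model):
            # 3. Experiments supply real-time structural/phase information.
            phases = run_experiment(compound)
            # 4. DFT turns that into a new, trusted database entry,
            #    carrying information forward to the next round.
            features, stability = run_dft(compound, phases)
            database["X"].append(features)
            database["y"].append(stability)
    return model
```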

Yaroslav Mudryk, the project supervisor, said that the framework was designed to explore rare-earth compounds because of their technological importance, but its application is not limited to rare-earth research. The same approach can be used to train ML models to predict magnetic properties of compounds, control processes for transformative manufacturing, and optimize mechanical behaviors.

"It's not really meant to discover a particular compound. It was: how do we design a new approach or a new tool for discovery and prediction of rare-earth compounds? And that's what we did."

Yaroslav Mudryk

Mudryk emphasized that this work is just the beginning. The team is exploring the full potential of this method, but they are optimistic that there will be a wide range of applications for the framework in the future.

This work was supported by the Laboratory Directed Research and Development (LDRD) program at Ames Laboratory.

Resources

Prashant Singh, Tyler Del Rose, Guillermo Vazquez, Raymundo Arroyave, and Yaroslav Mudryk (2022) "Machine-learning enabled thermodynamic model for the design of new rare-earth compounds," Acta Materialia, Volume 229, 117759. doi: 10.1016/j.actamat.2022.117759


Machine Learning Chip Market: Latest Trends and Forecast Analysis Up to 2029 – The Sabre

This market report comprises the most recent market information, with which companies can gain an in-depth analysis of the industry and future trends. By applying market intelligence to this business report, industry experts assess strategic options, outline successful action plans, and support companies with critical bottom-line decisions. The competitive analysis in this report helps readers understand the strategies of key players in the market. The scope of the report extends from market scenarios to comparative pricing between major players, and to the costs and profits of the specified market regions.

Get the Sample of this Report with Detailed TOC and List of Figures @ https://www.databridgemarketresearch.com/request-a-sample/?dbmr=global-machine-learning-chip-market

The Machine Learning Chip Market is expected to reach USD 72.45 billion by 2027, growing at a rate of 40.60% over the forecast period of 2020 to 2027.

The introduction of quantum computing, rising applications of machine learning in various industries, and the adoption of artificial intelligence across the globe are some of the factors likely to enhance the growth of the machine learning chip market in the forecast period of 2020-2027. The growth of smart cities and smart homes, worldwide adoption of the internet of things, and ongoing technological advancement will create further opportunities for growth over the same period.

A lack of skilled workers, along with apprehension about artificial intelligence, acts as a restraint on the machine learning chip market over the same forecast period.

We provide a detailed analysis of key players operating in the Machine Learning Chip Market:

North America will dominate the machine learning chip market due to the presence of the majority of manufacturers there, while Europe is expected to grow over the forecast period of 2020-2027 due to the adoption of advanced technology.

Market Segments Covered:

By Chip Type

Technology

Industry Vertical

Machine Learning Chip Market Country Level Analysis

The machine learning chip market is analysed, with market size and volume information provided by country, chip type, technology, and industry vertical, as referenced above.

The countries covered in the machine learning chip market report are U.S., Canada and Mexico in North America, Brazil, Argentina and Rest of South America as part of South America, Germany, Italy, U.K., France, Spain, Netherlands, Belgium, Switzerland, Turkey, Russia, Rest of Europe in Europe, Japan, China, India, South Korea, Australia, Singapore, Malaysia, Thailand, Indonesia, Philippines, Rest of Asia-Pacific (APAC) in Asia-Pacific (APAC), Saudi Arabia, U.A.E, South Africa, Egypt, Israel, Rest of Middle East and Africa (MEA) as a part of Middle East and Africa (MEA).

To get Incredible Discounts on this Premium Report, Click Here @https://www.databridgemarketresearch.com/checkout/buy/enterprise/global-machine-learning-chip-market

Rapid Business Growth Factors

In addition, the market is growing at a fast pace, and the report identifies a couple of key factors behind that. The most important factor helping the market grow faster than usual is the tough competition.

Competitive Landscape and Machine Learning Chip Market Share Analysis

The machine learning chip market competitive landscape provides details by competitor. Details included are company overview, company financials, revenue generated, market potential, investment in research and development, new market initiatives, regional presence, company strengths and weaknesses, product launches, product width and breadth, and application dominance. The data points above relate only to each company's focus on the machine learning chip market.

Table of Content:

Part 01: Executive Summary

Part 02: Scope of the Report

Part 03: Research Methodology

Part 04: Machine Learning Chip Market Landscape

Part 05: Market Sizing

Based on geography, the global Machine Learning Chip market report covers data points for 28 countries across multiple geographies, namely those enumerated above.

Browse the TOC with selected illustrations and example pages of the Global Machine Learning Chip Market @ https://www.databridgemarketresearch.com/toc/?dbmr=global-machine-learning-chip-market

Key questions answered in this report

What factors influence the market shares of the Americas, APAC, and EMEA?

About Data Bridge Market Research:

Data Bridge Market Research presents itself as an unconventional and neoteric market research and consulting firm with an unparalleled level of resilience and integrated approaches. We are determined to unearth the best market opportunities and foster efficient information for your business to thrive in the market. Data Bridge endeavors to provide appropriate solutions to complex business challenges and initiates an effortless decision-making process.

Contact:

Data Bridge Market Research

US: +1 888 387 2818

UK: +44 208 089 1725

Hong Kong: +852 8192 7475

Corporatesales@databridgemarketresearch.com


Machine Learning Reimagines the Building Blocks of Computing – Quanta Magazine

Algorithms (the chunks of code that allow programs to sort, filter and combine data, among other things) are the standard tools of modern computing. Like tiny gears inside a watch, algorithms execute well-defined tasks within more complicated programs.

They're ubiquitous, and in part because of this, they've been painstakingly optimized over time. When a programmer needs to sort a list, for example, they'll reach for a standard sort algorithm that's been used for decades.

Now researchers are taking a fresh look at traditional algorithms, using the branch of artificial intelligence known as machine learning. Their approach, called algorithms with predictions, takes advantage of the insights machine learning tools can provide into the data that traditional algorithms handle. These tools have, in a real way, rejuvenated research into basic algorithms.

"Machine learning and traditional algorithms are two substantially different ways of computing, and algorithms with predictions is a way to bridge the two," said Piotr Indyk, a computer scientist at the Massachusetts Institute of Technology. "It's a way to combine these two quite different threads."

The recent explosion of interest in this approach began in 2018 with a paper by Tim Kraska, a computer scientist at MIT, and a team of Google researchers. In it, the authors suggested that machine learning could improve a well-studied traditional algorithm called a Bloom filter, which solves a straightforward but daunting problem.

Imagine you run your company's IT department and you need to check if your employees are going to websites that pose a security risk. Naively, you might think you'll need to check every site they visit against a blacklist of known sites. If the list is huge (as is likely the case for undesirable sites on the internet), the problem becomes unwieldy: you can't check every site against a huge list in the tiny amount of time before a webpage loads.

The Bloom filter provides a solution, allowing you to quickly and accurately check whether any particular site's address, or URL, is on the blacklist. It does this by essentially compressing the huge list into a smaller list that offers some specific guarantees.

Bloom filters never produce false negatives: if they say the site is bad, it's bad. However, they can produce false positives, so perhaps your employees won't be able to visit some sites they should have access to. That's because they trade some accuracy for an enormous amount of data compression, a trick called lossy compression. The more that Bloom filters compress the original data, the less accurate they are, but the more space they save.
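
To make those guarantees concrete, here is a minimal Bloom filter sketch (a simplified illustration, not any particular production implementation):

```python
import hashlib

class BloomFilter:
    """m-bit array plus k hash functions. Bits can collide, so lookups
    may return false positives, but a stored item is never missed."""
    def __init__(self, m=1 << 20, k=7):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

blacklist = BloomFilter()
blacklist.add("http://malware.example")
print("http://malware.example" in blacklist)   # True, guaranteed
print("https://benign.example" in blacklist)   # False, barring a collision
```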

To a simple Bloom filter, every website is equally suspicious until it's confirmed not to be on the list. But not all websites are created equal: Some are more likely than others to wind up on a blacklist, simply because of details like their domain or the words in their URL. People understand this intuitively, which is why you likely read URLs to make sure they're safe before you click on them.

Kraska's team developed an algorithm that can also apply this kind of logic. They called it a learned Bloom filter, and it combines a small Bloom filter with a recurrent neural network (RNN), a machine learning model that learns what malicious URLs look like after being exposed to hundreds of thousands of safe and unsafe websites.

When the learned Bloom filter checks a website, the RNN acts first and uses its training to determine if the site is on the blacklist. If the RNN says it's on the list, the learned Bloom filter rejects it. But if the RNN says the site isn't on the list, then the small Bloom filter gets a turn, accurately but unthinkingly searching its compressed websites.

By putting the Bloom filter at the end of the process and giving it the final say, the researchers made sure that learned Bloom filters can still guarantee no false negatives. But because the RNN pre-filters true positives using what it has learned, the small Bloom filter acts more as a backup, keeping its false positives to a minimum as well. A benign website that would have been blocked by a larger Bloom filter can now get past the more accurate learned Bloom filter. Effectively, Kraska and his team found a way to take advantage of two proven but traditionally separate ways of approaching the same problem to achieve faster, more accurate results.
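
A compact sketch of that arrangement, reusing the BloomFilter class above (the model.score method here is a hypothetical stand-in for the RNN's estimated probability that a URL is malicious):

```python
class LearnedBloomFilter:
    """Sketch of the construction described above: a trained classifier
    answers first, and a small backup Bloom filter holds exactly the
    blacklist entries the classifier misses, so false negatives remain
    impossible."""
    def __init__(self, model, blacklist_urls, threshold=0.5):
        self.model, self.threshold = model, threshold
        # The backup stores only the entries the model would miss,
        # so it can stay much smaller than a standalone filter.
        self.backup = BloomFilter(m=1 << 16)
        for url in blacklist_urls:
            if model.score(url) < threshold:
                self.backup.add(url)

    def __contains__(self, url):
        if self.model.score(url) >= self.threshold:
            return True                   # model flags it as malicious
        return url in self.backup         # backup preserves the guarantee
```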

Kraska's team showed that the new approach worked, but they didn't formalize why. That task fell to Michael Mitzenmacher, an expert on Bloom filters at Harvard University, who found Kraska's paper innovative and exciting, but also fundamentally unsatisfying. "They run experiments saying their algorithms work better. But what exactly does that mean?" he asked. "How do we know?"

In 2019, Mitzenmacher put forward a formal definition of a learned Bloom filter and analyzed its mathematical properties, providing a theory that explained exactly how it worked. And whereas Kraska and his team showed that it could work in one case, Mitzenmacher proved it could always work.

Mitzenmacher also improved the learned Bloom filter. He showed that adding another standard Bloom filter to the process, this time before the RNN, can pre-filter negative cases and make the classifier's job easier. He then proved it was an improvement using the theory he developed.
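
In the same sketch style, that "sandwiched" arrangement simply adds a cheap pre-filter in front of the learned filter above:

```python
class SandwichedLearnedBloomFilter:
    """An initial Bloom filter screens out most negatives cheaply before
    the learned model runs; the learned filter's own backup still rules
    out false negatives."""
    def __init__(self, model, blacklist_urls, threshold=0.5):
        self.initial = BloomFilter(m=1 << 18)
        for url in blacklist_urls:
            self.initial.add(url)
        self.learned = LearnedBloomFilter(model, blacklist_urls, threshold)

    def __contains__(self, url):
        if url not in self.initial:
            return False                  # definitely not blacklisted
        return url in self.learned
```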

The early days of algorithms with predictions have proceeded along this cyclical track: innovative ideas, like the learned Bloom filters, inspire rigorous mathematical results and understanding, which in turn lead to more new ideas. In the past few years, researchers have shown how to incorporate algorithms with predictions into scheduling algorithms, chip design and DNA-sequence searches.

In addition to performance gains, the field also advances an approach to computer science thats growing in popularity: making algorithms more efficient by designing them for typical uses.

Currently, computer scientists often design their algorithms to succeed under the most difficult scenario, one designed by an adversary trying to stump them. For example, imagine trying to check the safety of a website about computer viruses. The website may be benign, but it includes "computer virus" in the URL and page title. It's confusing enough to trip up even sophisticated algorithms.

Indyk calls this a "paranoid approach." In real life, he said, inputs are not generally generated by adversaries. Most of the websites employees visit, for example, aren't as tricky as our hypothetical virus page, so they'll be easier for an algorithm to classify. By ignoring the worst-case scenarios, researchers can design algorithms tailored to the situations they'll likely encounter. For example, while databases currently treat all data equally, algorithms with predictions could lead to databases that structure their data storage based on their contents and uses.

And this is still only the beginning, as programs that use machine learning to augment their algorithms typically only do so in a limited way. Like the learned Bloom filter, most of these new structures only incorporate a single machine learning element. Kraska imagines an entire system built up from several separate pieces, each of which relies on algorithms with predictions and whose interactions are regulated by prediction-enhanced components.

"Taking advantage of that will impact a lot of different areas," Kraska said.


Military researchers to apply artificial intelligence (AI) and machine learning to combat medical triage – Military & Aerospace Electronics

ARLINGTON, Va. U.S. military researchers are asking industry to develop artificial intelligence (AI) and machine learning technologies for difficult jobs like combat medical triage, which refers to sorting wounded warfighters according to their need for medical attention.

Officials of the U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Va., issued a broad agency announcement (HR001122S0031) this week for the In the Moment (ITM) project.

DARPA researchers are asking industry to develop algorithmic decision-makers that can help humans with decision-making in difficult domains like combat medical triage.

Difficult domains are those in which trusted decision-makers disagree; no right answer exists; and uncertainty, time pressure, resource limitations, and conflicting values create significant decision-making challenges. Other examples include first response and disaster relief.

Related: Top technology challenges this decade for the warfighter

The DARPA ITM project focuses on two areas: small unit triage in austere environments, and mass casualty triage. ITM seeks to develop techniques that enable building, evaluating, and fielding trusted algorithmic decision-makers for mission-critical operations where there is no right answer and, consequently, ground truth does not exist.

Researchers are looking for capabilities that:

-- quantify algorithmic decision-makers with key decision-making attributes of trusted humans;

-- incorporate key human decision-maker attributes into more human-aligned, trusted algorithms;

-- enable the evaluation of human-aligned algorithms in difficult domains where humans disagree and there is no right outcome; and

Difficult decisions occur when the decision-maker is confronted with challenges that include too many or too few options, too much or too little information, uncertainty about the consequences of decisions, and uncertainty about the value of foreseeable outcomes.

ITM seeks to develop AI and machine learning algorithms based on key human attributes as the basis for trust in algorithmic decision-makers, as well as a computational framework for key human attributes and an alignment score to match the algorithmic decision-maker to key human decision-makers.

Related: Simulation and mission rehearsal relies on state-of-the-art computing

ITM is interested in the notion of trust, or the willingness of a human to delegate difficult decision-making to AI computers. The project also will focus on human-off-the-loop, algorithmic decision-making in difficult domains to understand the limits of such a computational framework.

ITM is a 3.5-year, two-phase program that focuses on four technical areas: decision-maker characterization; human-aligned algorithms; evaluation; and policy and practice.

Decision-maker characterization seeks to develop technologies that identify and model key decision-making attributes of trusted humans to produce a quantitative decision-maker alignment score.
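
As a purely illustrative toy (the article does not specify how DARPA will compute this score), one could imagine an alignment score as simple as the rate of agreement with a trusted human across scenarios:

```python
import numpy as np

def alignment_score(algorithm_choices, human_choices):
    """Toy metric only: the fraction of scenarios in which the
    algorithm's triage choice matches a trusted human's. DARPA's actual
    scoring method is not described in this article."""
    a, h = np.asarray(algorithm_choices), np.asarray(human_choices)
    return float((a == h).mean())

# Over five hypothetical triage scenarios, the algorithm agrees with the
# trusted decision-maker four times out of five.
print(alignment_score([2, 0, 1, 2, 0], [2, 0, 1, 1, 0]))   # 0.8
```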

Human-aligned algorithms should be able to balance situational information with a preference for key decision-maker attributes. Evaluation will assess the willingness of humans to delegate difficult decisions to AI computers.

Related: The next 'new frontier' of artificial intelligence

Policy and practice will develop recommendations for how military leaders can update policies to take advantage of AI and machine learning in combat medical triage.

Companies interested should upload abstracts by 30 March 2022, and proposals by 17 May 2022 to the DARPA BAA website at https://baa.darpa.mil/.

Email questions or concerns to Matt Turek, the DARPA ITM program manager, at ITM@darpa.mil. More information is online at https://sam.gov/opp/baae2217401748dbaeb89a08044d6998/view.
