Media Search:



What Is Machine Learning? | PCMag

In December 2017, DeepMind, the research lab acquired by Google in 2014, introduced AlphaZero, an artificial intelligence program that could defeat world champions at several board games.

Interestingly, AlphaZero received zero instructions from humans on how to play the games (hence the name). Instead, it used machine learning, a branch of AI that develops its behavior through experience instead of explicit commands.

Within 24 hours, AlphaZero achieved superhuman performance in chess and defeated the previous world-champion chess program. Shortly after, AlphaZero's machine-learning algorithm also mastered Shogi (Japanese chess) and the Chinese board game Go, and it defeated its predecessor, AlphaGo, 100 to zero.

Machine learning has become popular in recent years and is helping computers solve problems previously thought to be the exclusive domain of human intelligence. And even though it's still far from the original vision of artificial intelligence, machine learning has gotten us much closer to the ultimate goal of creating thinking machines.

Traditional approaches to developing artificial intelligence involve meticulously coding all the rules and knowledge that define an AI agent's behavior. When creating rule-based AI, developers must write instructions that specify how the AI should behave in response to every possible situation. This rule-based approach, also known as good old-fashioned AI (GOFAI) or symbolic AI, tries to mimic the human mind's reasoning and knowledge representation functions.

A perfect example of symbolic AI is Stockfish, a top-ranking, open-source chess engine more than 10 years in the making. Hundreds of programmers and chess players have contributed to Stockfish and helped develop its logic by coding its rules; for example, what the AI should do when the opponent moves its knight from B1 to C3.

But rule-based AI often breaks when dealing with situations where the rules are too complex and implicit. Recognizing speech and objects in images, for instance, are advanced operations that can't be expressed in logical rules.

As opposed to symbolic AI, machine-learning AI models are developed not by writing rules but by gathering examples. For instance, to create a machine learning-based chess engine, a developer creates a base algorithm and then "trains" it with data from thousands of previously played chess games. By analyzing the data, the AI finds common patterns that define winning strategies, which it can use to defeat real opponents.

The more games the AI reviews, the better it becomes at predicting winning moves during play. This is why machine learning is defined as a program whose performance improves with experience.

Machine learning is applicable to many real-world tasks, including image classification, voice recognition, content recommendation, fraud detection, and natural language processing.

Depending on the problem they want to solve, developers prepare relevant data to build their machine-learning model. For instance, if they wanted to use machine learning to detect fraudulent bank transactions, developers would compile a list of existing transactions and label them with their outcome (fraudulent or valid). When they feed the data to the algorithm, it separates the fraudulent and valid transactions and finds the common characteristics within each of the two classes. The process of training models with annotated data is called "supervised learning" and is currently the dominant form of machine learning.
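As a toy illustration of this supervised workflow (not any bank's actual system), the sketch below trains a nearest-centroid classifier on a handful of invented, labeled transactions. The two features (amount, hour of day), the data, and the distance-based decision rule are all assumptions for the example:

```python
# Hypothetical sketch of supervised learning: a nearest-centroid classifier
# trained on labeled transactions. Features and data are invented.

def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def train(transactions):
    """transactions: list of (features, label), label 'fraud' or 'valid'."""
    by_label = {}
    for features, label in transactions:
        by_label.setdefault(label, []).append(features)
    # The "common characteristics" of each class, reduced to its centroid.
    return {label: centroid(rows) for label, rows in by_label.items()}

def predict(model, features):
    """Assign the label whose centroid is closest (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(model, key=lambda label: dist(model[label]))

labeled = [
    ([20.0, 14], "valid"), ([35.0, 10], "valid"), ([15.0, 18], "valid"),
    ([900.0, 3], "fraud"), ([1200.0, 4], "fraud"), ([850.0, 2], "fraud"),
]
model = train(labeled)
print(predict(model, [1000.0, 3]))  # → fraud
print(predict(model, [25.0, 12]))   # → valid
```

Real fraud-detection models use far richer features and algorithms, but the shape of the process is the same: labeled examples in, a decision boundary out.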

Many online repositories of labeled data for different tasks already exist. Some popular examples are ImageNet, an open-source dataset of more than 14 million labeled images, and MNIST, a dataset of 60,000 labeled handwritten digits. Machine-learning developers also use platforms such as Amazon's Mechanical Turk, an online, on-demand hiring hub for performing cognitive tasks such as labeling images and audio samples. And a growing sector of startups specialize in data annotation.

But not all problems require labeled data. Some machine-learning problems can be solved through "unsupervised learning," where you provide the AI model with raw data and let it figure out for itself which patterns are relevant.

A common use of unsupervised learning is anomaly detection. For instance, a machine-learning algorithm can train on the raw network-traffic data of an internet-connected device, say, a smart fridge. After training, the AI establishes a baseline for the device and can flag outlier behavior. If the device becomes infected with malware and starts communicating with malicious servers, the machine-learning model will be able to detect it, because the network traffic is different from the normal behavior observed during training.
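A minimal sketch of that idea, assuming a single invented metric (bytes per minute of traffic) and a simple statistical baseline, might look like this; production systems use far richer features and models:

```python
# Hypothetical unsupervised anomaly detection: learn a baseline from
# unlabeled "normal" traffic, then flag readings far from it.
import statistics

def fit_baseline(samples):
    """Learn the mean and standard deviation of normal traffic."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomaly(baseline, value, k=3.0):
    """Flag values more than k standard deviations from the learned mean."""
    mean, std = baseline
    return abs(value - mean) > k * std

# Invented bytes-per-minute readings from the device's normal operation:
normal_traffic = [110, 95, 102, 99, 105, 98, 101, 103, 97, 100]
baseline = fit_baseline(normal_traffic)
print(is_anomaly(baseline, 104))   # typical reading → False
print(is_anomaly(baseline, 1500))  # malware phoning home → True
```

Note that no labels were needed: the model learned what "normal" looks like from raw data alone.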

By now, you probably know that quality training data plays a huge role in the efficiency of machine learning models. But reinforcement learning is a specialized type of machine learning in which an AI develops its behavior without using previous data.

Reinforcement-learning models start with a clean slate. They're instructed only on their environment's basic rules and the task at hand. Through trial and error, they learn to optimize their actions for their goals.

DeepMind's AlphaZero is an interesting example of reinforcement learning. As opposed to other machine-learning models, which must see how humans play chess and learn from them, AlphaZero started only knowing the pieces' moves and the game's win conditions. After that, it played millions of matches against itself, starting with random actions and gradually developing behavioral patterns.
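The trial-and-error loop can be sketched with tabular Q-learning, a much simpler reinforcement-learning algorithm than AlphaZero's (which combines self-play with deep networks and tree search). The "game" here is a made-up five-cell corridor where the agent is rewarded only for reaching the rightmost cell:

```python
# Hedged sketch of reinforcement learning via tabular Q-learning.
# Environment, rewards, and hyperparameters are invented for illustration.
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left or step right

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

random.seed(0)
q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action index]
alpha, gamma, epsilon = 0.5, 0.9, 0.2      # learning rate, discount, exploration

for _ in range(500):                       # episodes of trial and error
    state, done = 0, False
    while not done:
        # Explore randomly sometimes; otherwise act greedily on current values.
        a = random.randrange(2) if random.random() < epsilon \
            else max(range(2), key=lambda i: q[state][i])
        nxt, reward, done = step(state, ACTIONS[a])
        # Update toward the reward plus the discounted best future value.
        q[state][a] += alpha * (reward + gamma * max(q[nxt]) - q[state][a])
        state = nxt

# The learned greedy policy walks right toward the goal from every cell.
print([max(range(2), key=lambda i: q[s][i]) for s in range(GOAL)])  # → [1, 1, 1, 1]
```

The agent starts with random actions and, purely from reward feedback, converges on a winning policy; AlphaZero does conceptually the same thing at vastly larger scale.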

Reinforcement learning is a hot area of research. It's the main technology used to develop AI models that can master complex games such as Dota 2 and StarCraft 2 and is also used to solve real-life problems such as managing data center resources and creating robotic hands that can handle objects with human-like dexterity.

Deep learning is another popular subset of machine learning. It uses artificial neural networks, software constructions that are roughly inspired by the biological structure of the human brain.

Neural networks excel at processing unstructured data such as images, video, audio, and long excerpts of text such as articles and research papers. Before deep learning, machine-learning experts had to put a lot of effort into extracting features from images and videos and would run their algorithms on top of that. Neural networks automatically detect those features without requiring much effort from human engineers.

Deep learning is behind many modern AI technologies such as driverless cars, advanced translation systems, and the facial-recognition tech in your iPhone X.

People often confuse machine learning with human-level artificial intelligence, and the marketing departments of some companies intentionally use the terms interchangeably. But while machine learning has taken great strides toward solving complex problems, it is still very far from creating the thinking machines envisioned by the pioneers of AI.

In addition to learning from experience, true intelligence requires reasoning, common sense, and abstract thinking, areas in which machine learning models perform very poorly.

For instance, while machine learning is good at complicated pattern-recognition tasks such as predicting breast cancer five years in advance, it struggles with simpler logic and reasoning tasks such as solving high-school math problems.

Machine learning's lack of reasoning power makes it bad at generalizing its knowledge. For instance, a machine-learning agent that can play Super Mario 3 like a pro won't dominate another platform game, such as Mega Man, or even another version of Super Mario. It would need to be trained from scratch.

Without the power to extract conceptual knowledge from experience, machine-learning models require tons of training data to perform. Unfortunately, many domains lack sufficient training data or don't have the funds to acquire more. Deep learning, which is now the prevalent form of machine learning, also suffers from an explainability problem: Neural networks work in complicated ways, and even their creators struggle to follow their decision-making processes. This makes it difficult to use the power of neural networks in settings where there's a legal requirement to explain AI decisions.

Fortunately, efforts are being made to overcome machine learning's limits. One notable example is a widespread initiative by DARPA, the Department of Defense's research arm, to create explainable AI models.

Other projects aim to reduce machine learning's over-reliance on annotated data and make the technology accessible to domains with limited training data. Researchers at IBM and MIT recently made inroads in the field by combining symbolic AI with neural networks. Hybrid AI models require less data for training and can provide step-by-step explanations of their decisions.

Whether the evolution of machine learning will eventually help us reach the ever-elusive goal of creating human-level AI remains to be seen. But what we know for sure is that thanks to advances in machine learning, the devices sitting on our desks and resting in our pockets are getting smarter every day.

More:
What Is Machine Learning? | PCMag

Reconstructing the Galactic merger history with machine learning – Astrobites

Title: Kraken reveals itself - the merger history of the Milky Way reconstructed with the E-MOSAICS simulations

Authors: J. M. Diederik Kruijssen, Joel L. Pfeffer, Melanie Chevance, Ana Bonaca, Sebastian Trujillo-Gomez, Nate Bastian, Marta Reina-Campos, Robert A. Crain, and Meghan E. Hughes

First Author's Institution: Astronomisches Rechen-Institut, Zentrum für Astronomie der Universität Heidelberg

Status: Published in MNRAS [open access]

Just like archaeologists can trace the migration and assimilation of people in past societies, astronomers can reconstruct the assembly history of the Galaxy that we live in. In standard galaxy formation theory, galaxies like our Milky Way formed through the hierarchical merging of many smaller galaxies. According to this picture, some of the stars and star clusters in our Galaxy were not originally born here, but are immigrants that were brought into the Milky Way when their parent galaxy was accreted. Galactic archaeologists are developing techniques to trace back the origin of these galactic immigrants and reconstruct properties of the accreted galaxies. One avenue is through the stars that were left behind in a stream (see this Astrobite), but today's authors study where the star clusters in our galaxy come from.

Globular clusters consist of hundreds of thousands of tightly bound stars, and they are both ancient (billions of years old) and stable. When a satellite galaxy is accreted into the Milky Way, its globular clusters are likely to survive and migrate as a whole. This makes globular clusters excellent fossil records, because they preserve the metallicity of the environment in which they formed and carry this signature wherever they travel.

Astronomers can identify accreted globular clusters based on their age and metallicity. Figure 1, taken from today's paper, shows the observed age-metallicity relation for Milky Way globular clusters. The globular clusters on the main progenitor branch are clearly separated from those accreted from various satellite galaxies. The main progenitor of the Milky Way contains all the native stars and globular clusters that were not from satellite galaxies.

Figure 1. The age-metallicity distribution of Galactic globular clusters. In all panels, black points indicate globular clusters that formed in the Main progenitor while colored diamonds indicate globular clusters from each accreted satellite galaxy. The vertical line represents the inferred accretion time. Reproduced from Fig 3 in the paper.

The data points in Figure 1 only contain the observed properties of globular clusters, and there is no obvious connection to the satellite accretion events. To bridge this gap, the authors of today's paper make use of galaxy formation simulations. The E-MOSAICS simulations follow the co-formation and co-evolution of galaxies and their globular clusters, providing the crucial link between accretion history and globular cluster properties.

The authors train an artificial neural network to infer the progenitor galaxy that brought in a group of globular clusters. Specifically, the input parameters are the medians and interquartile ranges (IQRs) of the globular clusters' orbital radii, eccentricities, ages, and metallicities, and the network predicts the accretion time and the progenitor galaxy's stellar mass. The resulting neural network is applied to the globular clusters shown in Figure 1 and gives the accretion times as outputs.
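As a concrete (and hypothetical) illustration of the feature-engineering step, the snippet below summarizes one invented group of clusters by the median and IQR of their metallicities; the actual study feeds such summaries for orbital radii, eccentricities, ages, and metallicities into a network trained on E-MOSAICS:

```python
# Hedged sketch of the input-feature step: median and interquartile range
# (IQR) of one globular cluster property. Values below are invented.
import statistics

def median_and_iqr(values):
    ordered = sorted(values)
    q1, q2, q3 = statistics.quantiles(ordered, n=4)  # quartile cut points
    return q2, q3 - q1

# One accreted group's metallicities [Fe/H] (made-up numbers):
feh = [-1.6, -1.4, -1.8, -1.5, -1.3, -1.7, -1.5]
med, iqr = median_and_iqr(feh)
print(round(med, 2), round(iqr, 2))  # → -1.5 0.3
```

Collapsing each property to two robust summary statistics keeps the network's input vector small and insensitive to outlier clusters.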

Figure 2 shows the formation history of the Milky Way reconstructed with the artificial neural network. This figure is called a galaxy merger tree because each branch (black and grey lines) represents one accreted satellite galaxy, and the branches are ordered by their accretion times. Kraken was the first galaxy to be accreted, followed by the progenitor of the Helmi streams, Sequoia and Gaia-Enceladus, and finally Sagittarius.

Figure 2. Galaxy merger tree of the Milky Way. The main progenitor is denoted by the trunk of the tree, coloured by stellar mass. Black lines indicate the five identified (and likely most massive) satellites, with the shaded areas visualizing the probability distributions of the accretion times. The coloured circles indicate the stellar masses of the satellite galaxies at the time of accretion. The annotations list the minimum number of GCs brought in by each satellite. From left to right, the six images along the top of the figure indicate the identified progenitors, i.e. Sagittarius, Sequoia, Kraken, the Milky Way's Main progenitor, the progenitor of the Helmi streams, and Gaia-Enceladus. Reproduced from Fig 9 in the paper.

The main progenitor of the Milky Way is the trunk of the tree, and it grows in stellar mass each time it accretes a new galaxy. The thickness of the lines indicates the mass ratio of the accreted galaxy versus the main progenitor. As you can imagine, the more massive a satellite is, the more damage it causes when it combines with the Milky Way. In a minor merger (defined by mass ratios smaller than 1:4), the satellite is small enough for the Milky Way to comfortably absorb; however, in major mergers (where the two galaxies have comparable mass), both galaxies will be significantly disturbed and the Milky Way disk can even be destroyed. Luckily, the Milky Way never experienced a major merger according to the authors of today's paper. Among the minor mergers, Kraken was the most significant merging event that the Milky Way experienced, since it has the highest mass ratio at the time of accretion.
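The 1:4 criterion stated above can be written as a one-line rule; the masses in the example are invented, not taken from the paper:

```python
# Minimal sketch of the minor/major merger criterion (mass ratio 1:4).
def merger_type(satellite_mass, main_mass, threshold=0.25):
    """Classify a merger by the satellite-to-main-progenitor mass ratio."""
    ratio = satellite_mass / main_mass
    return "major" if ratio >= threshold else "minor"

# e.g. a satellite a tenth of the main progenitor's mass at accretion:
print(merger_type(2e9, 2e10))    # → minor
print(merger_type(1.5e10, 2e10)) # → major
```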

The authors tally up the total contribution of stellar mass and globular clusters from the accreted satellites. They find that only a few percent of the stellar mass and about 35-50% of globular clusters in the Milky Way were accreted. The rest formed inside the Milky Way. They conclude that the Milky Way had an unusually quiet formation history.

Today's paper uses globular clusters and artificial neural networks to reconstruct a detailed accretion history of our Galaxy. No Indiana Jones required for galactic archaeology!

Astrobite edited by Roan Haggar

Featured image credit: Diederik Kruijssen

About Zili Shen: Hi! I am a Ph.D. student in Astronomy at Yale University. My research focuses on ultra-diffuse galaxies and their globular cluster populations. Since I came to Yale, I have worked on two "dark-matter-free" galaxies, NGC1052-DF2 and DF4. I have been coping with the pandemic and working from home by making sourdough bread and baking various cookies and cakes, reading books ranging from philosophy to virology, going on daily hikes or runs, and watching too many TV shows.

Read the rest here:
Reconstructing the Galactic merger history with machine learning - Astrobites

Democrats eye a creative approach to passing immigration reform – MSNBC

Congressional Democrats and the Biden White House have made no secret of their interest in passing a sweeping immigration reform package. Among the biggest hurdles, of course, is the same obstacle to passing nearly all legislation: Senate Republicans will try to block any reform bill, and coming up with a 60-vote supermajority is practically impossible.

But what if the Democratic majority could circumvent a GOP filibuster by using the budget reconciliation process -- the same method the party used to pass the COVID relief package?

In early April, House Speaker Nancy Pelosi (D-Calif.) suggested Dems are prepared to do exactly that. Two weeks later, a group of Hispanic lawmakers met privately with President Joe Biden, and after the discussion, Rep. Darren Soto (D-Fla.) told Politico that Biden told the group he generally "supports passing certain immigration reforms by reconciliation if we can't get the 10 Republican votes."

Last week, Sen. Patty Murray (D-Wash.), the #3 Democrat in the Senate leadership, raised a few eyebrows with a press release in which she said, "After years of working to reach agreement on a solution, it's clear to me we can't miss the opportunity to act in this critical moment. We need to look at every legislative path possible to get comprehensive immigration reform done -- including through reconciliation."

It's against this backdrop that the New York Times reported overnight that Senate Majority Leader Chuck Schumer (D-N.Y.) is "quietly considering" the procedural gambit.

Mr. Schumer has privately told members of the Congressional Hispanic Caucus in recent weeks that he is "actively exploring" whether it would be possible to attach a broad revision of immigration laws to President Biden's infrastructure plan and pass it through a process known as budget reconciliation, according to two people briefed on his comments.

It's worth emphasizing that this would likely be Plan B for Democratic leaders. Plan A is the ongoing negotiating process underway among a bipartisan group of 15 senators, exploring the possibility of a compromise agreement.

Such a deal appears unlikely. Indeed, the Times' report added that observers have watched the negotiations "drag on with little agreement in sight." There's no great mystery as to why.

Sen. John Cornyn (R-Texas), one of the 15 senators involved in the bipartisan talks, said, "Before we can do anything meaningful on immigration, we're going to have to deal with the current crisis at the border."

If this seems like hollow rhetoric, it's not your imagination. For much of the last two decades, conservative Republicans have said there's a "crisis" that needs to be resolved before GOP lawmakers will consider reform legislation. And every time border security is strengthened, those same Republicans insist it's not enough.

Indeed, let's not forget that GOP members promised then-President Barack Obama that they'd consider a comprehensive immigration solution if he vastly improved border security. The Democrat held up his end of the bargain; the Senate passed the "Gang of Eight" bill; but House Republicans ended up killing the reform effort anyway, offering nothing as an alternative. (See Chapter 6 of my book.)

The GOP position has a Zeno's paradox-like problem: There's no way to ever actually reach the point at which Republicans are satisfied that the "crisis" has been fully resolved. As Greg Sargent noted this morning, "Does anybody imagine there will come a point when Republicans will say, 'Okay, Biden's totally got the border under control now, so let's get serious about working with Democrats on legalizing a lot of immigrants'? Of course not."

But then there's an entirely different question to consider: Is it even procedurally possible to pursue immigration reform through the budget reconciliation process, which is supposed to be limited to matters of taxes and spending? I've been skeptical, but the Times' report included an important detail from 16 years ago that I'd forgotten about:

A team of immigration activists and researchers as well as congressional aides is exploring the question, digging into the best way to present their case to [Senate Parliamentarian Elizabeth MacDonough].... They have found past precedents, including one from 2005, in which changes to immigration policy were allowed as part of a budget-reconciliation package, and they are tallying up the budgetary effects of the immigration proposals which total in the tens of billions. Researchers have dredged up supportive quotes from Republicans from 2005, when they won signoff for including a measure to recapture unused visas for high-skilled workers in a reconciliation package.

There's no shortage of unanswered questions related to process, politics, and procedure, and it'll take a while before the answers come into focus. But for now, it's clear that Democratic leaders are committed to the effort, and the door to immigration reform is not yet closed. Watch this space.

See original here:
Democrats eye a creative approach to passing immigration reform - MSNBC

All The Machine Learning Libraries Open-Sourced By Facebook Ever – Analytics India Magazine

Today, corporations like Google, Facebook and Microsoft dominate the tools and deep learning frameworks that AI researchers use globally. Many of their open-source libraries are gaining popularity on GitHub, helping budding AI developers across the world build flexible and scalable machine learning models.

From conversational chatbots and self-driving cars to weather forecasting and recommendation systems, AI developers are experimenting with various neural network architectures, hyperparameters, and other features to fit the hardware constraints of edge platforms. The possibilities are endless. Some of the popular deep learning frameworks include Google's TensorFlow and Facebook's Caffe2, PyTorch, TorchCraft AI and Hydra.

According to Statista, global revenue from AI for business operations is expected to reach $10.8 billion by 2023, and the global natural language processing (NLP) market is expected to reach $43.3 billion by 2025. With the rise of AI adoption across businesses, the need for open-source libraries and architectures will only increase in the coming months.

Facebook AI Research (FAIR) is currently at the forefront of the AI race, launching state-of-the-art tools, libraries and frameworks that bolster machine learning and AI applications across the globe.

Source: Analytics India Magazine

Here are some of the latest open-source tools, libraries and architecture developed by Facebook:

PyTorch, alongside Caffe2 and Hydra, is the most widely used of Facebook's deep learning frameworks, helping researchers build flexible machine learning models.

PyTorch provides a Python package for high-level features like tensor computation (similar to NumPy) with strong GPU acceleration, and TorchScript for an easy transition between eager mode and graph mode. Its latest release adds graph-based execution, distributed training, mobile deployment and more.

Flashlight is an open-source machine learning library that lets users build AI/ML applications through a C++ API. Because it supports research directly in C++, Flashlight does not need external bindings to perform tasks such as threading, memory mapping, or interoperating with low-level hardware, making code integration fast, direct and straightforward.

Opacus is an open-source high-speed library for training PyTorch models with differential privacy (DP). The library is claimed to be more scalable than existing methods. It supports training with minimal code changes and has little impact on training performance. It also allows the researchers to track the privacy budget expended at any given moment.

PyTorch3D is a highly modular and optimised library that offers efficient, reusable components for 3D computer vision research with the PyTorch framework. It is designed to integrate smoothly with deep learning methods for predicting and manipulating 3D data. As a result, the library can be implemented using PyTorch tensors, handle mini-batches of heterogeneous data, and utilise GPUs for acceleration.

Detectron2 is a next-generation library that provides detection and segmentation algorithms. It is the successor of Detectron and maskrcnn-benchmark, and currently supports several computer vision research projects and applications, including implementations of Mask R-CNN, RetinaNet, Faster R-CNN, RPN and TensorMask.

Detectron is an open-source software system that implements object detection algorithms like Mask R-CNN. The software is written in Python and powered by the Caffe2 deep learning framework.

Detectron has enabled various research projects at Facebook, including feature pyramid networks for object detection, Mask R-CNN, non-local neural networks, detecting and recognising human-object interactions, learning to segment everything, data distillation (towards omni-supervised learning), focal loss for dense object detection, DensePose (dense human pose estimation in the wild), and others.

Prophet is an open-source forecasting tool released by Facebook's core data science team. It is a procedure for forecasting time series data based on an additive model, where non-linear trends are fit with yearly, weekly, and daily seasonality, plus holiday effects. The model works best with time series that have several seasons of historical data, such as weather records, economic indicators and patient health metrics.
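The additive idea can be sketched in a few lines; this is a hypothetical illustration of the model's structure, not Prophet's implementation, and all coefficients are invented:

```python
# Hedged sketch of an additive forecasting model: the prediction is a
# sum of a trend term, seasonal terms, and holiday effects.
import math

def forecast(t_days, holidays=frozenset()):
    trend = 100 + 0.1 * t_days                          # slow linear growth
    weekly = 5 * math.sin(2 * math.pi * t_days / 7)     # weekly cycle
    yearly = 20 * math.sin(2 * math.pi * t_days / 365)  # yearly cycle
    holiday = 30 if t_days in holidays else 0           # spike on holidays
    return trend + weekly + yearly + holiday

print(round(forecast(0), 1))  # → 100.0
print(round(forecast(10, holidays={10}), 1))
```

Prophet fits the trend, seasonal, and holiday components from historical data rather than using fixed coefficients like these.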

The code is available on CRAN and PyPI.

Classy Vision is a new end-to-end PyTorch-based framework for large-scale training of image and video classification models. Unlike other computer vision (CV) libraries, Classy Vision claims to offer flexibility for researchers.

Typically, most CV libraries lead to duplicative efforts and require users to migrate research between frameworks and relearn the minutiae of efficient distributed training and data loading. On the other hand, Facebook's PyTorch-based CV framework is claimed to offer a better solution for training at scale and deploying to production.

BoTorch is a library for Bayesian optimization built on the PyTorch framework. Bayesian optimization is a sequential design strategy for optimizing expensive black-box functions that makes no assumptions about their functional form.

BoTorch provides a modular and easily extensible interface for composing Bayesian optimization primitives such as probabilistic models, acquisition functions and optimizers. In addition, it enables seamless integration with deep and convolutional architectures in PyTorch.

fastText is an open-source library for efficient text classification and representation learning. It works on standard, generic hardware, and models can later be reduced in size to fit even on mobile devices.

Tensor Comprehensions (TC) is a fully functional C++ library that automatically synthesises high-performance machine learning kernels using Halide, ISL, NVRTC or LLVM. It can be easily integrated with Caffe2 and PyTorch, and has been designed to be highly portable and machine learning framework agnostic, requiring only a simple tensor library with memory allocation, offloading, and synchronisation capabilities.

Here is the original post:
All The Machine Learning Libraries Open-Sourced By Facebook Ever - Analytics India Magazine

Joe Biden urges US Congress to pass comprehensive immigration reform – Business Standard

US President Joe Biden has urged Congress to pass comprehensive immigration reform, asserting that immigrants have done so much for America during the pandemic, as they have throughout the country's history.

On day one of his presidency, Biden sent a comprehensive immigration bill to Congress that proposes major overhauls to the system, including granting legal status and a path to citizenship to tens of thousands of undocumented immigrants and other groups, and reducing the time that family members must wait outside the US for the much-sought green cards.

"Immigrants have done so much for America during the pandemic as they have throughout our history. The country supports immigration reform. Congress should act," Biden said in his maiden address to a joint session of the US Congress on Wednesday.

"Immigration has always been essential to America. Let's end our exhausting war over immigration. For more than 30 years, politicians have talked about immigration reform and done nothing about it. It's time to fix it," he said.

He said that on day one of his presidency, he kept his commitment and sent a comprehensive immigration bill to Congress.

"If you believe we need a secure border, pass it. If you believe in a pathway to citizenship, pass it. If you actually want to solve the problem, I have sent you a bill, now pass it," he said amidst applause from the lawmakers.

"We also have to get at the root of the problem of why people are fleeing to our southern border from Guatemala, Honduras, El Salvador. The violence. The corruption. The gangs. The political instability. Hunger. Hurricanes. Earthquakes. When I was Vice President, I focused on providing the help needed to address these root causes of migration," he said.

This, he said, helped keep people in their own countries instead of being forced to leave.

"Our plan worked. But the last administration shut it down. I'm restoring the programme and asked Vice President (Kamala) Harris to lead our diplomatic efforts. I have absolute confidence she will get the job done. Now, if Congress won't pass my plan, let's at least pass what we agree on," he said.

Biden said Congress needs to pass legislation this year to finally secure protection for the Dreamers, the young people who have only known America as their home.

He also called for permanent protections for immigrants on temporary protected status who come from countries beset by man-made and natural violence and disaster.

"As well as a pathway to citizenship for farmworkers who put food on our tables," he said.

New York immigrant rights advocates, led by the New York Immigration Coalition (NYIC), praised Biden's renewed commitment to a pathway to citizenship for Dreamers, Temporary Protected Status holders, and essential workers.

In a statement, the NYIC also doubled down on its call for a transformation of the country's immigration system, a promise of the Biden campaign.

FWD.us President Todd Schulte said that in his address Biden made clear the urgent need to provide millions of deserving immigrants with a desperately needed pathway to citizenship that will keep families across the country safe and together.

"Millions of Dreamers, TPS (Temporary Protected Status) holders, farmworkers, and other undocumented immigrants have been vital to our nation's continued health response and economic recovery from the COVID-19 pandemic.

"They have deep roots in our communities as our neighbours, colleagues and friends, and nearly 6 million US citizen children live with an undocumented family member. Undocumented people are essential to our nation in every sense of the word," he said.

Earlier in the day, a coalition of immigration advocacy groups announced a new USD 50 million campaign aimed at pressuring lawmakers from both parties to pass a pathway to citizenship.

The effort includes a USD 30 million commitment from the We Are Home campaign led by advocacy organisations, as well as a USD 20 million commitment from a handful of other immigration groups including FWD.us.

(Only the headline and picture of this report may have been reworked by the Business Standard staff; the rest of the content is auto-generated from a syndicated feed.)

Read this article:
Joe Biden urges US Congress to pass comprehensive immigration reform - Business Standard