Archive for the ‘Machine Learning’ Category

Reinforcement learning for the real world – TechTalks

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.

Labor- and data-efficiency remain two of the key challenges of artificial intelligence. In recent decades, researchers have proven that big data and machine learning algorithms reduce the need for providing AI systems with prior rules and knowledge. But machine learning, and more recently deep learning, have presented their own challenges, which require manual labor, albeit of a different nature.

Creating AI systems that can genuinely learn on their own with minimal human guidance remains a holy grail and a great challenge. According to Sergey Levine, assistant professor at the University of California, Berkeley, a promising direction of research for the AI community is self-supervised offline reinforcement learning.

This is a variation of the RL paradigm that is very close to how humans and animals learn to reuse previously acquired data and skills, and it can be a great boon for applying AI to real-world settings. In a paper titled Understanding the World Through Action and a talk at the NeurIPS 2021 conference, Levine explained how self-supervised learning objectives and offline RL can help create generalized AI systems that can be applied to various tasks.

One common argument in favor of machine learning algorithms is their ability to scale with the availability of data and compute resources. Decades of work on developing symbolic AI systems have produced limited results. These systems require human experts and engineers to manually provide the rules and knowledge that define the behavior of the AI system.

The problem is that in some applications the rules can be virtually limitless, while in others they can't be explicitly defined.

In contrast, machine learning models can derive their behavior from data, without the need for explicit rules and prior knowledge. Another advantage of machine learning is that it can glean its own solutions from its training data, which are often more accurate than knowledge engineered by humans.

But machine learning faces its own challenges. Most ML applications are based on supervised learning and require training data to be manually labeled by human annotators. Data annotation poses severe limits to the scaling of ML models.

More recently, researchers have been exploring unsupervised and self-supervised learning, ML paradigms that obviate the need for manual labels. These approaches have helped overcome the limits of machine learning in some applications, such as language modeling and medical imaging. But they're still faced with challenges that prevent their use in more general settings.

"Current methods for learning without human labels still require considerable human insight (which is often domain-specific!) to engineer self-supervised learning objectives that allow large models to acquire meaningful knowledge from unlabeled datasets," Levine writes.

Levine writes that the next objective should be to create AI systems that don't require manual labeling or the manual design of self-supervised objectives. These models should be able to distill a deep and meaningful understanding of the world and perform downstream tasks with robustness, generalization, and even a degree of common sense.

Reinforcement learning is inspired by intelligent behavior in animals and humans. Reinforcement learning pioneer Richard Sutton describes RL as the first computational theory of intelligence. An RL agent develops its behavior by interacting with its environment, weighing the punishments and rewards of its actions, and developing policies that maximize rewards.
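The reward-maximization loop described above can be made concrete with tabular Q-learning, one of the simplest RL algorithms. The sketch below is purely illustrative; the environment interface, state/action counts, and hyperparameters are assumptions, not from any system discussed in this article.

```python
import random

def q_learning(env_step, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1):
    """Minimal tabular Q-learning.

    env_step(state, action) -> (next_state, reward, done)
    Returns a table q[state][action] of estimated long-term rewards.
    """
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best-known action,
            # occasionally explore a random one.
            if random.random() < epsilon:
                action = random.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: q[state][a])
            next_state, reward, done = env_step(state, action)
            # Update toward the reward plus the discounted best future value
            # (no bootstrapping past a terminal state).
            target = reward if done else reward + gamma * max(q[next_state])
            q[state][action] += alpha * (target - q[state][action])
            state = next_state
    return q
```

On a toy chain environment where moving right eventually earns a reward, the learned table prefers "right" in every state, which is the policy-from-rewards behavior the paragraph describes.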

RL, and more recently deep RL, have proven to be particularly effective at solving complicated problems such as playing games and training robots. And there's reason to believe reinforcement learning can overcome the limits of current ML systems.

But before it does, RL must overcome its own set of challenges that limit its use in real-world settings.

"We could think of modern RL research as consisting of three threads: (1) getting good results in simulated benchmarks (e.g., video games); (2) using simulation + transfer; (3) running RL in the real world," Levine told TechTalks. "I believe that ultimately (3) is the most important thing, because that's the most promising approach to solving problems that we can't solve today."

Games are simple environments. Board games such as chess and go are closed worlds with deterministic environments. Even games such as StarCraft and Dota, which are played in real time and have a near-unlimited number of states, are much simpler than the real world. Their rules don't change. This is partly why game-playing AI systems have found very few applications in the real world.

On the other hand, physics simulators have seen tremendous advances in recent years. One popular method in fields such as robotics and self-driving cars has been to train reinforcement learning models in simulated environments and then finetune the models with real-world experience. But as Levine explained, this approach is limited too, because the domains where we most need learning, the ones where humans far outperform machines, are also the ones that are hardest to simulate.

"This approach is only effective at addressing tasks that can be simulated, which is bottlenecked by our ability to create lifelike simulated analogues of the real world and to anticipate all the possible situations that an agent might encounter in reality," Levine said.

"One of the biggest challenges we encounter when we try to do real-world RL is generalization," Levine said.

For example, in 2016, Levine was part of a team that constructed an arm farm at Google with 14 robots all learning concurrently from their shared experience. They collected more than half a million grasp attempts, and it was possible to learn effective grasping policies in this way.

"But we can't repeat this process for every single task we want robots to learn with RL," he says. "Therefore, we need more general-purpose approaches, where a single ever-growing dataset is used as the basis for a general understanding of the world on which more specific skills can be built."

In his paper, Levine points to two key obstacles in reinforcement learning. First, RL systems require manually defined reward functions or goals before they can learn the behaviors that accomplish those goals. And second, reinforcement learning requires online experience and is not data-driven, which makes it hard to train RL systems on large datasets. Most recent accomplishments in RL have relied on engineers at very wealthy tech companies using massive compute resources to generate immense amounts of experience, instead of reusing available data.

Therefore, RL systems need solutions that can learn from past experience and repurpose their learnings in more generalized ways. Moreover, they should be able to handle the continuity of the real world. Unlike in simulated environments, you can't reset the real world and start everything from scratch. You need learning systems that can quickly adapt to the constant and unpredictable changes in their environment.

In his NeurIPS talk, Levine compares real-world RL to the story of Robinson Crusoe, a man stranded on an island who learns to deal with unknown situations through inventiveness and creativity, using his knowledge of the world and continued exploration of his new habitat.

"RL systems in the real world have to deal with a lifelong learning problem, evaluate objectives and performance based entirely on realistic sensing without access to privileged information, and must deal with real-world constraints, including safety," Levine said. "These are all things that are typically abstracted away in widely used RL benchmark tasks and video game environments."

However, RL does work in more practical real-world settings, Levine says. For example, in 2018, he and his colleagues built an RL-based robotic grasping system that attained state-of-the-art results using raw sensory perception. In contrast to static learning behaviors that choose a grasp point and then execute the desired grasp, their method had the robot continuously update its grasp strategy based on the most recent observations to optimize long-horizon grasp success.

"To my knowledge, this is still the best existing system for grasping from monocular RGB images," Levine said. "But this sort of thing requires algorithms that are somewhat different from those that perform best in simulated video game settings: it requires algorithms that are adept at utilizing and reusing previously collected data, algorithms that can train large models that generalize, and algorithms that can support large-scale real-world data collection."

Levine's reinforcement learning solution includes two key components: unsupervised/self-supervised learning and offline learning.

In his paper, Levine describes self-supervised reinforcement learning as a system that can learn behaviors that "control the world in meaningful ways" and that provides "some mechanism to learn to control [the world] in as many ways as possible."

Basically, this means that instead of being optimized for a single goal, the RL agent should be able to achieve many different goals by computing counterfactuals, learning causal models, and obtaining a deep understanding of how actions affect its environment in the long term.
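One common way to pursue "many different goals" without hand-designed rewards is goal-conditioned RL with hindsight relabeling: any state a trajectory actually reached can be treated, after the fact, as a goal that was successfully achieved. The sketch below is a minimal illustration of that relabeling step; the transition format is an assumption for illustration, not taken from Levine's paper.

```python
def relabel_with_hindsight(trajectory):
    """Turn one trajectory into goal-conditioned training data.

    trajectory: list of (state, action, next_state) tuples.
    Returns (state, action, goal, reward) tuples, treating the
    trajectory's own final state as the goal that was achieved.
    """
    achieved_goal = trajectory[-1][2]  # the last next_state reached
    relabeled = []
    for state, action, next_state in trajectory:
        # Sparse self-supervised reward: 1 when this transition lands
        # on the goal the trajectory actually achieved, else 0.
        reward = 1.0 if next_state == achieved_goal else 0.0
        relabeled.append((state, action, achieved_goal, reward))
    return relabeled
```

Every trajectory thus becomes useful supervision for reaching the states it visited, regardless of what the agent was originally trying to do, which is the sense in which the agent learns to "control the world in as many ways as possible."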

However, creating self-supervised RL models that can solve various goals would still require a massive amount of experience. To address this challenge, Levine proposes offline reinforcement learning, which makes it possible for models to continue learning from previously collected data without the need for continued online experience.

"Offline RL can make it possible to apply self-supervised or unsupervised RL methods even in settings where online collection is infeasible, and such methods can serve as one of the most powerful tools for incorporating large and diverse datasets into self-supervised RL," he writes.
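The core mechanical difference from online RL can be sketched in a few lines: value estimates are fitted by sweeping over a fixed log of transitions rather than by collecting fresh experience. This naive sketch deliberately omits the distribution-shift corrections (e.g., conservative value penalties) that practical offline RL methods add, and the dataset format is an assumption for illustration.

```python
def offline_q_learning(dataset, n_states, n_actions,
                       epochs=200, alpha=0.1, gamma=0.99):
    """Fit a Q-table from a fixed dataset of logged transitions.

    dataset: list of (state, action, reward, next_state, done) tuples
    collected previously; no new environment interaction happens here.
    """
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(epochs):
        for state, action, reward, next_state, done in dataset:
            # Same Q-learning target as the online case, but replayed
            # from the static log instead of fresh rollouts.
            target = reward if done else reward + gamma * max(q[next_state])
            q[state][action] += alpha * (target - q[state][action])
    return q
```

The same logged dataset can be replayed indefinitely, which is exactly the data-reuse property Levine argues real-world RL needs.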

The combination of self-supervised and offline RL can help create agents that can create building blocks for learning new tasks and continue learning with little need for new data.

This is very similar to how we learn in the real world. For example, when you want to learn basketball, you use basic skills you learned in the past such as walking, running, jumping, handling objects, etc. You use these capabilities to develop new skills such as dribbling, crossovers, jump shots, free throws, layups, straight and bounce passes, eurosteps, dunks (if you're tall enough), etc. These skills build on each other and help you reach the bigger goal, which is to outscore your opponent. At the same time, you can learn from offline data by reflecting on your past experience and thinking about counterfactuals (e.g., what would have happened if you had passed to an open teammate instead of taking a contested shot). You can also learn by processing other data such as videos of yourself and your opponents. In fact, on-court experience is just part of your continuous learning.

In a paper, Yevgen Chebotar, one of Levine's colleagues, shows how self-supervised offline RL can learn policies for fairly general robotic manipulation skills, directly reusing data collected for another project.

"This system was able to reach a variety of user-specified goals, and also act as a general-purpose pretraining procedure (a kind of BERT for robotics) for other kinds of tasks specified with conventional reward functions," Levine said.

One of the great benefits of offline and self-supervised RL is learning from real-world data instead of simulated environments.

"Basically, it comes down to this question: is it easier to create a brain, or is it easier to create the universe? I think it's easier to create a brain, because it is part of the universe," he said.

This is, in fact, one of the great challenges engineers face when creating simulated environments. For example, Levine says, effective simulation for autonomous driving requires simulating other drivers, which requires having an autonomous driving system, which requires simulating other drivers, which requires having an autonomous driving system, etc.

"Ultimately, learning from real data will be more effective because it will simply be much easier and more scalable, just as we've seen in supervised learning domains in computer vision and NLP, where no one worries about using simulation," he said. "My perspective is that we should figure out how to do RL in a scalable and general-purpose way using real data, and this will spare us from having to expend inordinate amounts of effort building simulators."

See the article here:
Reinforcement learning for the real world - TechTalks

Artificial Intelligence and Sophisticated Machine Learning Techniques are Being Used to Develop Pathogenesi… – Physician’s Weekly

Most scientific areas now use big data analysis to extract knowledge from complicated and massive databases. This method is now utilized in medicine to investigate large groups of individuals. This review examined how artificial intelligence and sophisticated machine learning approaches have been employed to investigate pathogenesis-based therapy in pSS. It also estimated how trends in statistical techniques, cohort sizes, and the number of publications evolved over this time span. A thorough assessment of the literature over the last decade collected all papers reporting on the application of sophisticated statistical analysis in the study of systemic autoimmune disorders (SADs); to accomplish this job, an automatic bibliography screening approach was devised. In all, 44,077 abstracts and 1,017 publications were reviewed. The mean number of chosen articles each year was 101.0 (S.D. 19.16), and it climbed dramatically with time (from 74 articles in 2008 to 138 in 2017). Only 12 of them focused on pSS, and none on the topic of pathogenesis-based therapy. To summarize, whereas medicine is gradually entering the era of big data analysis and artificial intelligence, these techniques are not yet being utilized to characterize pSS-specific pathogenesis-based treatment. Nonetheless, big multicenter studies using advanced algorithmic methods on large cohorts of SAD patients are studying this feature.

Reference: www.tandfonline.com/doi/full/10.1080/21645515.2018.1475872

See the original post:
Artificial Intelligence and Sophisticated Machine Learning Techniques are Being Used to Develop Pathogenesi... - Physician's Weekly

Machine learning implemented by 68 percent of organizations – BetaNews

New research shows that 68 percent of chief technical officers (CTOs) have implemented machine learning at their company.

What's more, the study, from software development company STX Next, reveals that 55 percent of businesses now employ at least one team member dedicated to AI/ML solutions, although only 15 percent have their own separate AI division.

The findings come from STX Next's 2021 Global CTO Survey, which gathers insights from 500 global CTOs about their organisation's tech stack and what they're looking to add to it in the future. It shows that 72 percent of respondents identify machine learning as the most likely technology to come to prominence in the next two to four years, with 57 percent predicting the same for cloud computing.

In addition, 25 percent of CTOs report that they've implemented natural language processing, with 22 percent implementing pattern recognition and 21 percent applying deep learning technologies. 87 percent of businesses employ up to five people in a dedicated AI, machine learning or data science capacity.

Łukasz Grzybowski, head of machine learning and data engineering at STX Next, says:

The implementation of AI and its subsets in many companies is still in its early stages, as evidenced by the prevalence of small AI teams.

It's unsurprising to see machine learning as a definite leader when it comes to future technologies as its applications are becoming more widespread every day. What's less obvious is the skills that people will need to take full advantage of its growth and face the challenges that will arise alongside it. It's important that CTOs and other leaders are wise to these challenges, and are willing to take the steps to increase their AI expertise in order to maintain their innovative edge.

Deep learning is a good example of where there is plenty of room for progress to be made. It is one of the fastest developing areas of AI, in particular when it comes to its application in natural language processing, natural language understanding, chatbots, and computer vision. Many innovative companies are trying to use deep learning to process unstructured data such as images, sounds, and text.

However, AI is still most commonly used to process structured data, which is evidenced by the high popularity of classical machine learning methods such as linear or logistic regression and decision trees.
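As a point of reference, the classical methods mentioned above are simple enough to sketch from scratch. Below is a minimal logistic regression trained with plain gradient descent on toy structured data; the dataset, learning rate, and epoch count are invented for illustration.

```python
import math

def train_logistic(X, y, lr=0.5, epochs=1000):
    """Fit logistic regression weights via per-sample gradient descent.

    X: list of feature rows (floats); y: list of 0/1 labels.
    """
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))       # sigmoid probability
            err = p - yi                          # gradient of the log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """Classify by the sign of the linear score."""
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if z > 0 else 0
```

On a small linearly separable table the model recovers the labeling rule exactly, which is why such methods remain the workhorse for structured data.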

The full report is available from the STX Next site.

Image credit: Jirsak/depositphotos.com

See the article here:
Machine learning implemented by 68 percent of organizations - BetaNews

How Machine Learning is Impacting the Finance Industry – BBN Times

Machine learning is streamlining and optimizing processes ranging from credit decisions to quantitative trading and financial risk management.

This exciting technology has the potential to transform financial services business models and markets for trading, credit and blockchain-based finance, reduce friction and enhance product offerings.

Machine learning is a subset of artificial intelligence that utilizes advanced statistical techniques to enable computing systems to improve at tasks with experience over time. Voice assistants like Amazon's Alexa and Apple's Siri improve every year thanks to constant use by consumers, coupled with the machine learning that takes place in the background.

Machine learning has grown substantially within the finance industry, enabled by the abundance of available data and the increase in the affordability of computing capacity.

The technology is increasingly deployed by financial services organizations in the following areas:

Machine learning in finance is creating a huge impact; let's take a look at how.

Gone are the days when financial services only meant saving money in the bank or taking a loan from it. Machine learning expands the gamut of financial services by means of what are called consumer financial services. Consumer financial services keep consumers and their unique demands at the core of highly optimized offerings. Machine learning makes it possible to provide consumers with a personal financial concierge that automatically helps you settle on a suitable style of spending, saving, and investing based on your personal habits and goals. With machine learning in finance, it's possible to create intelligent products that can learn from your financial data, determine what's working for you and what's not, and help you track your financial activities better.

This is something we all must have experienced and would, therefore, agree with. Machine learning in finance has automated processes and drastically reduced the cost of serving customers. While machine learning has, on one hand, reduced the cost of financial services, on the other, it has made them extremely convenient to access. Through various digital servicing channels, machine learning is proving effective in attracting the large section of the population that previously found financial services cumbersome, expensive, and time-consuming.

Machine learning in finance is opening up new avenues for banking and insurance leaders to seek advice. No longer are financial experts limited to human opinions when making forecasts or recommendations in the field of finance. With machine learning in finance, these leaders can now ask machines questions that are pertinent to their business, and these machines can, in turn, analyze data and help them make data-driven management decisions. As far as consumers are concerned, they can have their financial portfolios managed at essentially no management fee and with high efficiency, as opposed to using the services of a traditional advisor who may charge around 1% of their investments.
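To see why that roughly 1% advisory fee matters, it helps to compound it over time. The sketch below compares portfolio growth with and without a 1% annual management fee; the 6% gross return, 30-year horizon, and starting amount are hypothetical assumptions, not figures from the article.

```python
def final_value(principal, gross_return, annual_fee, years):
    """Compound a portfolio yearly, deducting a percentage fee each year."""
    value = principal
    for _ in range(years):
        value *= (1 + gross_return) * (1 - annual_fee)
    return value

# Hypothetical comparison: $100,000 at an assumed 6% gross return for 30 years.
with_fee = final_value(100_000, 0.06, 0.01, 30)
no_fee = final_value(100_000, 0.06, 0.00, 30)
```

Under these assumed numbers the fee drag compounds into a six-figure difference over three decades, which is the cost gap low-fee automated management targets.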

With machine learning, it is possible to simulate umpteen situations in which a fraud or cyber crime may occur. Machine learning in finance therefore follows a proactive approach to making the financial services environment safe and breach-proof. Unlike before, designers of a financial service system do not need to wait for an incident of fraud to be detected before securing the system. Machine learning is helping the field of finance innovate freely by securing its products and services through a continuous understanding of human psychology. Besides, machine learning in finance also helps maintain strict regulatory oversight. Machine learning ensures that all policies, regulations, and security measures are sincerely followed while designing and delivering any financial service.
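A toy version of this proactive approach is to screen transactions as anomalies. The sketch below flags amounts that deviate far from a customer's historical mean using a z-score rule; the threshold and data are invented for illustration, and production fraud systems use far richer features and models.

```python
import statistics

def flag_suspicious(history, new_amounts, z_threshold=3.0):
    """Return the new transaction amounts whose z-score versus the
    customer's transaction history exceeds the threshold."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [amt for amt in new_amounts
            if abs(amt - mean) / stdev > z_threshold]
```

For a customer whose purchases cluster around $20-$26, a sudden $500 charge is flagged while a $24 charge passes; the same pattern, scaled up with learned features, underlies proactive fraud screening.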

Critical decisions in fields like finance cannot afford to be marred by the inaccuracy of human decisions. Machine learning in finance implies thorough research, understanding, and learning over long periods of time and vast volumes of data. Machine learning introduces automation in areas that require high degrees of incisiveness, thereby safeguarding the trust of consumers.

Machine learning is all about continuous learning and re-learning of patterns, data, and developments in the financial world.

It gives financial organizations more flexibility to build upon their current systems, products and services.

Successful banking-related chatbot interactions will grow 3,150 percent between 2019 and 2023.

826 million hours will be saved by banks through chatbot interactions in 2023.

79% of successful chatbot interactions will be through mobile banking apps in 2023.

Read more here:
How Machine Learning is Impacting the Finance Industry - BBN Times

Email Security Market: Rise in adoption of artificial intelligence and machine learning by large enterprises is estimated to drive market – Digital…

Email security is a set of techniques for securely transferring and accessing sensitive information over email, protecting it against unauthorized access, loss, and compromise. Increasing digitization and the adoption of cloud email services across industries, which lead companies to reorganize their email security architecture, are expected to fuel the adoption of email security solutions.

Adoption of email security solutions in enterprises eliminates the need for expensive security solution providers, improves phishing detection, and provides a good customer experience. Adoption is increasing consistently to reduce the workload of the IT department and to minimize manual management of email security in order to block threats. This factor is expected to drive the email security market during the forecast period.

Increase in investment in R&D activities and the high rate of adoption of cloud-based technology to store and secure the large amounts of data generated by governments and various industries are projected to drive the market during the forecast period. Rise in adoption of artificial intelligence (AI) and machine learning (ML) by large enterprises and SMEs to provide a better security experience to customers is estimated to boost the demand for email security during the forecast period.

Request Brochure of Report https://www.transparencymarketresearch.com/sample/sample.php?flag=B&rep_id=84147

High implementation cost of email security solutions restrains the market. Lack of awareness about email security solutions among enterprises further hinders the email security market. Increase in adoption of email protection solutions to fulfil the requirements of managed security providers (MSPs) creates significant opportunities for the email security market.

Impact of COVID-19 on the Global Email Security Market

North America to Hold Major Share of Global Email Security Market

Global Email Security Market: Competition Landscape

PreBook Our Premium Research Report at https://www.transparencymarketresearch.com/checkout.php?rep_id=84147&ltype=S

Key Players Operating in Global Email Security Market Include:

More Trending Reports by Transparency Market Research https://www.prnewswire.co.uk/news-releases/cyber-security-consulting-market-to-surpass-us-28-22-bn-by-2031-tmr-study-843678376.html

About Us:

Transparency Market Research is a global market intelligence company providing global business information reports and services. Our exclusive blend of quantitative forecasting and trends analysis provides forward-looking insight for thousands of decision makers. Our experienced team of analysts, researchers, and consultants uses proprietary data sources and various tools and techniques to gather and analyse information. Now avail flexible Research Subscriptions, and access research in multiple formats through downloadable databooks, infographics, charts, interactive playbooks for data visualization, and full reports through MarketNgage, the unified market intelligence engine. Sign up for a 7-day free trial!

Contact

Transparency Market Research,

90 State Street, Suite 700,

Albany, NY 12207

Tel: +1-518-618-1030

USA & Canada Toll Free: 866-552-3453

Email: sales@transparencymarketresearch.com

Website: https://www.transparencymarketresearch.com/

Read more:
Email Security Market : Rise in adoption of artificial intelligence and machine learning by large enterprises is estimated to drive market - Digital...