Archive for the ‘Machine Learning’ Category

Machine Learning Tutorial for Beginners – Guru99

What is Machine Learning?

Machine learning is a system that can learn from examples through self-improvement, without being explicitly coded by a programmer. The breakthrough comes with the idea that a machine can learn on its own from data (i.e., examples) to produce accurate results.

Machine learning combines data with statistical tools to predict an output. This output is then used by businesses to derive actionable insights. Machine learning is closely related to data mining and Bayesian predictive modeling. The machine receives data as input and uses an algorithm to formulate answers.

A typical machine learning task is to provide a recommendation. For those who have a Netflix account, all recommendations of movies or series are based on the user's historical data. Tech companies use unsupervised learning to improve the user experience with personalized recommendations.

Machine learning is also used for a variety of tasks such as fraud detection, predictive maintenance, portfolio optimization, task automation, and so on.


Traditional programming differs significantly from machine learning. In traditional programming, a programmer codes all the rules in consultation with an expert in the industry for which the software is being developed. Each rule is based on a logical foundation; the machine executes an output following the logical statements. As the system grows more complex, more rules need to be written, and it can quickly become unsustainable to maintain.

Machine learning is supposed to overcome this issue. The machine learns how the input and output data are correlated and writes a rule of its own. The programmers do not need to write new rules each time there is new data. The algorithms adapt in response to new data and experience to improve efficacy over time.

Machine learning is the brain where all the learning takes place. The way a machine learns is similar to the way a human being learns. Humans learn from experience: the more we know, the more easily we can predict. By analogy, when we face an unknown situation, the likelihood of success is lower than in a known situation. Machines are trained the same way. To make an accurate prediction, the machine sees examples. When we give the machine a similar example, it can figure out the outcome. However, like a human, if it is fed a previously unseen example, the machine has difficulty predicting.

The core objectives of machine learning are learning and inference. First of all, the machine learns through the discovery of patterns. This discovery is made thanks to the data. One crucial part of the data scientist's job is to choose carefully which data to provide to the machine. The list of attributes used to solve a problem is called a feature vector. You can think of a feature vector as a subset of the data that is used to tackle the problem.

The machine uses algorithms to simplify reality and transform this discovery into a model. Therefore, the learning stage is used to describe the data and summarize it into a model.

For instance, imagine the machine is trying to understand the relationship between an individual's wage and the likelihood of going to a high-end restaurant. If the machine finds a positive relationship between wage and going to a high-end restaurant, that relationship is the model.

When the model is built, it is possible to test how well it performs on never-seen-before data. The new data are transformed into a feature vector, passed through the model, and a prediction is returned. This is the beautiful part of machine learning: there is no need to update the rules or retrain the model. You can use a previously trained model to make inferences on new data.

The life cycle of a machine learning program is straightforward and can be summarized in the following points: define a question, collect and prepare the data, train the algorithm, test it, collect feedback, refine the algorithm, and repeat until the results are satisfactory, then use the model to make predictions.

Once the algorithm gets good at drawing the right conclusions, it applies that knowledge to new sets of data.

Machine learning can be grouped into two broad categories of learning tasks: supervised and unsupervised, with many algorithms in each.

In supervised learning, an algorithm uses training data and feedback from humans to learn the relationship between given inputs and a given output. For instance, a practitioner can use marketing expenses and weather forecasts as input data to predict sales of cans.

You can use supervised learning when the output is known for the training data; the algorithm then predicts the output for new data.

There are two categories of supervised learning:

Imagine you want to predict the gender of a customer for a commercial. You would start by gathering data on height, weight, job, salary, purchasing basket, etc. from your customer database. You know the gender of each of your customers; it can only be male or female. The objective of the classifier is to assign a probability of being male or female (i.e., the label) based on the information (i.e., the features you have collected). Once the model has learned to recognize male or female, you can use new data to make a prediction. For instance, suppose you just got new information from an unknown customer and want to know whether it is a male or a female. If the classifier predicts male = 70%, the algorithm is 70% sure that this customer is a male and 30% sure it is a female.

The label can have two or more classes. The example above has only two classes, but if a classifier needs to predict objects, it can have dozens of classes (e.g., glass, table, shoes; each object represents a class).
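As a minimal sketch of such a binary classifier, the snippet below fits scikit-learn's LogisticRegression on a handful of made-up customer features (the height, weight, and salary values are invented for illustration, not data from the tutorial):

```python
# Minimal sketch of a binary classifier, assuming made-up customer features
# (height in cm, weight in kg, salary in k$). Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([
    [180, 85, 60], [175, 80, 55], [165, 60, 50],
    [160, 55, 65], [170, 75, 45], [158, 52, 70],
])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = male, 0 = female (the labels)

clf = LogisticRegression(max_iter=1000).fit(X, y)

new_customer = np.array([[172, 70, 58]])
proba = clf.predict_proba(new_customer)[0]
print(f"female: {proba[0]:.0%}, male: {proba[1]:.0%}")
```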

When the output is a continuous value, the task is a regression. For instance, a financial analyst may need to forecast the value of a stock based on a range of features such as equity, previous stock performance, and macroeconomic indices. The system is trained to estimate the price of the stock with the lowest possible error.
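A minimal regression sketch along the same lines, fitting scikit-learn's LinearRegression on synthetic numbers (all feature values and prices below are invented for illustration):

```python
# Minimal regression sketch: predict a continuous value (e.g., a stock price)
# from a few numeric features. All numbers are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

# columns: equity ratio, previous close, macroeconomic index
X = np.array([[1.2, 101.0, 0.5],
              [1.5, 103.5, 0.6],
              [1.1,  99.0, 0.4],
              [1.8, 107.0, 0.7]])
y = np.array([102.0, 105.0, 100.5, 109.0])  # next-day price

model = LinearRegression().fit(X, y)
print("training MAE:", mean_absolute_error(y, model.predict(X)))
print("prediction for new features:", model.predict([[1.4, 104.0, 0.55]])[0])
```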

In unsupervised learning, an algorithm explores input data without being given an explicit output variable (e.g., it explores customer demographic data to identify patterns).

You can use it when you do not know how to classify the data and you want the algorithm to find patterns and classify the data for you. Some common unsupervised learning algorithms are listed below.

K-means clustering (clustering): Puts data into a chosen number of groups (k), each containing data with similar characteristics (as determined by the model, not in advance by humans).

Gaussian mixture model (clustering): A generalization of k-means clustering that provides more flexibility in the size and shape of the groups (clusters).

Hierarchical clustering (clustering): Splits clusters along a hierarchical tree to form a classification system; can be used, for example, to cluster loyalty-card customers.

Recommender system (clustering): Helps define the relevant data for making a recommendation.

PCA/t-SNE (dimension reduction): Mostly used to decrease the dimensionality of the data; the algorithms reduce the number of features to 3 or 4 components with the highest variance.
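As a minimal sketch of the first of these, the following runs scikit-learn's KMeans on a small synthetic data set (the points and the choice of k=2 are assumptions for illustration):

```python
# Minimal k-means sketch: group unlabeled points into k clusters.
# The data points and k are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1.0, 2.0], [1.2, 1.8], [0.8, 2.1],   # one blob
              [8.0, 8.5], [8.2, 8.0], [7.9, 8.3]])  # another blob

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster labels:", kmeans.labels_)
print("cluster centers:", kmeans.cluster_centers_)
```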

There are plenty of machine learning algorithms. The choice of the algorithm is based on the objective.

In the example below, the task is to predict the type of flower among three varieties. The predictions are based on the length and the width of the petal. The picture in the original article depicts the results of ten different algorithms; the panel on the top left is the dataset itself, with the data classified into three categories: red, light blue, and dark blue. There are some groupings: in the second panel, everything in the upper left belongs to the red category, the middle is a mixture of uncertainty and light blue, and the bottom corresponds to the dark blue category. The other panels show how the different algorithms try to classify the data.
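A minimal sketch of this kind of comparison, training a few scikit-learn classifiers on the iris petal measurements (the specific classifiers chosen here are an assumption for illustration; the article's picture compares ten):

```python
# Minimal sketch: compare a few classifiers on iris petal length/width.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
X = iris.data[:, 2:4]   # petal length and petal width only
y = iris.target         # three flower varieties
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for clf in (LogisticRegression(max_iter=500),
            DecisionTreeClassifier(max_depth=3),
            KNeighborsClassifier(n_neighbors=5)):
    accuracy = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{clf.__class__.__name__}: {accuracy:.2f}")
```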

The primary challenge of machine learning is the lack of data or the lack of diversity in the dataset. A machine cannot learn if no data is available, and a dataset with little diversity gives the machine a hard time: a machine needs heterogeneity to learn meaningful insights. It is rare that an algorithm can extract information when there are no or few variations. It is recommended to have at least 20 observations per group to help the machine learn. Too little data, or too little variation, leads to poor evaluation and prediction.

Machine learning is used both for augmentation (assisting humans in their day-to-day tasks) and for automation (having machines act on their own), and it is applied across many sectors, including the finance industry, government organizations, the healthcare industry, and marketing.

Example of application of Machine Learning in Supply Chain

Machine learning gives terrific results for visual pattern recognition, opening up many potential applications in physical inspection and maintenance across the entire supply chain network.

Unsupervised learning can quickly search for comparable patterns in a diverse dataset. In turn, the machine can perform quality inspection throughout the logistics hub, identifying shipments with damage and wear.

For instance, IBM's Watson platform can determine shipping container damage. Watson combines visual and systems-based data to track, report and make recommendations in real-time.

In past years, stock managers relied extensively on basic methods to evaluate and forecast inventory. By combining big data and machine learning, better forecasting techniques have been implemented (an improvement of 20 to 30% over traditional forecasting tools). In terms of sales, this means an increase of 2 to 3% due to the potential reduction in inventory costs.

Example of Machine Learning: the Google Car

For example, everybody knows the Google car. The car is covered with lasers on the roof, which tell it where it is relative to the surrounding area. It has radar in the front, which informs the car of the speed and motion of all the cars around it. It uses all of that data not only to figure out how to drive the car but also to figure out and predict what the drivers around it are going to do. What's impressive is that the car processes almost a gigabyte of data per second.

Machine learning is the best tool so far to analyze, understand, and identify patterns in data. One of the main ideas behind machine learning is that a computer can be trained to automate tasks that would be exhaustive or impossible for a human being. The clear break from traditional analysis is that machine learning can make decisions with minimal human intervention.

Take the following example: a real estate agent can estimate the price of a house based on his own experience and his knowledge of the market.

A machine can be trained to translate the knowledge of an expert into features. The features are all the characteristics of a house, its neighborhood, the economic environment, etc. that make a difference in the price. For the expert, it probably took years to master the art of estimating the price of a house, and his expertise gets better and better after each sale.

For the machine, it takes millions of data points (i.e., examples) to master this art. At the very beginning of its learning, the machine makes mistakes, somewhat like a junior salesman. Once the machine has seen all the examples, it has enough knowledge to make its estimations, and with incredible accuracy. The machine is also able to correct its mistakes accordingly.
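A minimal sketch of turning house characteristics into a feature vector and fitting a price model (all feature names, values, and prices below are invented for illustration):

```python
# Minimal sketch: encode house characteristics as features and fit a price model.
# Feature names, values, and prices are invented for illustration.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LinearRegression

houses = pd.DataFrame({
    "sqft":         [1200, 1500, 900, 2000, 1100],
    "neighborhood": ["A", "B", "A", "C", "B"],
    "price":        [250_000, 320_000, 190_000, 450_000, 240_000],
})

X, y = houses[["sqft", "neighborhood"]], houses["price"]
encode = ColumnTransformer([("onehot", OneHotEncoder(), ["neighborhood"])],
                           remainder="passthrough")
model = make_pipeline(encode, LinearRegression()).fit(X, y)

new_house = pd.DataFrame({"sqft": [1300], "neighborhood": ["B"]})
print("estimated price:", model.predict(new_house)[0])
```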

Most big companies have understood the value of machine learning and of holding data. McKinsey has estimated that the value of analytics ranges from $9.5 trillion to $15.4 trillion, of which $5 to $7 trillion can be attributed to the most advanced AI techniques.

Continued here:
Machine Learning Tutorial for Beginners - Guru99

Machine Learning – India | IBM

Machine-learning techniques are required to improve the accuracy of predictive models. Depending on the nature of the business problem being addressed, there are different approaches based on the type and volume of the data. In this section, we discuss the categories of machine learning.

Supervised learning

Supervised learning typically begins with an established set of data and a certain understanding of how that data is classified. Supervised learning is intended to find patterns in data that can be applied to an analytics process. This data has labeled features that define the meaning of the data. For example, you can create a machine-learning application that distinguishes between millions of animals based on images and written descriptions.

Unsupervised learning

Unsupervised learning is used when the problem requires a massive amount of unlabeled data. For example, social media applications, such as Twitter, Instagram and Snapchat, all have large amounts of unlabeled data. Understanding the meaning behind this data requires algorithms that classify the data based on the patterns or clusters they find. Unsupervised learning conducts an iterative process, analyzing data without human intervention. It is used in email spam-detection technology, where there are far too many variables in legitimate and spam emails for an analyst to tag unsolicited bulk email. Instead, machine-learning classifiers based on clustering and association are applied to identify unwanted email.

Reinforcement learning

Reinforcement learning is a behavioral learning model. The algorithm receives feedback from the data analysis, guiding the user to the best outcome. Reinforcement learning differs from other types of supervised learning because the system isn't trained with a sample data set. Rather, the system learns through trial and error. A sequence of successful decisions results in the process being reinforced, because it best solves the problem at hand.
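A minimal trial-and-error sketch in this spirit: a tabular Q-learning agent that learns to walk along a tiny five-state corridor toward a reward (the environment, rewards, and hyperparameters are invented for illustration and are not tied to any particular IBM offering):

```python
# Minimal Q-learning sketch: the agent learns, by trial and error, that
# walking right along a 5-state corridor reaches the reward at the end.
# Environment and hyperparameters are invented for illustration.
import random

n_states, actions = 5, [0, 1]              # action 0 = left, 1 = right
Q = [[0.0, 0.0] for _ in range(n_states)]  # value table
alpha, gamma, epsilon = 0.5, 0.9, 0.2      # learning rate, discount, exploration

for _ in range(300):                       # episodes
    s, steps = 0, 0
    while s != n_states - 1 and steps < 1000:
        steps += 1
        if random.random() < epsilon or Q[s][0] == Q[s][1]:
            a = random.choice(actions)                 # explore / break ties
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1          # exploit
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0     # reward only at the goal
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print("preference for 'right' in each state:",
      [round(Q[s][1] - Q[s][0], 2) for s in range(n_states)])
```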

Deep learning

Deep learning is a specific method of machine learning that incorporates neural networks in successive layers to learn from data in an iterative manner. Deep learning is especially useful when you're trying to learn patterns from unstructured data. Deep learning's complex neural networks are designed to emulate how the human brain works, so computers can be trained to deal with poorly defined abstractions and problems. The average five-year-old child can easily recognize the difference between his teacher's face and the face of the crossing guard. In contrast, the computer must do a lot of work to figure out who is who. Neural networks and deep learning are often used in image recognition, speech, and computer vision applications.
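A minimal sketch of why stacked layers matter: a small multi-layer network from scikit-learn learning XOR, a pattern no single linear layer can represent (the architecture and solver choices are assumptions for illustration, not a production deep-learning setup):

```python
# Minimal sketch: a small multi-layer network learns XOR, a pattern that a
# single linear layer cannot represent. Architecture is illustrative only.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]                         # XOR labels

net = MLPClassifier(hidden_layer_sizes=(8,), activation="relu",
                    solver="lbfgs", max_iter=2000, random_state=0)
net.fit(X, y)
print(net.predict(X))                    # ideally [0, 1, 1, 0]
```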

See the original post here:
Machine Learning - India | IBM

Microsoft and Udacity partner in new $4 million machine-learning scholarship program for Microsoft Azure – TechRepublic

Applications are now open for the nanodegree program, which will help Udacity train developers on the Microsoft Azure cloud infrastructure.

Microsoft and Udacity are teaming up to invest $4 million in a machine learning (ML) training collaboration, beginning with the Machine Learning Scholarship Program for Microsoft Azure, which starts today.

The program focuses on artificial intelligence, which continues to grow at a fast pace. AI engineers are in high demand, particularly as enterprises build new cloud applications and move old ones to the cloud. The average AI salary in the US is $114,121 a year, based on data from Glassdoor.

"AI is driving transformation across organizations and there is increased demand for data science skills," said Julia White, corporate vice president, Azure Marketing, Microsoft, in a Microsoft blog post. "Through our collaboration with Udacity to offer low-code and advanced courses on Azure Machine Learning, we hope to expand data science expertise as experienced professionals will truly be invaluable resources to solving business problems."


The interactive scholarship courses begin with a two-month long course, "Introduction to machine learning on Azure with a low-code experience."

Students will work with live Azure environments directly within the Udacity classroom and build on these foundations with advanced techniques such as ensemble learning and deep learning.

To earn a spot in the foundations course, students will need to submit an application. According to the blog post, "Successful applicants will ideally have basic programming knowledge in any language, preferably Python, and be comfortable writing scripts and performing loop operations."

Udacity's nanodegrees have been growing in popularity. Monthly enrollment in Udacity's nanodegrees has increased by a factor of four since the beginning of the coronavirus lockdown. Among Udacity's consumer customers, in the three weeks starting March 9 the company saw a 56% jump in weekly active users and a 102% increase in new enrollments, and they've stayed at or just below those new levels since then, according to a Udacity spokesperson.

After students complete the foundations course, Udacity will select top performers to receive a scholarship to the new machine learning nanodegree program with Microsoft Azure.

This nanodegree program typically takes four months to complete.

Students who aren't selected for the scholarship will still be able to enroll in the nanodegree program when it is available to the general public.

Anyone interested in becoming an Azure Machine Learning engineer and learning from experts at the forefront of the field can apply for the scholarship here. Applications will be open from June 10 to June 30.



See the original post here:
Microsoft and Udacity partner in new $4 million machine-learning scholarship program for Microsoft Azure - TechRepublic

6 ways to reduce different types of bias in machine learning – TechTarget

As companies step up the use of machine learning-enabled systems in their day-to-day operations, they become increasingly reliant on those systems to help them make critical business decisions. In some cases, the machine learning systems operate autonomously, making it especially important that the automated decision-making works as intended.

However, machine learning-based systems are only as good as the data that's used to train them. If there are inherent biases in the data used to feed a machine learning algorithm, the result could be systems that are untrustworthy and potentially harmful.

In this article, you'll learn why bias in AI systems is a cause for concern, how to identify different types of biases and six effective methods for reducing bias in machine learning.

The power of machine learning comes from its ability to learn from data and apply that learning experience to new data the systems have never seen before. However, one of the challenges data scientists have is ensuring that the data fed into machine learning algorithms is not only clean, accurate and, in the case of supervised learning, well-labeled, but also free of any inherently biased data that can skew machine learning results.

The power of supervised learning, one of the core approaches to machine learning, in particular depends heavily on the quality of the training data. So it should be no surprise that when biased training data is used to teach these systems, the results are biased AI systems. Biased AI systems that are put into implementation can cause problems, especially when used in automated decision-making systems, autonomous operation, or facial recognition software that makes predictions or renders judgment on individuals.

Some notable examples of the bad outcomes caused by algorithmic bias include: a Google image recognition system that misidentified images of minorities in an offensive way; automated credit applications from Goldman Sachs that have sparked an investigation into gender bias; and a racially biased AI program used to sentence criminals. Enterprises must be hyper-vigilant about machine learning bias: Any value delivered by AI and machine learning systems in terms of efficiency or productivity will be wiped out if the algorithms discriminate against individuals and subsets of the population.

However, AI bias is not only limited to discrimination against individuals. Biased data sets can jeopardize business processes when applied to objects and data of all types. For example, take a machine learning model that was trained to recognize wedding dresses. If the model was trained using Western data, then wedding dresses would be categorized primarily by identifying shades of white. This model would fail in non-Western countries where colorful wedding dresses are more commonly accepted. Errors also abound where data sets have bias in terms of the time of day when data was collected, the condition of the data and other factors.

All of the examples described above represent some sort of bias that was introduced by humans as part of their data selection and identification methods for training the machine learning model. Because the systems technologists build are necessarily colored by their own experiences, they must be very aware that their individual biases can jeopardize the quality of the training data. Individual bias, in turn, can easily become a systemic bias as bad predictions and unfair outcomes are automated.

Part of the challenge of identifying bias is due to the difficulty of seeing how some machine learning algorithms generalize their learning from the training data. In particular, deep learning algorithms have proven to be remarkably powerful in their capabilities. This approach to neural networks leverages large quantities of data, high performance compute power and a sophisticated approach to efficiency, resulting in machine learning models with profound abilities.

Deep learning, however, is a "black box." It's not clear how an individual decision was arrived at by the neural network predictive model. You can't simply query the system and determine with precision which inputs resulted in which outputs. This makes it hard to spot and eliminate potential biases when they arise in the results. Researchers are increasingly turning their focus on adding explainability to neural networks. Verification is the process of proving the properties of neural networks. However, because of the size of neural networks, it can be hard to check them for bias.

Until we have truly explainable systems, we must understand how to recognize and measure AI bias in machine learning models. Some of the bias in data sets arises from the selection of the training data. The model needs to represent the data as it exists in the real world: if your data set is artificially constrained to a subset of the population, you will get skewed results in the real world even if the model performs very well against the training data. Likewise, data scientists must take care in how they select which data to include in a training data set and which features or dimensions are included in the data for machine learning training.

Companies are combating inherent data bias by implementing programs to not only broaden the diversity of their data sets, but also the diversity of their teams. More diversity on teams means that people of many perspectives and varied experiences are feeding systems the data points to learn from. Unfortunately, the tech industry today is very homogeneous; there are not many women or people of color in the field. Efforts to diversify teams should also have a positive impact on the machine learning models produced, since data science teams will be better able to understand the requirements for more representative data sets.

There are a few sources of bias that can have an adverse impact on machine learning models. Some of these are represented in the data that is collected and others in the methods used to sample, aggregate, filter and enhance that data.

There are no doubt other types of bias that might be represented in a data set beyond the ones discussed above, and all those forms should be identified early in the machine learning project.

1. Identify potential sources of bias. Using the sources of bias above as a guide, one way to address and mitigate bias is to examine the data and see how the different forms of bias could impact the data being used to train the machine learning model. Have you selected the data without bias? Have you made sure there isn't any bias arising from errors in data capture or observation? Are you making sure not to use a historic data set tainted with prejudice or confirmation bias? Asking these questions helps you identify and potentially eliminate that bias.

2. Set guidelines, rules and procedures for eliminating bias. To keep bias in check, organizations should set guidelines, rules and procedures for identifying, communicating and mitigating potential data set bias. Forward-thinking organizations are documenting cases of bias as they occur, outlining the steps taken to identify them, and explaining the efforts taken to mitigate them. By establishing these rules and communicating them in an open, transparent manner, organizations can put the right foot forward to address issues of machine learning model bias.

3. Identify accurate representative data. Prior to collecting and aggregating data for machine learning model training, organizations should first try to understand what a representative data set should look like. Data scientists should use their data analysis skills to understand the nature of the population that is to be modeled along with the characteristics of the data used to create the machine learning model. These two things should match in order to build a data set with as little bias as possible.

4. Document and share how data is selected and cleansed. Many forms of bias occur when selecting data from among large data sets and during data cleansing operations. In order to make sure few bias-inducing mistakes are made, organizations should document their methods of data selection and cleansing and allow others to examine when and if the models exhibit any form of bias. Transparency allows for root-cause analysis of sources of bias to be eliminated in future model iterations.

5. Evaluate the model for bias in addition to performance. Machine learning models are often evaluated prior to being placed into operation, and most of the time these evaluation steps focus on aspects of model accuracy and precision. Organizations should also add measures of bias detection to their model evaluation steps, and select the least-biased model among candidates of comparable quality. Even if a model performs with certain levels of accuracy and precision for particular tasks, it could fail on measures of bias, which might point to issues with the training data. A sketch of a simple group-level bias check follows this list.

6. Monitor and review models in operation. Finally, there is a difference between how the machine learning model performs in training and how it performs in the real world. Organizations should provide methods to monitor and continuously review the models as they perform in operation. If there are signs that certain forms of bias are showing up in the results, then the organization can take action before the bias causes irreparable harm.
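As a minimal sketch of the kind of bias measure mentioned in step 5, the following compares a model's positive-prediction rate and accuracy across two groups (the predictions and group labels are invented for illustration; a real evaluation would use an established fairness toolkit and metrics appropriate to the use case):

```python
# Minimal sketch of a group-level bias check: compare positive-prediction
# rates and accuracy across two groups. All values are invented.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])   # ground truth
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1, 1, 0])   # model predictions
group  = np.array(["A", "A", "A", "A", "A",          # protected attribute
                   "B", "B", "B", "B", "B"])

for g in np.unique(group):
    mask = group == g
    pos_rate = y_pred[mask].mean()                    # demographic parity check
    accuracy = (y_pred[mask] == y_true[mask]).mean()
    print(f"group {g}: positive rate={pos_rate:.2f}, accuracy={accuracy:.2f}")

# A large gap in positive rate or accuracy between groups is a flag for bias.
```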

When bias becomes embedded in machine learning models, it can have an adverse impact on our daily lives. The bias is exhibited in the form of exclusion, such as certain groups being denied loans or not being able to use the technology, or in the technology not working the same for everyone. As AI continues to become more a part of our lives, the risks from bias only grow larger. Companies, researchers and developers have a responsibility to minimize bias in AI systems. A lot of it comes down to ensuring that the data sets are representative and that the interpretation of data sets is correctly understood. However, just making sure that the data sets aren't biased won't actually remove bias, so having diverse teams of people working toward the development of AI remains an important goal for enterprises.

More:
6 ways to reduce different types of bias in machine learning - TechTarget

Breaking Down COVID-19 Models' Limitations and the Promise of Machine Learning – EnterpriseAI

Every major news outlet offers updates on infections, deaths, testing, and other metrics related to COVID-19. They also link to various models, such as those on HealthData.org from The Institute for Health Metrics and Evaluation (IHME), an independent global health research center at the University of Washington. Politicians, corporate executives, and other leaders rely on these models (and many others) to make important decisions about reopening local economies, restarting businesses, and adjusting social distancing guidelines. Many of these models possess a shortcoming: they are not built with machine learning and AI.

Predictions and Coincidence

Given the sheer number of scientists and data experts working on predictions about the COVID-19 pandemic, the odds favor someone being right. As with the housing crisis and other calamitous events in the U.S., someone will take credit for predicting that exact event. However, it's important to note the number of predictors: it creates a multiple hypothesis testing situation, where a higher number of trials increases the chance of a result arising by coincidence.

This is playing out now with COVID-19, and we will see in the coming months many experts claiming they had special knowledge after their predictions proved true. There is a lot of time, effort, and money invested in projections, and the non-scientists involved are not as eager as the scientists to see validation and proof. AI and machine learning technologies need to step into this space to improve the odds that the right predictions were very educated projections based on data instead of coincidence.

Modeling Meets its Limits

The models predicting infection rates, total mortality, and intensive care capacity are relatively simple constructs. They are adjusted when conditions on the ground materially change, such as when states reopen; otherwise, they remain static. The problem with such an approach lies partly in the complexity of COVID-19's many variables. Because of these variables, the results of typical COVID-19 projections do not have linear relationships with the inputs used to create them. AI comes into play here because it can avoid assumptions about how the predictors used to build the models might influence the prediction.

Improving Models with Machine Learning

Machine learning, which is one way of building AI systems, can better leverage more data sets and their interrelated connections. For example, socioeconomic status, gender, age, and health status can all inform these platforms to determine how the virus relates to current and future mortality and infections. It enables a granular approach that reviews the impact of the virus on smaller groups, for example people who are in age group A and geographic area Z while also having a preexisting condition X that puts them in a higher COVID-19 risk group. Pandemic planners can use AI in a similar way to how financial services and retail firms leverage personalized predictions to suggest things for people to buy, as well as risk and credit predictions.

Community leaders need this detail to make more informed decisions about opening regional economies and implementing plans to better protect high-risk groups. On the testing front, AI is vital for producing quality data that are specific to a city or state and take into account not just basic demographics but also more complex individual-based features.

Variations in testing rules across the states require adjusting models to account for different data types and structures. Machine learning is well suited to manage these variations. The complexity of modeling testing procedures means true randomization is essential for determining the most accurate estimates of infection rates for a given area.

The Automation Advantage

The pandemic hit with crushing speed, and the scientific community has tried to react quickly. Automated AI and machine learning platforms make it possible to move faster on modeling, vaccine development, and drug trials. Automation removes manual processes from the scientist's day, giving them time to focus on the core of their work instead of mundane tasks.

According to a study titled "Perceptions of scientific research literature and strategies for reading papers depend on academic career stage," scientists spend a considerable amount of time reading. It states, "Engaging with the scientific literature is a key skill for researchers and students on scientific degree programmes; it has been estimated that scientists spend 23% of total work time reading." Various AI-driven platforms such as COVIDScholar use web scrapers to pull all new virus-related papers, and then machine learning is used to tag subject categories. The result is enhanced research capabilities that can then inform various models for vaccine development and other vital areas. AI is also pulling insights from research papers that are hidden from human eyes, such as the potential of existing medications as possible treatments for COVID-19 conditions.
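A minimal sketch of that kind of subject tagging, fitting a TF-IDF plus linear classifier on a handful of made-up paper abstracts (the texts and category labels are invented for illustration and are not drawn from COVIDScholar's actual pipeline):

```python
# Minimal sketch: tag paper abstracts with subject categories using TF-IDF
# features and a linear classifier. Texts and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

abstracts = [
    "spike protein binding and vaccine candidate antibodies",
    "chest ct radiology findings in infected patients",
    "antiviral drug repurposing screen for existing medications",
    "mrna vaccine immune response in a phase one trial",
]
labels = ["vaccines", "imaging", "therapeutics", "vaccines"]

tagger = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
tagger.fit(abstracts, labels)

print(tagger.predict(["chest ct radiology of patients with lung lesions"]))
```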

Machine learning and AI can improve COVID-19 modeling as well as vaccine and medication development. The challenges facing scientists, doctors, and policymakers provide an opportunity for AI to accelerate various tasks and eliminate time-consuming practices. For example, researchers at the University of Chicago and Argonne National Laboratory collaborated to use AI to collect and analyze radiology images in order to better diagnose and differentiate the current infection stages of COVID-19 patients. The initiative provides physicians with a much faster way to assess patient conditions and then propose the right treatments for better outcomes. It's a simple example of AI's power to collect readily available information and turn it into usable insights.

Throughout the pandemic, AI is poised to provide scientists with improved models and predictions, which can then guide policymakers and healthcare professionals to make informed decisions. Better data quality through AI also creates strategies for managing a second wave or a future pandemic in the coming decades.

About the Author

Pedro Alves is the founder and CEO of Ople.AI, a software startup that provides an Automated Machine Learning platform to empower business users with predictive analytics.

While pursuing his Ph.D. in Computational Biology from Yale University, Alves started his career as a data scientist and gained experience in predicting, analyzing, and visualizing data in the fields of social graphs, genomics, gene networks, cancer metastasis, insurance fraud, soccer strategies, joint injuries, human attraction, spam detection and topic modeling, among others. Realizing that he was learning by observing how algorithms learn from processing different models, Alves discovered that data scientists could benefit from AI that mimics this behavior of learning to learn to learn. Therefore, he founded Ople to advance the field of data science and make AI easy, cheap, and ubiquitous.

Alves enjoys tackling new problems and actively participates in the AI community through projects, lectures, panels, mentorship, and advisory boards. He is extremely passionate about all aspects of AI and dreams of seeing it deliver on its promises; driven by Ople.


Original post:
Breaking Down COVID-19 Models Limitations and the Promise of Machine Learning - EnterpriseAI