Archive for the ‘Machine Learning’ Category

California FEHC Proposes Sweeping Regulations Regarding Use of Artificial Intelligence and Machine Learning in Connection With Employment Decision…

The California Fair Employment and Housing Council (FEHC) recently took a major step towards regulating the use of artificial intelligence (AI) and machine learning (ML) in connection with employment decision-making. On March 15, 2022, the FEHC published Draft Modifications to Employment Regulations Regarding Automated-Decision Systems, which specifically incorporate the use of "automated-decision systems" in existing rules regulating employment and hiring practices in California.

The draft regulations seek to make unlawful the use of automated-decision systems that "screen out or tend to screen out" applicants or employees (or classes of applicants or employees) on the basis of a protected characteristic, unless shown to be job-related and consistent with business necessity. The draft regulations also contain significant and burdensome recordkeeping requirements.

The proposed regulations will be subject to a 45-day public comment period (which has not yet commenced) before the FEHC can move toward a final rulemaking.

"Automated-Decision Systems" are defined broadly

The draft regulations define "Automated-Decision Systems" broadly as "[a] computational process, including one derived from machine-learning, statistics, or other data processing or artificial intelligence techniques, that screens, evaluates, categorizes, recommends, or otherwise makes a decision or facilitates human decision making that impacts employees or applicants."

The draft regulations also set out several illustrative examples of Automated-Decision Systems.

Similarly, "algorithm" is broadly defined as "[a] process or set of rules or instructions, typically used by a computer, to make a calculation, solve a problem, or render a decision."

Notably, the scope of this definition is quite broad and will likely cover certain applications or systems that may only be tangentially related to employment decisions. For example, the term "or facilitates human decision making" is ambiguous. A broad reading of that term could potentially allow for the regulation of technologies designed to aid human decision-making in small or subtle ways.

The draft regulations would make it unlawful for any covered entity to use Automated-Decision Systems that "screen out or tend to screen out" applicants or employees on the basis of a protected characteristic, unless shown to be job-related and consistent with business necessity

The draft regulations would apply to employer (and covered third-party) decision-making throughout the employment lifecycle, from pre-employment recruitment and screening through employment decisions including pay, advancement, discipline, and separation. The draft regulations would extend these limitations on Automated-Decision Systems to the characteristics already protected under California law.

The precise scope and reach of the draft regulations are ambiguous, because the key definitions cover systems that screen out "or tend to screen out" applicants or employees on the basis of a protected characteristic. The proposed regulations offer no clear explanation of the phrase "tend to screen out," and the inherent ambiguity of that language presents a real risk that the regulations will extend to systems or processes that are not actually involved in screening applicants or employees on the basis of a protected characteristic.

The draft regulations apply not just to employers, but also to "employment agencies," which could include vendors that provide AI/ML technologies to employers in connection with making employment decisions

The draft regulations apply not just to employers, but also to "covered entities," which include any "employment agency, labor organization[,] or apprenticeship training program." Notably, "employment agency" is defined to include, but is not limited to, "any person that provides automated-decision-making systems or services involving the administration or use of those systems on an employer's behalf."

Therefore, any third-party vendor that develops AI/ML technologies and sells those systems to companies that use them for employment decisions is potentially liable if its automated-decision system screens out, or tends to screen out, an applicant or employee based on a protected characteristic.

The draft regulations require significant recordkeeping

Covered entities are required to maintain certain personnel or other employment records affecting any employment benefit or any applicant or employee. Under FEHC's draft regulations, those recordkeeping requirements would increase from two to four years. And, as relevant here, those records would include "machine-learning data."

Machine-learning data includes "all data used in the process of developing and/or applying machine-learning algorithms that are used as part of an automated-decision system." That definition expressly includes datasets used to train an algorithm. It also includes data provided by individual applicants or employees. And it includes the data produced from the application of an automated-decision system operation (i.e., the output from the algorithm).

Given the nature of algorithms and machine learning, that definition of machine-learning data could require an employer or vendor to preserve data provided to an algorithm not just four years looking backward, but to preserve all data (including training datasets) ever provided to an algorithm and extending for a period of four years after that algorithm's last use.

The regulations add that any person who engages in the advertisement, sale, provision, or use of a selection tool (including but not limited to an automated-decision system) to an employer or other covered entity must maintain records of "the assessment criteria used by the automated-decision system for each such employer or covered entity to whom the automated-decision system is provided."

Additionally, the draft regulations would add causes of action for aiding and abetting, covering unlawful assistance, solicitation, encouragement, or advertising, against any third party that advertises, sells, provides, or uses an automated-decision system that limits, screens out, or otherwise unlawfully discriminates against applicants or employees based on protected characteristics.

Conclusion

The draft rulemaking is still in a public workshop phase, after which it will be subject to a 45-day public comment period, and it may undergo changes prior to its final implementation. Although the formal comment period has not yet opened, interested parties may submit comments now if desired.

Considering what we know about the potential for unintended bias in AI/ML, employers cannot simply assume that an automated-decision system produces objective or bias-free outcomes. California employers are therefore advised to evaluate any automated-decision systems they use, or plan to use, with these draft regulations in mind.

See the article here:
California FEHC Proposes Sweeping Regulations Regarding Use of Artificial Intelligence and Machine Learning in Connection With Employment Decision...

Ambitions to become GitHub for machine learning? Hugging Face adds Decision Transformer to its library – Analytics India Magazine

Hugging Face is one of the most promising companies in the world. It has set out to achieve a unique feat: becoming the GitHub for machine learning. Over the last few years, the company has open-sourced a number of libraries and tools, especially in the NLP space. Now, the company has integrated Decision Transformer, an offline reinforcement learning method, into the transformers library and the Hugging Face Hub.

Decision Transformers were first introduced by Lili Chen and colleagues in the paper Decision Transformer: Reinforcement Learning via Sequence Modeling. The paper presents a framework that abstracts reinforcement learning as a sequence modelling problem. Unlike previous approaches, a Decision Transformer outputs optimal actions by leveraging a causally masked Transformer: it generates future actions that achieve a desired return by conditioning an autoregressive model on the desired return, past states, and actions. The authors conclude that, despite its simple design, the Decision Transformer matches or exceeds the performance of state-of-the-art model-free offline reinforcement learning baselines on Atari, OpenAI Gym, and Key-to-Door tasks.

Decision Transformer architecture

The idea behind using a sequence modelling algorithm is that, instead of training a policy with conventional reinforcement learning methods that suggest the action maximising the return, a Decision Transformer generates future actions conditioned on a set of desired parameters. This is a shift in the reinforcement learning paradigm: generative trajectory modelling replaces conventional reinforcement learning algorithms. The main steps are: feeding the last K timesteps into the Decision Transformer as three inputs per step (return-to-go, state, action); embedding the tokens with a linear layer (if the state is a vector) or a CNN encoder (if it is a frame); and processing the inputs with a GPT-2 model that predicts future actions through autoregressive modelling.
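
As a rough illustration of that workflow (not code from the article), the sketch below queries the Decision Transformer integrated into the transformers library for a next action. The checkpoint name, the target return of 3600, and the dummy trajectory are assumptions; the shapes follow the Gym Hopper environment.

```python
# Hedged sketch: query a pretrained Decision Transformer for its next action.
# Checkpoint name and target return are illustrative assumptions.
import torch
from transformers import DecisionTransformerModel

model = DecisionTransformerModel.from_pretrained(
    "edbeeching/decision-transformer-gym-hopper-medium"
)
model.eval()

state_dim, act_dim, K = 11, 3, 20  # Hopper dims; K past timesteps of context

# Dummy context of K (return-to-go, state, action) triples.
states = torch.randn(1, K, state_dim)
actions = torch.zeros(1, K, act_dim)
returns_to_go = torch.full((1, K, 1), 3600.0)  # condition on a desired return
timesteps = torch.arange(K, dtype=torch.long).unsqueeze(0)
attention_mask = torch.ones(1, K)

with torch.no_grad():
    out = model(
        states=states,
        actions=actions,
        returns_to_go=returns_to_go,
        timesteps=timesteps,
        attention_mask=attention_mask,
        return_dict=True,
    )

next_action = out.action_preds[0, -1]  # predicted action for the latest step
print(next_action.shape)               # torch.Size([3])
```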

Reinforcement learning is a framework for building decision-making agents that learn optimal behaviour by interacting with an environment through trial and error. The agent's ultimate goal is to maximise the cumulative reward, called the return. Reinforcement learning rests on the reward hypothesis: all goals can be described as the maximisation of the expected cumulative reward. Most reinforcement learning techniques are geared toward the online setting, where agents interact with the environment, gathering information with the current policy and exploration schemes to find higher-reward regions. The drawback is that the agent must be trained either directly in the real world or in a simulator; if no simulator is available, one has to be built, which is a very complex process, and simulators may have flaws that agents learn to exploit instead of learning the intended behaviour.


Offline reinforcement learning sidesteps this problem. Here the agent learns only from data collected by other agents or from human demonstrations, without interacting with the environment, which makes it possible to reuse previously collected datasets from sources such as human demonstrations, prior experiments, and domain-specific solutions.

Hugging Face's startup journey has been nothing short of phenomenal. The company, which started as a chatbot, has gained massive industry attention in a very short period; big names like Apple, Monzo, and Bing use its libraries in production. Hugging Face's Transformers library is backed by PyTorch and TensorFlow, and it offers thousands of pretrained models for tasks like text classification, summarisation, and information retrieval.
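
For a sense of how accessible those pretrained models are, here is a small, hedged sketch using the library's high-level pipeline API; the summarisation checkpoint is whatever the library selects by default and is downloaded on first use.

```python
# Hedged sketch: load a default pretrained summarisation model in two lines.
from transformers import pipeline

summarizer = pipeline("summarization")
text = (
    "Hugging Face has integrated Decision Transformer, an offline "
    "reinforcement learning method, into the transformers library "
    "and the Hugging Face Hub."
)
print(summarizer(text, max_length=30, min_length=5)[0]["summary_text"])
```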

In September last year, the company released Datasets, a community library for contemporary NLP with more than 650 unique datasets and over 250 contributors. With Datasets, the company aims to standardise the end-user interface, versioning, and documentation. This sits well with the company's larger vision of democratising AI: extending the benefits of emerging technologies, otherwise concentrated in a few powerful hands, to smaller players.

The rest is here:
Ambitions to become GitHub for machine learning? Hugging Face adds Decision Transformer to its library - Analytics India Magazine

Worldwide Artificial Intelligence in HR Market to 2027 – Integration of Cloud and Mobile Deployment in HRM Systems Drives Growth – Yahoo Finance


Dublin, April 06, 2022 (GLOBE NEWSWIRE) -- The "Global Artificial Intelligence in HR Market (2022-2027) by Offering, Technology, Application, Industry and Geography, Competitive Analysis and the Impact of Covid-19 with Ansoff Analysis" report has been added to ResearchAndMarkets.com's offering.

The Global Artificial Intelligence in HR Market is estimated to be USD 3.89 Bn in 2022 and is expected to reach USD 17.61 Bn by 2027, growing at a CAGR of 35.26%.
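
As a quick back-of-the-envelope check (ours, not the report's), the stated CAGR is consistent with those two figures over the five-year span:

```python
# Sanity check: USD 3.89 Bn compounded at 35.26% per year over the five
# years from 2022 to 2027 should reproduce the forecast of USD 17.61 Bn.
base, cagr, years = 3.89, 0.3526, 5
print(round(base * (1 + cagr) ** years, 2))  # -> 17.61 (USD Bn)
```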

Market Segmentation

The Global Artificial Intelligence in HR Market is segmented based on Offering, Technology, Application, Industry and Geography.

By Offering, the market is classified into Hardware, Software, and Services.

By Technology, the market is classified into Machine Learning, Natural Language Processing, Context-aware Computing, and Computer Vision.

By Application, the market is classified into Recruitment, Performance Management, Retention, Payroll, Safety and Security, Regulatory Compliance, and Others.

By Industry, the market is classified into Academic, BFSI, Government, Healthcare, IT & Telecom, Manufacturing, Retail, and Others.

By Geography, the market is classified into Americas, Europe, Middle East & Africa, and Asia-Pacific.

Company Profiles

The report provides a detailed analysis of the competitors in the market. It covers the financial performance analysis for the publicly listed companies in the market. The report also offers detailed information on the companies' recent developments and competitive scenario. Some of the companies covered in this report are Automatic Data Processing Inc., Ceridian HCM Inc., Cezanne Inc., etc.

Competitive Quadrant

The report includes a Competitive Quadrant, a proprietary tool to analyze and evaluate the position of companies based on their Industry Position score and Market Performance score. The tool uses various factors to categorize the players into four categories. Some of the factors considered for analysis are financial performance over the last 3 years, growth strategies, innovation score, new product launches, investments, and growth in market share.

Ansoff Analysis


The report presents a detailed Ansoff matrix analysis for the Global Artificial Intelligence in HR Market. The Ansoff Matrix, also known as the Product/Market Expansion Grid, is a strategic tool used to design strategies for the growth of a company. The matrix evaluates four growth strategies: Market Development, Market Penetration, Product Development, and Diversification. It is also used for risk analysis, to understand the risk involved with each approach.

The analyst analyses the Global Artificial Intelligence in HR Market using the Ansoff Matrix to suggest the best approaches a company can take to improve its market position.

Based on the SWOT analysis conducted on the industry and industry players, the analyst has devised suitable strategies for market growth.

Key Topics Covered:

1 Report Description

2 Research Methodology

3 Executive Summary
3.1 Introduction
3.2 Market Size, Segmentation, and Outlook

4 Market Dynamics
4.1 Drivers
4.1.1 Integration of Cloud and Mobile Deployment in HRM Systems
4.1.2 Increasingly Large and Complex Resume Screening and Reduction of Bias in Hiring Decisions
4.1.3 Growing Emphasis on HR Process Automation
4.2 Restraints
4.2.1 Lack of Standard Regulatory Policies and Data Regulations
4.2.2 Reluctance Among HR to Adopt AI-Based Technologies
4.3 Opportunities
4.3.1 Collaboration and Partnership with the HR Organization
4.3.2 Technological Advances in AI for HR
4.4 Challenges
4.4.1 Privacy and Security Concerns
4.4.2 Requirement of the Human Aspect in HR

5 Market Analysis
5.1 Regulatory Scenario
5.2 Porter's Five Forces Analysis
5.3 Impact of COVID-19
5.4 Ansoff Matrix Analysis

6 Global Artificial Intelligence in HR Market, By Offering
6.1 Introduction
6.2 Hardware
6.2.1 Processor
6.2.2 Memory
6.2.3 Network
6.3 Software
6.3.1 AI Solutions
6.3.2 AI Platform
6.4 Services
6.4.1 Deployment & Integration
6.4.2 Support & Maintenance
6.4.3 Training & Consulting

7 Global Artificial Intelligence in HR Market, By Technology
7.1 Introduction
7.2 Machine Learning
7.2.1 Deep Learning
7.2.2 Supervised Learning
7.2.3 Reinforcement Learning
7.2.4 Unsupervised Learning
7.2.5 Others
7.3 Natural Language Processing
7.4 Context-aware Computing
7.5 Computer Vision

8 Global Artificial Intelligence in HR Market, By Application
8.1 Introduction
8.2 Recruitment
8.3 Performance Management
8.4 Retention
8.5 Payroll
8.6 Safety and Security
8.7 Regulatory Compliance
8.8 Others

9 Global Artificial Intelligence in HR Market, By Industry
9.1 Introduction
9.2 Academic
9.3 BFSI
9.4 Government
9.5 Healthcare
9.6 IT & Telecom
9.7 Manufacturing
9.8 Retail
9.9 Others

10 Americas' Artificial Intelligence in HR Market
10.1 Introduction
10.2 Argentina
10.3 Brazil
10.4 Canada
10.5 Chile
10.6 Colombia
10.7 Mexico
10.8 Peru
10.9 United States
10.10 Rest of Americas

11 Europe's Artificial Intelligence in HR Market
11.1 Introduction
11.2 Austria
11.3 Belgium
11.4 Denmark
11.5 Finland
11.6 France
11.7 Germany
11.8 Italy
11.9 Netherlands
11.10 Norway
11.11 Poland
11.12 Russia
11.13 Spain
11.14 Sweden
11.15 Switzerland
11.16 United Kingdom
11.17 Rest of Europe

12 Middle East and Africa's Artificial Intelligence in HR Market
12.1 Introduction
12.2 Egypt
12.3 Israel
12.4 Qatar
12.5 Saudi Arabia
12.6 South Africa
12.7 United Arab Emirates
12.8 Rest of MEA

13 APAC's Artificial Intelligence in HR Market
13.1 Introduction

14 Competitive Landscape
14.1 Competitive Quadrant
14.2 Market Share Analysis
14.3 Strategic Initiatives

15 Company Profiles

For more information about this report visit https://www.researchandmarkets.com/r/ojpc01


Here is the original post:
Worldwide Artificial Intelligence in HR Market to 2027 - Integration of Cloud and Mobile Deployment in HRM Systems Drives Growth - Yahoo Finance

Learning From Data – Online Course (MOOC)

Outline

This is an introductory course in machine learning (ML) that covers the basic theory, algorithms, and applications. ML is a key technology in Big Data, and in many financial, medical, commercial, and scientific applications. It enables computational systems to adaptively improve their performance with experience accumulated from the observed data. ML has become one of the hottest fields of study today, taken up by undergraduate and graduate students from 15 different majors at Caltech. This course balances theory and practice, and covers the mathematical as well as the heuristic aspects. The lectures below follow each other in a story-like fashion:

The 18 lectures are about 60 minutes each, plus Q&A.

The Learning Problem - Introduction; supervised, unsupervised, and reinforcement learning. Components of the learning problem.

Is Learning Feasible? - Can we generalize from a limited sample to the entire space? Relationship between in-sample and out-of-sample.

The Linear Model I - Linear classification and linear regression. Extending linear models through nonlinear transforms.

Error and Noise - The principled choice of error measures. What happens when the target we want to learn is noisy.

Training versus Testing - The difference between training and testing in mathematical terms. What makes a learning model able to generalize?

Theory of Generalization - How an infinite model can learn from a finite sample. The most important theoretical result in machine learning.

The VC Dimension - A measure of what it takes for a model to learn. Relationship to the number of parameters and degrees of freedom.

Bias-Variance Tradeoff - Breaking down the learning performance into competing quantities. The learning curves.

The Linear Model II - More about linear models. Logistic regression, maximum likelihood, and gradient descent (a minimal code sketch follows this outline).

Neural Networks - A biologically inspired model. The efficient backpropagation learning algorithm. Hidden layers.

Overfitting - Fitting the data too well; fitting the noise. Deterministic noise versus stochastic noise.

Regularization - Putting the brakes on fitting the noise. Hard and soft constraints. Augmented error and weight decay.

Validation - Taking a peek out of sample. Model selection and data contamination. Cross validation.

Support Vector Machines - One of the most successful learning algorithms; getting a complex model at the price of a simple one.

Kernel Methods - Extending SVM to infinite-dimensional spaces using the kernel trick, and to non-separable data using soft margins.

Radial Basis Functions - An important learning model that connects several machine learning models and techniques.

Three Learning Principles - Major pitfalls for machine learning practitioners; Occam's razor, sampling bias, and data snooping.

Epilogue - The map of machine learning. Brief views of Bayesian learning and aggregation methods.
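
As promised in the Linear Model II entry above, here is a minimal sketch of logistic regression fit by gradient descent. The synthetic data, learning rate, and iteration count are illustrative choices, not course material.

```python
# Hedged sketch: logistic regression trained by batch gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))              # two input features
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # labels from a linear rule

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid of the linear signal
    w -= lr * X.T @ (p - y) / len(y)        # gradient of cross-entropy loss
    b -= lr * np.mean(p - y)

print(w, b)  # weights align with the true separating direction (1, 1)
```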

You can also look for a particular topic within the lectures in the Machine Learning Video Library.

This course was broadcast live from the lecture hall at Caltech in April and May 2012. There was no 'Take 2' for the recorded videos. The lectures included live Q&A sessions with online audience participation. Here is a sample of a live lecture as the online audience saw it in real time.

See original here:
Learning From Data - Online Course (MOOC)

How the Machines Are Learning to Get Smarter – Design News

Recent developments in AI (Artificial Intelligence) technology have led to many breakthroughs and to rapid growth in what machines can do. The extent to which the world now relies on machines is hard to overstate. At this point, AI solutions are not just a key investment opportunity for large corporations but also a major contributor to solving countless day-to-day problems in our lives.

A key subset of AI is machine learning, often simply called ML. It is thanks to the invaluable work that researchers and scientists have put into the foundations of ML that we can now harness the full performance of highly capable AI-based technologies.


In this article, we will talk about how, over the years, humans have made machines capable of intelligence, i.e., the ability to mimic the human thought process and make decisions based on experiences.

Before we talk about the different methodologies using which humans teach machines to behave like humans, let us go over the basic definition of machine learning.


Machine learning is the method by which humans teach machines to learn from historical data so that they can perform certain actions in the future based on that past learning. Machine learning combines many things, from computer algorithms and data analytics to mathematics and statistics, and it is the technology on which the construction of artificially intelligent systems heavily relies.

The process of making machines learn from historical data is known as training.

The science of machine learning revolves around teaching the machine by feeding it datasets of different sizes, composed of useful or random facts and figures. The point of this exercise is to help the machine observe the data, establish meaningful connections between the different pieces of supplied information, and prepare to make decisions about incoming data by applying these pre-established connections, also known as rules.

Machine learning models typically follow one or more of three primary training methods: supervised learning, unsupervised learning, and reinforcement learning.

For the initial training, we use a dataset in which the inputs and/or expected outputs may or may not be clearly defined. Once the machine has been trained on this training data, it is fed test data to check whether it has actually learned from the training dataset.

Let us go over each of these training methods in a tad more detail and explore how they are used to make machines smarter.

Supervised Learning

This type of machine learning algorithm makes use of a dataset that contains labeled data: you tell the machine what each item is. This way, the rules are theoretically pre-defined, and all the machine has to do is study the existing mappings and learn those rules.

We can further split supervised learning algorithms into two sub-types: classification and regression.

Classification: This method is employed when the machine has to be trained to answer in binary terms, such as yes-no, good-bad, or true-false. The training data consists of items that have already been classified into various categories. For each category, the machine studies each item closely and identifies characteristics that are common for all the items within that category. This allows the machine to build relationships between items and their respective categories. It uses these rules to identify items in the test data and correctly classify them.
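
A minimal sketch of this idea in scikit-learn (the dataset and model choice are illustrative, not anything prescribed by the article):

```python
# Hedged sketch: supervised classification on labeled data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # labeled items (features + category)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(clf.score(X_test, y_test))   # accuracy on unseen test data
```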

Regression: The regression model is employed when you need predictions in terms of numeric values, such as housing prices or temperatures. The training dataset contains multiple variables along with outputs that may or may not be dependent on said variables. The machine studies the input variables and figures out how, if at all, each variable affects the value of the output, leading to pattern recognition or the development of rules. For the test data, the machine uses these rules to calculate an estimate or a predicted value for the output.
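
And a matching sketch for regression, with synthetic data standing in for housing prices:

```python
# Hedged sketch: supervised regression learns how an input variable
# (floor area) affects a numeric output (price). Data is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
area = rng.uniform(50, 200, size=(100, 1))            # input variable
price = 3000 * area[:, 0] + rng.normal(0, 5000, 100)  # noisy numeric target

model = LinearRegression().fit(area, price)
print(model.predict([[120.0]]))  # estimated price for a 120-unit home
```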

Unsupervised Learning

The key difference between supervised and unsupervised learning is that the items in the dataset used for the latter are not labeled. Let us use an example to illustrate this.

Let us say that you want a machine to be able to classify the items in a dataset containing images of different types of gardening tools, such as trowels, shovels, rakes, and spades.

Under supervised learning, your training data would contain images along with their identifiers. For example, if you are inputting the image of a spade, you will tell the machine that it is a spade. The machine will then study all the spades and their common features to learn how to identify a spade in the future.

However, if you use the unsupervised learning model, you would input pictures of all sorts of gardening tools without labeling them. For example, if you input a picture of a spade, you will not tell the machine that it is a spade. The machine will have to figure out on its own how each image may (or may not) be related to the ones before it, and then put similar images into one category. Thus, the machine learns to form categories on its own without being explicitly told what the categories are. This type of training model works well for datasets where structures or patterns might not be apparent to the average human.
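
A minimal sketch of that behaviour, where synthetic points stand in for the tool images and k-means is just one of many possible clustering choices:

```python
# Hedged sketch: k-means groups unlabeled points into clusters without
# ever being told what the categories are.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two unlabeled blobs standing in for two kinds of tool images.
data = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(km.labels_[:5], km.labels_[-5:])  # categories invented by the machine
```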

Reinforcement Learning

The third prominent method is based on the concept of reinforcement, which some of you might be familiar with if you have ever taken a Psychology 101 course. If you have ever taught your dog cool tricks by motivating it with treats, you have made use of the reward system.

Unlike the first two methods, this model relies heavily on feedback. For each decision the machine makes, it receives feedback, a reward or a penalty, that tells it whether the decision was good or bad. Through repeated trial and error, the machine becomes increasingly accurate.

A simple real-world example of reinforcement learning is the display of online ads. The machine can determine which ads are more successful, and therefore worth showing, based on how many people click on them. If the machine gets more clicks (a higher reward) on a certain ad from a particular target group, it learns that the decision to display that ad to that group was a good one.
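
One common way to formalise this is an epsilon-greedy bandit; the sketch below is a hedged illustration, with made-up click-through rates:

```python
# Hedged sketch: each click is a reward, and the learner shifts
# impressions toward the ad that earns more. Click rates are assumptions.
import random

random.seed(0)
true_ctr = [0.02, 0.05, 0.03]  # hidden quality of three candidate ads
estimate = [0.0, 0.0, 0.0]     # running estimate of each ad's reward
shows = [0, 0, 0]
epsilon = 0.1                  # fraction of the time we explore

for _ in range(10_000):
    if random.random() < epsilon:
        ad = random.randrange(3)                        # explore
    else:
        ad = max(range(3), key=lambda i: estimate[i])   # exploit
    reward = 1 if random.random() < true_ctr[ad] else 0  # click or not
    shows[ad] += 1
    estimate[ad] += (reward - estimate[ad]) / shows[ad]  # update mean

print(shows)  # most impressions go to the ad with the highest click rate
```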

While some people seem determined to settle the humans-versus-machines debate once and for all, others believe that this kind of comparison is futile. The fact remains that the human being came first and the machine followed. As long as our passion for growth and our drive for perfection are alive, machine learning algorithms will continue to improve and become increasingly accurate, helping us reach levels of success and accuracy that once seemed impossible.

Ralf Llanasas is a digital marketing expert and freelance writer. He graduated with a bachelor's degree in Information Technology and mostly writes on topics related to marketing, technology, and SaaS trends. His writing can be seen in several publications aimed at the IT industry. He is also into photography and loves taking pictures in his free time.

More:
How the Machines Are Learning to Get Smarter - Design News