Archive for the ‘Artificial Intelligence’ Category

How FIS Is Using Artificial Intelligence To Monitor And Prevent Cyber Fraud – Analytics India Magazine

Business continuity amid the COVID-19 lockdown is a big issue for all companies. Firms are not just at risk of facing outages; they also face continuous data-security vulnerabilities and cyber threats. As per a study by PwC, the volume of cyberattacks on Indian companies has grown exponentially as cybercriminals exploit the new work paradigm brought about by the COVID-19 outbreak to infiltrate corporate networks and steal data.

With lockdowns in place around the world, employees are expected to continue working remotely, which is undoubtedly a threat to most companies as the network perimeter has expanded radically. In the new work setting, fraudsters are using fake emails, websites, and VPAs (Virtual Payment Addresses) for fraud and social engineering.

To understand the situation better, Analytics India Magazine connected with Bharat Panchal, Chief Risk Officer for India, Middle East & Africa at Fidelity National Information Services (FIS), a Fortune 500 company and a leading provider of technology solutions for merchants, banks, and capital markets firms globally.

Bharat brings extensive leadership experience in managing cyber threats. Prior to his current role at FIS, he served as SVP & Head of Risk Management & Compliance at the National Payments Corporation of India (NPCI), and before that as Vice President and Group InfoSec Audit Head at Kotak Mahindra Bank.

According to Bharat, FIS is taking a comprehensive, multi-layered approach to mitigating cyber threats and protecting data. "We make use of advanced tools that include artificial intelligence to monitor and detect fraudulent transactions on a real-time basis," he said. "The system continuously monitors various threat vectors, and we advise our customers to remain vigilant against such cyberattacks."

Here are the edited excerpts from the interaction:

With India under lockdown, organisations are increasingly allowing employees to work from home. However, as greater numbers of staff access and process sensitive data remotely, the possibility of a data breach, accidental data loss, or a virus or malware attack is a major risk for businesses across the country. The biggest risk is accidental or unintentional leakage of sensitive information, given the potential for reputational damage, customer claims, and regulatory action.

Cloud-based platforms are a key component of enabling business continuity during remote working. For organisations looking to protect against platform vulnerabilities, the best line of defence is threefold: ensure employees use only licensed platforms, maintain a security-aware employee base, and deploy all available security patches automatically and in a timely fashion.

Fraudsters are smart and try to find opportunity in every situation. In the current environment, fake emails soliciting donations for emergency medical support, charity for migrant labourers, feeding daily wagers, and so on are rampant; people can easily be tricked into donating to fake accounts controlled by fraudsters. The RBI's moratorium on loan EMIs is a good attempt to ease the situation for the middle class, but fraudsters have started making fake calls and sending fake messages to gullible customers, asking for OTPs to "delay their EMIs" and using pre-collected information about a customer to steal money from their account.

Fraudsters are using fake emails, websites, and VPAs (Virtual Payment Addresses) to solicit donations for a range of fraudulent causes, from emergency medical support, charity for migrant labourers, and food for daily wagers to fake hospitals, medicines, and aid for people infected during the pandemic. Businesses can reduce these incidents by monitoring network traffic, transaction patterns, and user access habits. Companies can also reduce data-security risks by restricting access to systems and email for non-critical staff.

FIS takes a comprehensive and multi-layered approach to risk and security. We also make use of advanced tools, including artificial intelligence, to monitor and detect fraudulent transactions on a real-time basis. Our AI-driven risk engine can predict the probability that a given transaction is fraudulent, which helps our customers act before losses occur. We continuously monitor various threat vectors, and our advice to our customers is to remain vigilant against such cyberattacks.
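The interview gives no technical detail about FIS's engine, so the following is only a minimal sketch of how a real-time transaction-scoring step can work in general. It uses scikit-learn, and the feature names (amount, hour of day, new-payee flag), figures, and threshold are all invented for illustration:

```python
# Illustrative sketch of transaction scoring; not FIS's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy labelled history: [amount, hour_of_day, is_new_payee]
X_train = np.array([
    [120.0, 14, 0],
    [35.0, 10, 0],
    [9800.0, 3, 1],    # large, late-night, new payee
    [15000.0, 2, 1],
    [60.0, 18, 0],
    [8000.0, 1, 1],
])
y_train = np.array([0, 0, 1, 1, 0, 1])  # 1 = confirmed fraud

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def score_transaction(amount, hour, is_new_payee, threshold=0.5):
    """Return a fraud probability and a flag/allow decision."""
    p_fraud = model.predict_proba([[amount, hour, is_new_payee]])[0, 1]
    return p_fraud, ("flag for review" if p_fraud >= threshold else "allow")

print(score_transaction(9500.0, 2, 1))   # high probability: flagged
print(score_transaction(45.0, 13, 0))    # low probability: allowed
```

A production engine would train on millions of labelled transactions with far richer features; the point here is only the score-then-decide flow.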


The rest is here:
How FIS Is Using Artificial Intelligence To Monitor And Prevent Cyber Fraud - Analytics India Magazine

What Is Artificial Intelligence (AI)? | PCMag

In September 1955, John McCarthy, a young assistant professor of mathematics at Dartmouth College, boldly proposed that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

McCarthy called this new field of study "artificial intelligence," and suggested that a two-month effort by a group of 10 scientists could make significant advances in developing machines that could "use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."

At the time, scientists optimistically believed we would soon have thinking machines doing any work a human could do. Now, more than six decades later, advances in computer science and robotics have helped us automate many of the tasks that previously required the physical and cognitive labor of humans.

But true artificial intelligence, as McCarthy conceived it, continues to elude us.

A great challenge with artificial intelligence is that it's a broad term, and there's no clear agreement on its definition.

As mentioned, McCarthy proposed AI would solve problems the way humans do: "The ultimate effort is to make computer programs that can solve problems and achieve goals in the world as well as humans," McCarthy said.

Andrew Moore, dean of the School of Computer Science at Carnegie Mellon University, provided a more modern definition of the term in a 2017 interview with Forbes: "Artificial intelligence is the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence."

But our understanding of "human intelligence" and our expectations of technology are constantly evolving. Zachary Lipton, the editor of Approximately Correct, describes the term AI as "aspirational, a moving target based on those capabilities that humans possess but which machines do not." In other words, the things we ask of AI change over time.

For instance, in the 1950s, scientists viewed chess and checkers as great challenges for artificial intelligence. But today, very few would consider chess-playing machines to be AI. Computers are already tackling much more complicated problems, including detecting cancer, driving cars, and processing voice commands.

The first generation of AI scientists and visionaries believed we would eventually be able to create human-level intelligence.

But several decades of AI research have shown that replicating the complex problem-solving and abstract thinking of the human brain is supremely difficult. For one thing, we humans are very good at generalizing knowledge and applying concepts we learn in one field to another. We can also make relatively reliable decisions based on intuition and with little information. Over the years, human-level AI has become known as artificial general intelligence (AGI) or strong AI.

The initial hype and excitement surrounding AI drew interest and funding from government agencies and large companies. But it soon became evident that contrary to early perceptions, human-level intelligence was not right around the corner, and scientists were hard-pressed to reproduce the most basic functionalities of the human mind. In the 1970s, unfulfilled promises and expectations eventually led to the "AI winter," a long period during which public interest and funding in AI dampened.

It took many years of innovation and a revolution in deep-learning technology to revive interest in AI. But even now, despite enormous advances in artificial intelligence, none of the current approaches to AI can solve problems in the same way the human mind does, and most experts believe AGI is at least decades away.

On the flip side, narrow or weak AI doesn't aim to reproduce the functionality of the human brain, and instead focuses on optimizing a single task. Narrow AI has already found many real-world applications, such as recognizing faces, transforming audio to text, recommending videos on YouTube, and displaying personalized content in the Facebook News Feed.

Many scientists believe that we will eventually create AGI, but some have a dystopian vision of the age of thinking machines. In 2014, renowned English physicist Stephen Hawking described AI as an existential threat to mankind, warning that "full artificial intelligence could spell the end of the human race."

In 2015, Y Combinator President Sam Altman and Tesla CEO Elon Musk, two other believers in AGI, co-founded OpenAI, a nonprofit research lab that aims to create artificial general intelligence in a manner that benefits all of humankind. (Musk has since departed.)

Others believe that artificial general intelligence is a pointless goal. "We don't need to duplicate humans. That's why I focus on having tools to help us rather than duplicate what we already know how to do. We want humans and machines to partner and do something that they cannot do on their own," says Peter Norvig, Director of Research at Google.

Scientists such as Norvig believe that narrow AI can help automate repetitive and laborious tasks and help humans become more productive. For instance, doctors can use AI algorithms to examine X-ray scans at high speeds, allowing them to see more patients. Another example of narrow AI is fighting cyberthreats: Security analysts can use AI to find signals of data breaches in the gigabytes of data being transferred through their companies' networks.
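As a hedged illustration of that last point (nothing from the article), unsupervised anomaly detection is one common way to surface unusual activity for an analyst to review. Here is a toy sketch with scikit-learn's IsolationForest over invented network-flow features:

```python
# Toy anomaly detection over network flows: [bytes_sent_kb, duration_s,
# distinct_destinations]. Values are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

flows = np.array([
    [500, 1.2, 3], [620, 0.9, 2], [480, 1.1, 3],
    [550, 1.0, 2], [90000, 30.0, 40],   # one unusually large transfer
])
detector = IsolationForest(contamination=0.2, random_state=0).fit(flows)
print(detector.predict(flows))   # -1 marks the flow worth investigating
```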

Early AI-creation efforts were focused on transforming human knowledge and intelligence into static rules. Programmers had to meticulously write code (if-then statements) for every rule that defined the behavior of the AI. The advantage of rule-based AI, which later became known as "good old-fashioned artificial intelligence" (GOFAI), is that humans have full control over the design and behavior of the system they develop.
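For example, a toy rule-based classifier in the GOFAI spirit, where every behavior is an explicit, hand-written if-then rule (the rules and addresses below are invented):

```python
# A toy rule-based (GOFAI-style) email classifier: the machine does only
# what the hand-written rules say; nothing is learned from data.
def classify_email(subject: str, sender: str) -> str:
    subject = subject.lower()
    if "winner" in subject or "free money" in subject:
        return "spam"
    if sender.endswith("@unknown-lottery.example"):
        return "spam"
    return "inbox"

print(classify_email("You are a WINNER!", "promo@unknown-lottery.example"))  # spam
print(classify_email("Re: meeting notes", "colleague@example.com"))          # inbox
```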

Rule-based AI is still very popular in fields where the rules are clear-cut. One example is video games, in which developers want AI to deliver a predictable user experience.

The problem with GOFAI is that, contrary to McCarthy's initial premise, we can't precisely describe every aspect of learning and behavior in ways that can be transformed into computer rules. For instance, defining logical rules for recognizing voices and images (a complex feat that humans accomplish instinctively) is one area where classic AI has historically struggled.

An alternative approach to creating artificial intelligence is machine learning. Instead of developing rules for AI manually, machine-learning engineers "train" their models by providing them with a massive amount of samples. The machine-learning algorithm analyzes and finds patterns in the training data, then develops its own behavior. For instance, a machine-learning model can train on large volumes of historical sales data for a company and then make sales forecasts.
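As a minimal sketch of that sales-forecast example (toy figures, with scikit-learn's LinearRegression standing in for whatever model a real team would choose):

```python
# Train on historical monthly sales, then forecast the next month.
# The numbers are invented; the point is the train-then-predict flow.
import numpy as np
from sklearn.linear_model import LinearRegression

months = np.array([[1], [2], [3], [4], [5], [6]])   # month index
sales = np.array([100, 110, 125, 130, 150, 160])    # units sold

model = LinearRegression().fit(months, sales)
print(f"Forecast for month 7: {model.predict([[7]])[0]:.0f} units")
```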

Deep learning, a subset of machine learning, has become very popular in the past few years. It's especially good at processing unstructured data such as images, video, audio, and text documents. For instance, you can create a deep-learning image classifier and train it on millions of available labeled photos, such as the ImageNet dataset. The trained AI model will be able to recognize objects in images with accuracy that often surpasses humans. Advances in deep learning have pushed AI into many complicated and critical domains, such as medicine, self-driving cars, and education.
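A hedged sketch of such a classifier, using a ResNet-50 pretrained on ImageNet via torchvision (version 0.13 or later); "cat.jpg" is a placeholder path, not a file from the article:

```python
# Label a photo with an ImageNet-pretrained ResNet-50.
import torch
from torchvision import models
from torchvision.models import ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()            # resize/crop/normalize

img = Image.open("cat.jpg").convert("RGB")   # placeholder image path
batch = preprocess(img).unsqueeze(0)         # shape: [1, 3, 224, 224]

with torch.no_grad():
    probs = model(batch).softmax(dim=1)
top_prob, top_class = probs.max(dim=1)
print(weights.meta["categories"][top_class.item()], top_prob.item())
```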

One of the challenges with deep-learning models is that they develop their own behavior based on training data, which makes them complex and opaque. Often, even deep-learning experts have a hard time explaining the decisions and inner workings of the AI models they create.

Here are some of the ways AI is bringing tremendous changes to different domains.

Self-driving cars: Advances in artificial intelligence have brought us very close to making the decades-long dream of autonomous driving a reality. AI algorithms are one of the main components that enable self-driving cars to make sense of their surroundings, taking in feeds from cameras installed around the vehicle and detecting objects such as roads, traffic signs, other cars, and people.
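As an illustrative sketch of that perception step only (real driving stacks are far more elaborate), a pretrained object detector from torchvision can label the objects in a single camera frame; "road.jpg" is a placeholder path:

```python
# Detect objects in one frame with a COCO-pretrained Faster R-CNN.
import torch
from torchvision import models
from torchvision.models.detection import FasterRCNN_ResNet50_FPN_Weights
from torchvision.transforms.functional import to_tensor
from PIL import Image

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
detector = models.detection.fasterrcnn_resnet50_fpn(weights=weights).eval()

frame = to_tensor(Image.open("road.jpg").convert("RGB"))  # placeholder path
with torch.no_grad():
    detections = detector([frame])[0]        # dict: boxes, labels, scores

for label, score in zip(detections["labels"], detections["scores"]):
    if score > 0.8:                          # keep confident detections
        print(weights.meta["categories"][int(label)], float(score))
```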

Digital assistants and smart speakers: Siri, Alexa, Cortana, and Google Assistant use artificial intelligence to transform spoken words to text and map the text to specific commands. AI helps digital assistants make sense of different nuances in spoken language and synthesize human-like voices.
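The "map the text to specific commands" step can be sketched very crudely with keyword matching; real assistants use trained intent classifiers, so the intents and keywords below are purely illustrative:

```python
# Toy intent mapping: the stage that runs after speech has been
# transcribed to text. Intents and keywords are invented.
INTENTS = {
    "set_timer": ["timer", "countdown"],
    "play_music": ["play", "music", "song"],
    "weather": ["weather", "temperature", "rain"],
}

def map_to_intent(transcript: str) -> str:
    words = transcript.lower().split()
    for intent, keywords in INTENTS.items():
        if any(k in words for k in keywords):
            return intent
    return "unknown"

print(map_to_intent("What's the weather like today"))  # weather
print(map_to_intent("Play some jazz"))                 # play_music
```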

Translation: For many decades, translating text between different languages was a pain point for computers. But deep learning has helped create a revolution in services such as Google Translate. To be clear, AI still has a long way to go before it masters human language, but so far, advances are spectacular.

Facial recognition: Facial recognition is one of the most popular applications of artificial intelligence. It has many uses, including unlocking your phone, paying with your face, and detecting intruders in your home. But the increasing availability of facial-recognition technology has also given rise to concerns regarding privacy, security, and civil liberties.

Medicine: From detecting skin cancer and analyzing X-rays and MRI scans to providing personalized health tips and managing entire healthcare systems, artificial intelligence is becoming a key enabler in healthcare and medicine. AI won't replace your doctor, but it could help to bring about better health services, especially in underprivileged areas, where AI-powered health assistants can take some of the load off the shoulders of the few general practitioners who have to serve large populations.

In our quest to crack the code of AI and create thinking machines, we've learned a lot about the meaning of intelligence and reasoning. And thanks to advances in AI, we are accomplishing tasks alongside our computers that were once considered the exclusive domain of the human brain.

Some of the emerging fields where AI is making inroads include music and arts, where AI algorithms are manifesting their own unique kind of creativity. There's also hope AI will help fight climate change, care for the elderly, and eventually create a utopian future where humans don't need to work at all.

There's also fear that AI will cause mass unemployment, disrupt the economic balance, trigger another world war, and eventually drive humans into slavery.

We still don't know which direction AI will take. But as the science and technology of artificial intelligence continues to improve at a steady pace, our expectations and definition of AI will shift, and what we consider AI today might become the mundane functions of tomorrow's computers.

Read more:
What Is Artificial Intelligence (AI)? | PCMag

Artificial intelligence | NIST


Artificial Intelligence (AI) is rapidly transforming our world. Remarkable surges in AI capabilities have led to a number of innovations including autonomous vehicles and connected Internet of Things devices in our homes. AI is even contributing to the development of a brain-controlled robotic arm that can help a paralyzed person feel again through complex direct human-brain interfaces. These new AI-enabled systems are revolutionizing everything from commerce and healthcare to transportation and cybersecurity.

AI has the potential to impact nearly all aspects of our society, including our economy, but the development and use of the new technologies it brings are not without technical challenges and risks. AI must be developed in a trustworthy manner to ensure reliability, safety and accuracy.

NIST has a long-standing reputation for cultivating trust in technology by participating in the development of standards and metrics that strengthen measurement science and make technology more secure, usable, interoperable and reliable. This work is critical in the AI space to ensure public trust of rapidly evolving technologies, so that we can benefit from all that this field has to promise.

AI systems typically make decisions based on data-driven models created by machine learning, or the system's ability to detect and derive patterns. As the technology advances, we will need to develop rigorous scientific testing that ensures secure, trustworthy and safe AI. We also need to develop a broad spectrum of standards for AI data, performance, interoperability, usability, security and privacy.

NIST participates in interagency efforts to further innovation in AI. NIST Director and Undersecretary of Commerce for Standards and Technology Walter Copan serves on the White House Select Committee on Artificial Intelligence. Charles Romine, Director of NIST's Information Technology Laboratory, serves on the Machine Learning and AI Subcommittee.

A February 11, 2019, Executive Order on Maintaining American Leadership in Artificial Intelligence tasks NIST with developing a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies. For more information, see: https://www.nist.gov/topics/artificial-intelligence/ai-standards.

NIST research in AI is focused on how to measure and enhance the security and trustworthiness of AI systems. This includes participation in the development of international standards that ensure innovation, public trust and confidence in systems that use AI technologies. In addition, NIST is applying AI to measurement problems to gain deeper insight into the research itself as well as to better understand AI's capabilities and limitations.

The NIST AI program has two major goals:

The recently launched AI Visiting Fellow program brings nationally recognized leaders in AI and machine learning to NIST to share their knowledge and experience and to provide technical support.

See the article here:
Artificial intelligence | NIST

A.I. Artificial Intelligence (2001) – IMDb

Nominated for 2 Oscars. Another 17 wins & 68 nominations.

More Like This:

Drama | Sci-Fi

Roy Neary, an electric lineman, watches how his quiet and ordinary daily life turns upside down after a close encounter with a UFO.

Director: Steven Spielberg

Stars: Richard Dreyfuss, François Truffaut, Teri Garr

Comedy | Drama | Sci-Fi

An android endeavors to become human as he gradually acquires emotions.

Director: Chris Columbus

Stars: Robin Williams, Embeth Davidtz, Sam Neill

Biography | Drama | History

In 1839, the revolt of Mende captives aboard a Spanish owned ship causes a major controversy in the United States when the ship is captured off the coast of Long Island. The courts must decide whether the Mende are slaves or legally free.

Director: Steven Spielberg

Stars: Djimon Hounsou, Matthew McConaughey, Anthony Hopkins

Drama | Mystery | Sci-Fi

Dr. Ellie Arroway, after years of searching, finds conclusive radio proof of extraterrestrial intelligence, sending plans for a mysterious machine.

Director: Robert Zemeckis

Stars: Jodie Foster, Matthew McConaughey, Tom Skerritt

Action | Crime | Mystery

In a future where a special police unit is able to arrest murderers before they commit their crimes, an officer from that unit is himself accused of a future murder.

Director: Steven Spielberg

Stars: Tom Cruise, Colin Farrell, Samantha Morton

Action | Drama | History

Based on the true story of the Black September aftermath, about the five men chosen to eliminate the ones responsible for that fateful day.

Director: Steven Spielberg

Stars: Eric Bana, Daniel Craig, Marie-Josée Croze

Adventure | Sci-Fi | Thriller

As Earth is invaded by alien tripod fighting machines, one family fights for survival in this sci-fi action film.

Director: Steven Spielberg

Stars: Tom Cruise, Dakota Fanning, Tim Robbins

Action | Drama | History

A young English boy struggles to survive under Japanese occupation during World War II.

Director: Steven Spielberg

Stars: Christian Bale, John Malkovich, Miranda Richardson

Drama

A black Southern woman struggles to find her identity after suffering abuse from her father and others over four decades.

Director: Steven Spielberg

Stars: Danny Glover, Whoopi Goldberg, Oprah Winfrey

Action | Adventure | Drama

Young Albert enlists to serve in World War I after his beloved horse is sold to the cavalry. Albert's hopeful journey takes him out of England and to the front lines as the war rages on.

Director: Steven Spielberg

Stars: Jeremy Irvine, Emily Watson, David Thewlis

Drama | Sci-Fi | Thriller

A genetically inferior man assumes the identity of a superior one in order to pursue his lifelong dream of space travel.

Director: Andrew Niccol

Stars: Ethan Hawke, Uma Thurman, Jude Law

Comedy | Drama | Romance

An Eastern European tourist unexpectedly finds himself stranded in JFK airport, and must take up temporary residence there.

Director: Steven Spielberg

Stars: Tom Hanks, Catherine Zeta-Jones, Chi McBride

In the not-so-far future, the polar ice caps have melted, and the resulting rise of the ocean waters has drowned all the coastal cities of the world. Withdrawn to the interior of the continents, the human race keeps advancing, reaching the point of creating realistic robots (called mechas) to serve them. One of the mecha-producing companies builds David, an artificial kid who is the first to have real feelings, especially a never-ending love for his "mother", Monica. Monica is the woman who adopted him as a substitute for her real son, who remains in cryo-stasis, stricken by an incurable disease. David is living happily with Monica and her husband, but when their real son returns home after a cure is discovered, his life changes dramatically. Written by Chris Makrozahopoulos

Budget:$100,000,000 (estimated)

Opening Weekend USA: $29,352,630, 1 July 2001

Gross USA: $78,616,689

Cumulative Worldwide Gross: $235,926,552

Runtime: 146 min

Aspect Ratio: 1.85 : 1

Read the original post:
A.I. Artificial Intelligence (2001) - IMDb

Artificial Intelligence in Human Resource Management

While artificial intelligence may once have seemed like a product of science fiction, most professionals today understand that the adoption of smart technology is actively changing workplaces. There are applications of AI throughout nearly every profession and industry, and human resources careers are no exception.

A recent survey conducted by Oracle and Future Workplace found that human resources professionals believe AI can present opportunities for mastering new skills and gaining more free time, allowing HR professionals to expand their current roles in order to be more strategic within their organization.

Among HR leaders who participated in the survey, however, 81 percent said that they find it challenging to keep up with the pace of technological changes at work. As such, it is more important now than ever before for human resources professionals to understand the ways in which AI is reshaping the industry.

Read on to explore what artificial intelligence entails, how it is applied to the world of human resources management, and how HR professionals can prepare for the future of the field today.

At a high level, artificial intelligence (AI) is a technology that allows computers to learn from and make or recommend actions based on previously collected data. In terms of human resources management, artificial intelligence can be applied in many different ways to streamline processes and improve efficiency.

Uwe Hohgrawe, lead faculty for Northeastern's Master of Professional Studies in Analytics program, explains: "We as humans see the information in front of us and use our intelligence to draw conclusions. Machines are not intelligent, but we can make them appear intelligent by feeding them the right information and technology."

Learn More: AI & Other Trends Defining the HRM Industry

While organizations are adopting AI into their human resources processes at varying rates, it is clear that the technology will have a lasting impact on the field as it becomes more widely accepted. For this reason, it is important that HR professionals prepare themselves for these changes by understanding what the technology is and how it is applied across various functions.

Learn more about earning an advanced degree in Human Resources Management

LEARN MORE

Among the numerous applications of AI in the human resources sector, some of the first changes HR professionals should expect to see involve recruitment and onboarding, employee experience, process improvement, and the automation of administrative tasks.

While many organizations are already beginning to integrate AI technology into their recruiting efforts, the vast majority are not. In fact, Deloitte's 2019 Global Human Capital Trends survey found that only 6 percent of respondents believed they had best-in-class recruitment processes in terms of technology, while 81 percent believed their organization's processes were standard or below standard. For this reason, there are tremendous opportunities for professionals to adapt their processes and reap the benefits of this advanced technology.

During the recruitment process, AI can be used to the benefit of not only the hiring organization but its job applicants, as well. For example, AI technology can streamline application processes by designing more user-friendly forms that a job applicant is more likely to complete, effectively reducing the number of abandoned applications.

While this approach has made the role of the human resources department in recruitment much easier, artificial intelligence also allows for simpler and more meaningful applications on the candidate's end, which has been shown to improve application completion rates.

Additionally, AI has played an important role in candidate rediscovery. By maintaining a database of past applicants, AI technology can analyze the existing pool and identify candidates who would be a good fit for new roles as they open up. Rather than expending time and resources searching for fresh talent, HR professionals can use this technology to identify qualified candidates more quickly and easily than ever before.
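The article does not say how that matching works; one common, minimal approach is to rank past applicants' resumes against a new job description by TF-IDF cosine similarity. The resume texts and role description below are invented:

```python
# Rank stored resumes against a new role description (toy data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_resumes = {
    "applicant_1": "python data analysis sql reporting dashboards",
    "applicant_2": "recruiting onboarding employee relations hris",
    "applicant_3": "machine learning python statistics experimentation",
}
new_role = "data analyst with python and sql experience"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([new_role] + list(past_resumes.values()))
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

for name, score in sorted(zip(past_resumes, scores), key=lambda x: -x[1]):
    print(f"{name}: {score:.2f}")   # highest score = best rediscovery match
```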

Once hiring managers have found the best fit for their open positions, the onboarding process begins. With the help of AI, this process doesn't have to be restricted to standard business hours, a huge improvement over onboarding processes of the past.

Instead, AI technology allows new hires to utilize human resources support at any time of day and in any location through the use of chatbots and remote support applications. This change not only provides employees with the ability to go through the onboarding process at their own pace, but also reduces the administrative burden and typically results in faster integration.

In addition to improvements to the recruitment process, HR professionals can also utilize artificial intelligence to boost internal mobility and employee retention.

Through personalized feedback surveys and employee recognition systems, human resources departments can gauge employee engagement and job satisfaction more accurately today than ever before. This is incredibly beneficial considering how important it is to understand the overall needs of employees; there are several key organizational benefits to having this information as well.

According to a recent report from the Human Resources Professional Association, some AI software can evaluate key indicators of employee success in order to identify employees who should be promoted, thus driving internal mobility. Doing so has the potential to significantly reduce talent acquisition costs and bolster employee retention rates.

This technology is not limited to identifying opportunities to promote from within, however; it can also predict who on a team is most likely to quit. Having this knowledge as early as possible allows HR professionals to deploy retention efforts before it's too late, which can strategically reduce employee attrition.
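The report does not describe such software's internals, so here is only a minimal sketch of what an attrition predictor can look like, with invented features (tenure in years, engagement score, salary percentile) and toy data:

```python
# Toy attrition model: estimate the probability an employee quits.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Features: [tenure_years, engagement_score, salary_percentile]
X = np.array([
    [0.5, 2.1, 30], [1.0, 3.0, 40], [6.0, 8.5, 80],
    [0.8, 2.5, 35], [7.5, 9.0, 85], [5.0, 7.8, 70],
])
y = np.array([1, 1, 0, 1, 0, 0])   # 1 = left the company

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
p_quit = model.predict_proba([[1.2, 2.8, 38]])[0, 1]
print(f"Estimated quit probability: {p_quit:.0%}")
```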

One of the key benefits of leveraging artificial intelligence in various human resources processes is actually the same as it is in other disciplines and industries: automating low-value, easily repeatable administrative tasks gives HR professionals more time to contribute to strategic planning at the organizational level. This, in turn, enables the HR department to become a strategic business partner within its organization.

Smart technologies can automate processes such as the administration of benefits, pre-screening candidates, scheduling interviews, and more. Although each of these functions is important to the overall success of an organization, carrying out the tasks involved in such processes is generally time-consuming, and the burden of these duties often means that HR professionals have less time to contribute to serving their employees in more impactful ways.

Deploying AI software to automate administrative tasks can ease this burden. For instance, a study by Eightfold found that HR personnel who utilized AI software performed administrative tasks 19 percent more effectively than those in departments that did not use such technology. With the time that is saved, HR professionals can devote more energy to strategic planning at the organizational level.

While it is clear that artificial intelligence will continue to positively shape the field of human resources management in the coming years, HR professionals should also be aware of the challenges that they might face.

The most common concerns among HR leaders focus primarily on making AI simpler and safer to use. In fact, the most common factors preventing people from using AI at work are security and privacy concerns. Additionally, 31 percent of respondents in Oracle's survey said they would rather interact with a human in the workplace than a machine. Moving forward, HR professionals will need to be prepared to address these concerns by staying on top of trends and technology as they evolve.

"People will need to be aware of ethical and privacy questions when using this technology," Hohgrawe says. "In human resources, [AI] can involve using sensitive information to create sensitive insights."

For instance, employees want their organizations to respect their personal data and to ask for permission before using such technology to gather information about them. However, organizations also want to feel protected from data breaches, and HR professionals must take the appropriate security measures into account.

To prepare for the future of human resources management, professionals should take the necessary steps to learn about current trends in the field, as well as lay a strong foundation of HR knowledge that they can build upon as the profession evolves.

Staying up to date with industry publications and networking with leaders in the field is a great way to stay abreast of current trends like the rapid adoption of artificial intelligence technologies. Building your foundational knowledge of key human resource management theories, strategy, and ethics, on the other hand, is best achieved through higher education.

Although there are many certifications and courses available that focus on specific HR topics, earning an advanced degree like a Master of Science in Human Resources Management provides students with a more holistic approach to understanding the connection between an organization and its people.

"At Northeastern, we highlight the importance of three literacies: data literacy, technological literacy, and humanic literacy. That combination is one of the areas where I believe we will pave the way in the future," Hohgrawe says. "This also allows us to explore augmented artificial intelligence in a way that appreciates the relationship between human, machine, and data."

Students looking to specialize in AI also have the opportunity to declare a concentration in artificial intelligence within Northeastern's human resources management program. Those who specialize in this aspect of the industry will study topics such as human resources information processing, advanced analytical utilization, and AI communication and visualization. Similarly, those who seek a more technical master's degree might consider Northeastern's Master of Professional Studies in Enterprise Intelligence, which also includes a concentration in AI for human resources.

No matter each student's specific path, those who choose to study at Northeastern will have the unique chance to learn from practitioners with advanced knowledge and experience in the field. Many of Northeastern's faculty have previously worked or are currently working in the human resources management field, enabling them to bring a unique perspective to the classroom and educate students on the real-world challenges that HR professionals face today.

Between the world-class faculty members and the multitude of experiential learning opportunities provided during the pursuit of a master's degree, aspiring HR professionals will graduate from Northeastern's program with the unique combination of experience and expertise needed to land a lucrative role in this growing field.

Interested in advancing your career in HR? Explore Northeastern's Master of Science in Human Resources Management program and consider taking the next step toward a career in this in-demand industry.

The rest is here:
Artificial Intelligence in Human Resource Management