Archive for the ‘Machine Learning’ Category

Machine Learning Project Aims To Improve AM – Metrology and Quality News

Machine learning technology will be used to make the additive manufacturing (AM) process of metallic alloys for aerospace cheaper and faster, encouraging production of lightweight, energy-efficient aircraft to support net zero targets for aviation.

Project MEDAL (Machine Learning for Additive Manufacturing Experimental Design) is led by Intellegens, a University of Cambridge (UK) spin-out specialising in artificial intelligence, together with the University of Sheffield AMRC North West and global aerospace giant Boeing. It aims to accelerate the product development lifecycle of aerospace components by using a machine learning model to optimise additive manufacturing (AM) processing parameters for new metal alloys at a lower cost and faster rate.

AM is a group of technologies that create 3D objects from computer-aided design (CAD) data. AM techniques reduce material waste and energy usage; allow easy prototyping, optimisation and improvement of components; and enable the manufacture of components with superior engineering performance over their lifecycle. The global AM market is worth £12bn and is expected to triple in size over the next five years. Project MEDAL's research will concentrate on metal laser powder bed fusion, the most widely used AM approach in industry, focusing on the key parameter variables required to manufacture high-density, high-strength parts.

The project is part of the National Aerospace Technology Exploitation Programme (NATEP), a £10 million initiative for UK SMEs to develop innovative aerospace technologies, funded by the Department for Business, Energy and Industrial Strategy and delivered in partnership with the Aerospace Technology Institute (ATI) and Innovate UK. Intellegens was a start-up in the first group of companies to complete the ATI Boeing Accelerator last year.

Ben Pellegrini, CEO of Intellegens, said: "We are very excited to be launching this project in conjunction with the AMRC. The intersection of machine learning, design of experiments and additive manufacturing holds enormous potential to rapidly develop and deploy custom parts, not only in aerospace, as proven by the involvement of Boeing, but in medical, transport and consumer product applications."

James Hughes, Research Director for University of Sheffield AMRC North West, said the project will build the AMRC's knowledge and expertise in alloy development so it can help other UK manufacturers.

"At the AMRC we have experienced first-hand, and through our partner network, how onerous it is to develop a robust set of process parameters for AM. It relies on a multi-disciplinary team of engineers and scientists and comes at great expense in both time and capital equipment," said Hughes. "It is our intention to develop a robust, end-to-end methodology for process parameter development that encompasses how we operate our machinery right through to how we generate response variables quickly and efficiently. Intellegens' AI-embedded platform Alchemite will be at the heart of all of this."

"There are many barriers to the adoption of metallic AM but, by providing users, and maybe more importantly new users, with the tools they need to process a required material, this should not be one of them. With the AMRC's knowledge of AM and Intellegens' AI tools, all the required experience and expertise is in place to deliver a rapid, data-driven software toolset for developing parameters for metallic AM processes, making them cheaper and faster."

Sir Martin Donnelly, president of Boeing Europe and managing director of Boeing in the UK and Ireland, said the project shows how industry can successfully partner with government and academia to spur UK innovation.

"We are proud to see this project move forward because of what it promises aviation and manufacturing, and because of what it represents for the UK's innovation ecosystem," Donnelly said. "We helped found the AMRC two decades ago, Intellegens was one of the companies we invested in as part of the ATI Boeing Accelerator, and we have longstanding research partnerships with Cambridge University and the University of Sheffield. We are excited to see what comes from this continued collaboration and how we might replicate this formula in other ways within the UK and beyond."

Aerospace components have to withstand certain loads and temperatures, and some materials are limited in what they can offer. There is also a simultaneous push for lower weight and higher temperature resistance for better fuel efficiency, bringing new or previously impractical-to-machine metals into the aerospace material mix.

One of the main drawbacks of AM is the limited selection of materials currently available. The design of new materials, particularly in the aerospace industry, requires expensive and extensive testing and certification cycles, which can take longer than a year to complete and cost as much as £1 million ($1.35 million). Project MEDAL aims to accelerate this process, using machine learning (ML) to rapidly optimise AM processing parameters for new metal alloys, making the development process more time- and cost-efficient.

Pellegrini said experimental design techniques are extremely important to develop new products and processes in a cost-effective and confident manner. The most common approach is Design of Experiments (DOE), a statistical method that builds a mathematical model of a system by simultaneously investigating the effects of various factors.

"DOE is a more efficient, systematic way of choosing and carrying out experiments compared to the Change One Separate variable at a Time (COST) approach. However, the high number of experiments required to obtain a reliable covering of the search space means that DOE can still be a lengthy and costly process, which can be improved," explained Pellegrini.
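To make the contrast concrete, here is a minimal sketch, in Python, of how the run counts compare between a full-factorial DOE and a change-one-variable-at-a-time sweep. The parameter names and levels are invented for illustration and are not taken from Project MEDAL.

```python
from itertools import product

# Illustrative laser powder bed fusion parameters (hypothetical levels,
# not values from Project MEDAL).
levels = {
    "laser_power_W": [200, 275, 350],
    "scan_speed_mm_s": [600, 900, 1200],
    "hatch_spacing_um": [80, 100, 120],
}

# Full-factorial DOE: every combination of every level, so factor
# interactions are covered but the run count grows multiplicatively.
doe_runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]

# COST-style sweep: vary one factor at a time around a baseline.
baseline = {name: vals[0] for name, vals in levels.items()}
cost_runs = [{**baseline, name: v} for name, vals in levels.items() for v in vals[1:]]

print(f"Full-factorial DOE runs: {len(doe_runs)}")   # 3 * 3 * 3 = 27
print(f"One-at-a-time runs:      {len(cost_runs)}")  # 3 * 2 = 6
```

The sweep is cheaper but cannot reveal interactions between factors, while the full factorial covers them at the cost of many more builds, which is exactly the gap a machine-learning-guided approach tries to close.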

The machine learning solution in this project can reduce the number of experimental cycles needed by around 80%. The software platform will be able to suggest the most important experiments needed to optimise AM processing parameters, in order to manufacture parts that meet specific target properties. The platform will make the development process for AM metal alloys more time- and cost-efficient. This will in turn accelerate the production of more lightweight and integrated aerospace components, leading to more efficient aircraft and improved environmental impact.
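The article does not describe how Alchemite chooses experiments, but the general idea of letting a model's uncertainty propose the next most informative runs can be sketched as follows. This is an illustrative active-learning step using scikit-learn and synthetic data with hypothetical parameter ranges; it is not the Alchemite algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy stand-in for early experimental data: processing parameters
# (power, speed, hatch spacing) -> measured relative density in %.
X_done = rng.uniform([200, 600, 80], [350, 1200, 120], size=(12, 3))
y_done = rng.uniform(97.0, 99.9, size=12)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_done, y_done)

# Candidate parameter sets that have not been run yet.
X_candidates = rng.uniform([200, 600, 80], [350, 1200, 120], size=(500, 3))

# Use the spread across the forest's trees as a rough uncertainty estimate
# and propose the most uncertain candidates as the next experiments.
per_tree = np.stack([tree.predict(X_candidates) for tree in model.estimators_])
uncertainty = per_tree.std(axis=0)
next_experiments = X_candidates[np.argsort(uncertainty)[-5:]]
print(next_experiments)
```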

Intellegens will produce a software platform with an underlying machine learning algorithm based on its Alchemite platform. It has already been used successfully to overcome material design problems in a University of Cambridge research project with a leading OEM where a new alloy was designed, developed and verified in 18 months rather than the expected 20-year timeline, saving about $10m.

Ian Brooks, AM technical fellow at the University of Sheffield AMRC North West, said Project MEDAL will harness two key technologies: artificial intelligence and additive manufacturing.

For more information: http://www.amrc.co.uk


See the article here:
Machine Learning Project Aims To Improve AM – Metrology and Quality News

CERC plans to embrace AI, machine learning to improve functioning – Business Standard

The apex power sector regulator, the Central Electricity Regulatory Commission (CERC), is planning to set up an artificial intelligence (AI)-based regulatory expert system tool (REST) to improve access to information and assist the commission in the discharge of its duties. So far, only the Supreme Court (SC) has an electronic filing (e-filing) system and is in the process of building an AI-based back-end service.

The CERC will be the first such quasi-judicial regulatory body to embrace AI and machine learning (ML). The decision comes at a time when the CERC has been shut for four ...


First Published: Fri, January 15 2021. 06:10 IST

Read more:
CERC plans to embrace AI, machine learning to improve functioning - Business Standard

Machine Learning and Life-and-Death Decisions on the Battlefield – War on the Rocks

In 1946, the New York Times revealed one of World War II's top secrets: "an amazing machine which applies electronic speeds for the first time to mathematical tasks hitherto too difficult and cumbersome for solution." One of the machine's creators offered that its purpose was to replace, as far as possible, the human brain. While this early version of a computer did not replace the human brain, it did usher in a new era in which, according to the historian Jill Lepore, technological change wildly outpaced the human capacity for moral reckoning.

That era continues with the application of machine learning to questions of command and control. The application of machine learning is in some areas already a reality: the U.S. Air Force, for example, has used it as a working aircrew member on a military aircraft, and the U.S. Army is using it to choose the right shooter for a target identified by an overhead sensor. The military is making strides toward using machine learning algorithms to direct robotic systems, analyze large sets of data, forecast threats, and shape strategy. Using algorithms in these areas and others offers awesome military opportunities, from saving person-hours in planning to outperforming human pilots in dogfights to using a multihypothesis semantic engine to improve our understanding of global events and trends. Yet with the opportunity of machine learning comes ethical risk: the military could surrender life-and-death choice to algorithms, and surrendering choice abdicates one's status as a moral actor.

So far, the debate about algorithms' role in battlefield choice has been either/or: Either algorithms should make life-and-death choices because there is no other way to keep pace on an increasingly autonomous battlefield, or humans should make life-and-death choices because there is no other way to maintain moral standing in war. This is a false dichotomy. Choice is not a unitary thing to be handed over either to algorithms or to people. At all levels of decision-making (i.e., tactical, operational, and strategic), choice is the result of a several-step process. The question is not whether algorithms or humans should make life-and-death choices, but rather which steps in the process each should be responsible for. By breaking choice into its constituent parts and training servicemembers in decision science, the military can both increase decision speed and maintain moral standing. This article proposes how it can do both. It describes the constituent components of a choice, then discusses which of those components should be performed by machine learning algorithms and which require human input.

What Decisions Are and What It Takes To Make Them

Consider a fighter pilot hunting surface-to-air missiles. When the pilot attacks, she is determining that her choice, relative to other possibilities before her, maximizes expected net benefit, or utility. She may not consciously process the decision in these terms and may not make the calculation perfectly, but she is nonetheless determining which decision optimizes expected costs and benefits. To be clear, the example of the fighter pilot is not meant to bound the discussion. The basic conceptual process is the same whether the decision-makers are trigger-pullers on the front lines or commanders in distant operations centers. The scope and particulars of a decision change at higher levels of responsibility, of course, from risking one unit to many, or risking one bystander's life to risking hundreds. Regardless of where the decision-maker sits, or rather where the authority to choose to employ force lawfully resides, choice requires the same four fundamental steps.

The first step is to list the alternatives available to the decision-maker. The fighter pilot, again just for example, might have two alternatives: attack the missile system from a relatively safer long-range approach, or attack from closer range with more risk but a higher probability of a successful attack. The second step is to take each of these alternatives and define the relevant possible results. In this case, the pilot's relevant outcomes might include killing the missile while surviving, killing the missile without surviving, failing to kill the system but surviving, and, lastly, failing to kill the missile while also failing to survive.

The third step is to make a conditional probability estimate, or an estimate of the likelihood of each result assuming a given alternative. If the pilot goes in close, what is the probability that she kills the missile and survives? What is the same probability for the attack from long range? And so on for each outcome of each alternative.

So far the pilot has determined what she can do, what may happen as a result, and how likely each result is. She now needs to say how much she values each result. To do this she needs to identify how much she cares about each dimension of value at play in the choice, which in highly simplified terms are the benefit to mission that comes from killing the missile, and the cost that comes from sacrificing her life, the lives of targeted combatants, and the lives of bystanders. It is not enough to say that killing the missile is beneficial and sacrificing life is costly. She needs to put benefit and cost into a single common metric, sometimes called a utility, so that the value of one can be directly compared to the value of the other. This relative comparison is known as a value trade-off, the fourth step in the process. Whether the decision-maker is on the tactical edge or making high-level decisions, the trade-off takes the same basic form: The decision-maker weighs the value of attaining a military objective against the cost of dollars and lives (friendly, enemy, and civilian) needed to attain it. This trade-off is at once an ethical and a military judgment: it puts a price on life at the same time that it puts a price on a military objective.

Once these four steps are complete, rational choice is a matter of fairly simple math. Utilities are weighted by an outcome's likelihood, so high-likelihood outcomes get more weight and are more likely to drive the final choice; the rational choice is the alternative whose probability-weighted utilities sum to the highest expected value.
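As a rough illustration of that math, the sketch below computes expected utilities for the two alternatives in the pilot example. All probabilities and utility values are invented for illustration; the code simply makes the bookkeeping explicit.

```python
# Steps 1-3 (machine-suited): alternatives, outcomes, and conditional probabilities.
# All numbers here are invented for illustration.
alternatives = {
    "attack_close": {
        ("kill", "survive"): 0.70, ("kill", "lost"): 0.10,
        ("miss", "survive"): 0.10, ("miss", "lost"): 0.10,
    },
    "attack_long_range": {
        ("kill", "survive"): 0.40, ("kill", "lost"): 0.02,
        ("miss", "survive"): 0.55, ("miss", "lost"): 0.03,
    },
}

# Step 4 (human-supplied): the value trade-off on a single common scale.
utility = {
    ("kill", "survive"): 100, ("kill", "lost"): -400,
    ("miss", "survive"): 0,   ("miss", "lost"): -500,
}

for name, outcomes in alternatives.items():
    assert abs(sum(outcomes.values()) - 1.0) < 1e-9   # probabilities must sum to 1
    expected_utility = sum(p * utility[o] for o, p in outcomes.items())
    print(f"{name}: expected utility = {expected_utility:.1f}")
```

With these particular numbers the long-range attack scores higher; change the human-supplied utilities and the rational choice can flip, which is exactly why the trade-off belongs in human hands.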

It is important to note that, for both human and machine decision-makers, rational is not necessarily the same thing as ethical or successful. The rational choice process is the best way, given uncertainty, to optimize what decision-makers say they value. It is not a way of saying that one has the right values and does not guarantee a good outcome. Good decisions will still occasionally lead to bad outcomes, but this decision-making process optimizes results in the long run.

At least in the U.S. Air Force, pilots do not consciously step through expected utility calculations in the cockpit. Nor is it reasonable to assume that they should; performing the mission is challenging enough. For human decision-makers, explicitly working through the steps of expected utility calculations is impractical, at least on a battlefield. It's a different story, however, with machines. If the military wants to use algorithms to achieve decision speed in battle, then it needs to make the components of a decision computationally tractable; that is, the four steps above need to reduce to numbers. The question becomes whether it is possible to provide the numbers in such a way that combines the speed that machines can bring with the ethical judgment that only humans can provide.

Where Algorithms Are Better and Where Human Judgment Is Necessary

Computer and data science have a long way to go to exercise the power of machine learning and data representation assumed here. The Department of Defense should continue to invest heavily in the research and development of modeling and simulation capabilities. However, as it does that, we propose that algorithms list the alternatives, define the relevant possible results, and give conditional probability estimates (the first three steps of rational decision-making), with occasional human inputs. The fourth step of determining value should remain the exclusive domain of human judgment.

Machines should generate alternatives and outcomes because they are best suited for the complexity and rule-based processing that those steps require. In the simplified example above there were only two possible alternatives (attack from close or far) with four possible outcomes (kill the missile and survive, kill the missile and don't survive, don't kill the missile and survive, and don't kill the missile and don't survive). The reality of future combat will, of course, be far more complicated. Machines will be better suited for handling this complexity, exploring numerous solutions, and illuminating options that warfighters may not have considered. This is not to suggest, though, that humans will play no role in these steps. Machines will need to make assumptions and pick starting points when generating alternatives and outcomes, and it is here that human creativity and imagination can help add value.

Machines are hands-down better suited for the third step: estimating the probabilities of different outcomes. Human judgments of probability tend to rely on heuristics, such as how available examples are in memory, rather than more accurate indicators like relevant base rates, or how often a given event has historically occurred. People are even worse when it comes to understanding probabilities for a chain of events. Even a relatively simple combination of two conditional probabilities is beyond the reach of most people. There may be openings for human input when unrepresentative training data encodes bias into the resulting algorithms, something humans are better equipped to recognize and correct. But even then, the departures should be marginal, rather than the complete abandonment of algorithmic estimates in favor of intuition. Probability, like long division, is an arena best left to machines.
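For example, chaining just two conditional probabilities is a single multiplication, yet it is the kind of product people routinely misjudge. The numbers below are invented for illustration.

```python
# P(kill and survive | close) = P(kill | close) * P(survive | close, kill)
p_kill_given_close = 0.80
p_survive_given_close_and_kill = 0.875
p_kill_and_survive_given_close = p_kill_given_close * p_survive_given_close_and_kill
print(p_kill_and_survive_given_close)  # 0.70
```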

While machines take the lead with occasional human input in steps one through three, the opposite is true for the fourth step of making value trade-offs. This is because value trade-offs capture both ethical and military complexity, as many commanders already know. Even with perfect information (e.g., the mission will succeed but it will cost the pilot's life) commanders can still find themselves torn over which decision to make. Indeed, whether and how one should make such trade-offs is the essence of ethical theories like deontology or consequentialism. And prioritization of which military objectives will most efficiently lead to success (however defined) is an always-contentious and critical part of military planning.

As long as commanders and operators remain responsible for trade-offs, they can maintain control and responsibility for the ethicality of the decision even as they become less involved in the other components of the decision process. Of note, this control and responsibility can be built into the utility function in advance, allowing systems to execute at machine speed when necessary.

A Way Forward

Incorporating machine learning and AI into military decision-making processes will be far from easy, but it is possible and a military necessity. China and Russia are using machine learning to speed their own decision-making, and unless the United States keeps pace it risks finding itself at a serious disadvantage on future battlefields.

The military can ensure the success of machine-aided choice by ensuring that the appropriate division of labor between human and machines is well understood by both decision-makers and technology developers.

The military should begin by expanding developmental education programs so that they rigorously and repeatedly cover decision science, something the Air Force has started to do in its Pinnacle sessions, its executive education program for two- and three-star generals. Military decision-makers should learn the steps outlined above, and also learn to recognize and control for inherent biases, which can shape a decision as long as there is room for human input. Decades of decision science research have shown that intuitive decision-making is replete with systematic biases like overconfidence, irrational attention to sunk costs, and changes in risk preference based merely on how a choice is framed. These biases are not restricted just to people. Algorithms can show them as well when training data reflects biases typical of people. Even when algorithms and people split responsibility for decisions, good decision-making requires awareness of and a willingness to combat the influence of bias.

The military should also require technology developers to address ethics and accountability. Developers should be able to show that algorithmically generated lists of alternatives, results, and probability estimates are not biased in such a way as to favor wanton destruction. Further, any system addressing targeting, or the pairing of military objectives with possible means of affecting those objectives, should be able to demonstrate a clear line of accountability to a decision-maker responsible for the use of force. One means of doing so is to design machine learning-enabled systems around the decision-making model outlined in this article, which maintains accountability of human decision-makers through their enumerated values. To achieve this, commanders should insist on retaining the ability to tailor value inputs. Unless input opportunities are intuitive, commanders and troops will revert to simpler, combat-tested tools with which they are more comfortable: the same old radios or weapons or, for decision purposes, slide decks. Developers can help make probability estimates more intuitive by providing them in visual form. Likewise, they can make value trade-offs more intuitive by presenting different hypothetical (but realistic) choices to assist decision-makers in refining their value judgements.

The unenviable task of commanders is to imagine a number of potential outcomes given their particular context and assign a numerical score, or utility, such that meaningful comparisons can be made between them. For example, a commander might place a value of 1,000 points on the destruction of an enemy aircraft carrier and -500 points on the loss of a fighter jet. If this is an accurate reflection of the commander's values, she should be indifferent between an attack with no fighter losses and one enemy carrier destroyed and one that destroys two carriers but costs her two fighters. Both are valued equally at 1,000 points. If the commander strongly prefers one outcome over the other, then the points should be adjusted to better reflect her actual values, or else an algorithm using that point system will make choices inconsistent with the commander's values. This is just one example of how to elicit trade-offs, but the key point is that the trade-offs need to be given in precise terms.
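A small sketch of that point system, using the same illustrative values, shows how equal scores encode indifference:

```python
# Illustrative point values from the example above.
value = {"carrier_destroyed": 1000, "fighter_lost": -500}

def score(carriers_destroyed: int, fighters_lost: int) -> int:
    """Total points for an outcome under the commander's stated values."""
    return (carriers_destroyed * value["carrier_destroyed"]
            + fighters_lost * value["fighter_lost"])

print(score(1, 0))  # 1000: one carrier destroyed, no fighters lost
print(score(2, 2))  # 1000: two carriers destroyed at the cost of two fighters
# Equal scores imply indifference. If the commander is not actually indifferent,
# the point values should be revised until the scores match her preferences.
```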

Finally, the military should pay special attention to helping decision-makers become proficient in their roles as appraisers of value, particularly with respect to decisions focused on whose life to risk, when, and for what objective. In the command-and-control paradigm of the future, decision-makers will likely be required to document such trade-offs in explicit forms so machines can understand them (e.g., "I recognize there is a 12 percent chance that you won't survive this mission, but I judge the value of the target to be worth the risk").

If decision-makers at the tactical, operational, or strategic levels are not aware of or are unwilling to pay these ethical costs, then the construct of machine-aided choice will collapse. It will either collapse because machines cannot assist human choice without explicit trade-offs, or because decision-makers and their institutions will be ethically compromised by allowing machines to obscure the trade-offs implied by their value models. Neither is an acceptable outcome. Rather, as an institution, the military should embrace the requisite transparency that comes with the responsibility to make enumerated judgements about life and death. Paradoxically, documenting risk tolerance and value assignment may serve to increase subordinate autonomy during conflict. A major advantage of formally modeling a decision-maker's value trade-offs is that it allows subordinates, and potentially even autonomous machines, to take action in the absence of the decision-maker. This machine-aided decision process enables decentralized execution at scale that reflects the leader's values better than even the most carefully crafted rules of engagement or commander's intent. As long as trade-offs can be tied back to a decision-maker, then ethical responsibility lies with that decision-maker.

Keeping Values Preeminent

The Electronic Numerical Integrator and Computer, now an artifact of history, was the top secret that the New York Times revealed in 1946. Though important as a machine in its own right, the computer's true significance lay in its symbolism. It represented the capacity for technology to sprint ahead of decision-makers, and occasionally pull them where they did not want to go.

The military should race ahead with investment in machine learning, but with a keen eye on the primacy of commander values. If the U.S. military wishes to keep pace with China and Russia on this issue, it cannot afford to delay in developing machines designed to execute the complicated but unobjectionable components of decision-making: identifying alternatives, outcomes, and probabilities. Likewise, if it wishes to maintain its moral standing in this algorithmic arms race, it should ensure that value trade-offs remain the responsibility of commanders. The U.S. military's professional development education should also begin training decision-makers on how to most effectively maintain accountability for the straightforward but vexing components of value judgements in conflict.

We stand encouraged by the continued debate and hard discussions on how to best leverage the incredible advancement in AI, machine learning, computer vision, and like technologies to unleash the military's most valuable weapon system: the men and women who serve in uniform. The military should take steps now to ensure that those people and their values remain the key players in warfare.

Brad DeWees is a major in the U.S. Air Force and a tactical air control party officer. He is currently the deputy chief of staff for 9th Air Force (Air Forces Central). An alumnus of the Air Force Chief of Staff's Strategic Ph.D. program, he holds a Ph.D. in decision science from Harvard University. LinkedIn.

Chris "FIAT" Umphres is a major in the U.S. Air Force and an F-35A pilot. An alumnus of the Air Force Chief of Staff's Strategic Ph.D. program, he holds a Ph.D. in decision science from Harvard University and a Master's in management science and engineering from Stanford University. LinkedIn.

Maddy Tung is a second lieutenant in the U.S. Air Force and an information operations officer. A Rhodes Scholar, she is completing dual degrees at the University of Oxford. She recently completed an M.Sc. in computer science and began the M.Sc. in social science of the internet. LinkedIn.

The views expressed here are the authors' alone and do not necessarily reflect those of the U.S. government or any part thereof.

Image: U.S. Air Force (Photo by Staff Sgt. Sean Carnes)

See the article here:
Machine Learning and Life-and-Death Decisions on the Battlefield - War on the Rocks

Machine Learning Tool Gives Early Warning of Cardiac Issues or Blood Clots in COVID Patients – HospiMedica

A team of biomedical engineers and heart specialists have developed an algorithm that warns doctors several hours before hospitalized COVID-19 patients experience cardiac arrest or blood clots.

The COVID-HEART predictor, developed by scientists at Johns Hopkins University (JHU; Baltimore, MD, USA) using data from patients treated for COVID-19, can forecast cardiac arrest in COVID-19 patients with a median early warning time of 18 hours and predict blood clots three days in advance. The machine-learning algorithm was built with more than 100 clinical data points, demographic information and laboratory results obtained from the JH-CROWN registry that Johns Hopkins established to collect COVID data from every patient in the hospital system. The scientists also added other variables reported by doctors on Twitter and from other pre-print papers.
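The article does not disclose the model behind COVID-HEART, so the sketch below is only a generic illustration of the workflow it describes: training a risk classifier on tabular clinical snapshots to flag an adverse event within a time horizon. It uses scikit-learn and synthetic data; the features and label are stand-ins, not the JH-CROWN variables.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in: each row is a patient snapshot (vitals, labs,
# ECG-derived features); the label marks an adverse event within 18 hours.
n = 2000
X = rng.normal(size=(n, 20))
risk = 1 / (1 + np.exp(-(1.5 * X[:, 0] - 1.0 * X[:, 3] - 2.0)))
y = rng.binomial(1, risk)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"Held-out ROC AUC: {auc:.3f}")
```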

The team did not anticipate that electrocardiogram data would play a critical role in the prediction of blood clotting. But once it was added, ECG data became one of the most accurate indicators for the condition. The next step for the researchers is to develop the best method for setting up the technology in hospitals to aid with the care of COVID-19 patients.

"It's an early warning system to predict in real time these two outcomes in hospitalized COVID patients," said senior author Natalia Trayanova, the Murray B. Sachs Professor of Biomedical Engineering and a professor of medicine. "The continuously updating predictor can help hospitals allocate the appropriate resources and proper interventions to attain the best outcomes for patients."

"The COVID-HEART predictor tool could help in the rapid triage of COVID-19 patients in the clinical setting, especially when resources are limited," said Allison Hays, associate professor of medicine in the Johns Hopkins University School of Medicine and the project's main clinical collaborator. "This may have implications for the treatment and closer monitoring of COVID-19 patients to help prevent these poor outcomes."

Related Links: Johns Hopkins University

Read the rest here:
Machine Learning Tool Gives Early Warning of Cardiac Issues or Blood Clots in COVID Patients - HospiMedica

Machine learning in human resources: how it works & its real-world applications – iTMunch

According to research conducted by Glassdoor, the entire interview process run by companies in the United States takes, on average, about 22.9 days, while the same process in Germany, France and the UK takes 4-9 days longer [1]. Another study, by the Society for Human Resource Management, which analysed data from more than 275,000 members in 160 countries, found that the average time taken to fill a position is 42 days [2]. Clearly, hiring is a time-consuming and tedious process. Groundbreaking technologies like cloud computing, big data, augmented reality, virtual reality, blockchain technology and the Internet of Things can play a key role in making this process move faster. Machine learning in human resources is one such technology, and it has made the recruitment process not just faster but more effective.

Machine learning (ML) is treated as a subset of artificial intelligence (AI). AI is a branch of computer science that deals with building smart machines capable of performing tasks that typically require human intelligence. Machine learning, by definition, is the study of algorithms that improve automatically over time with more data and experience. It is the science of getting machines (computers) to learn how to think and act like humans. To improve a machine learning algorithm, data is fed into it over time in the form of observations and real-world interactions. ML algorithms build models from sample or training data in order to make predictions and decisions without being explicitly programmed to do so.
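As a minimal illustration of that definition, the sketch below fits a model to a handful of labelled examples instead of hand-coded rules. The feature names and data are hypothetical, and scikit-learn is assumed as the library.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: no screening rule is hand-coded; the model
# infers one from labelled examples.
# Features: [years_of_experience, skills_matched]; label: 1 = advanced to interview.
X = [[0, 1], [1, 2], [2, 1], [3, 4], [5, 5], [6, 3], [8, 6], [10, 7]]
y = [0, 0, 0, 1, 1, 1, 1, 1]

model = LogisticRegression().fit(X, y)

print(model.predict([[4, 4], [1, 1]]))       # predictions for unseen candidates
print(model.predict_proba([[4, 4]])[0, 1])   # estimated probability of advancing
```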

Machine learning in itself is not a new technology, but its integration with the HR function of organizations has been gradual and has only recently started to have an impact. In this blog, we talk about how machine learning has contributed to making HR processes easier, how it works and what its real-world applications are. Let us begin by learning about this concept in brief.

The HR department's responsibilities with regard to recruitment used to be gathering and screening resumes, reaching out to candidates who fit the job description, lining up interviews and sending offer letters. They also included managing a new employee's onboarding process and taking care of the exit process of an employee who decides to leave. Today, the human resource department is about all of this and much more. The department is now also expected to be able to predict employee attrition and candidate success, and this is possible through AI and machine learning in HR.

The objective behind integrating machine learning into human resource processes is the identification and automation of repetitive, time-consuming tasks to free up the HR staff. By automating these processes, they can devote more time and resources to other imperative strategic projects and to actual human interactions with prospective employees. ML is capable of efficiently handling a number of HR roles, tasks and functions, described in the sections below.

SEE ALSO: The Role of AI and Machine Learning in Affiliate Marketing

An HR professional keeps track of who saw a job posting and the job portal on which each applicant saw it. They collect the CVs and resumes of all the applicants and come up with a way to categorize the data in those documents. Additionally, they schedule, standardize and streamline the entire interview process. Moreover, they keep track of the social media activities of applicants along with other relevant data. All of this data collected by the HR professional is fed into machine learning HR software from day one. Soon enough, the machine learning analytics begin analyzing the data to discover and display insights and patterns.

The opportunities for learning through the insights provided by machine learning in HR are endless. The software helps HR professionals discover things like which interviewer is better at identifying the right candidate and which job portal or job posting attracts more, or higher-quality, applicants.

With HR analytics and machine learning, fine-tuning and personalization of training is possible which makes the training experience more relevant to the freshly hired employee. It helps in identifying knowledge gaps or loopholes in training early on. It can also become a useful resource for company-related FAQs and information like company policies, code of conduct, benefits and conflict resolution.

The best way to better understand how machine learning has made HR processes more efficient is by getting acquainted with the real world applications of this technology. Let us have a look at some applications below.

SEE ALSO: The Importance of Human Resources Analytics

Scheduling is generally a time-demanding task. It includes coordinating with candidates and scheduling interviews, enhancing the onboarding experience, calling candidates for follow-ups, performance reviews, training, testing and answering common HR queries. Automating these tedious processes is one of the first applications of machine learning in human resources. ML takes away the burden of these cumbersome tasks from the HR staff by streamlining and automating them, which frees up their time to focus on bigger issues at hand. A few of the best recruitment scheduling tools are Beamery, Yello and Avature.

Once an HR professional is informed about the kind of talent that needs to be hired, one challenge is getting this information out and attracting the right set of candidates for the role. A huge number of companies trust ML for this task. Renowned job search platforms like LinkedIn and Glassdoor use machine learning and intelligent algorithms to help HR professionals filter and find the most suitable candidates for the job.

Machine learning in human resources is also used to track new and potential applicants as they come into the system. A study conducted by Capterra looked at how the use of recruitment or applicant tracking software helped recruiters. It found that 75% of the recruiters contacted used some form of recruitment or applicant tracking software, with 94% agreeing that it improved their hiring process. It further found that just 5% of recruiters thought that using applicant tracking software had a negative impact on their company [3].

Using such software also gives HR professionals access to predictive analytics, which helps them analyze whether a person would be well suited to the job and a good fit for the company. Some of the best applicant tracking software available in the market are Pinpoint, Greenhouse and ClearCompany.

If hiring an employee is difficult, retaining an employee is even more challenging. There are factors in a company that make an employee stay or move on to their next job. A study conducted by Gallup asked employees from different organizations whether they would leave or stay if certain perks were provided to them. The study found that 37% would quit their present job and take up a new job that allowed them to work remotely part-time, 54% would switch for monetary bonuses, 51% for flexible working hours and 51% for employers offering retirement plans with pensions [4]. Though employee retention depends on various factors, it is imperative for an HR professional to understand, manage and predict employee attrition.

Machine learning HR tools provide valuable data and insights into the above-mentioned factors and help HR professionals make decisions about employing someone (or not) more efficiently. By understanding this data about employee turnover, they are in a better position to take corrective measures well in advance to eliminate or minimize the issues.
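As an illustration of the kind of attrition prediction such tools perform, here is a minimal sketch using scikit-learn on synthetic HR records. The features, their assumed effects and the model choice are illustrative assumptions, not how any particular vendor's product works.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Synthetic HR records; real signals would come from the organization's own
# HR systems and engagement surveys.
n = 1000
df = pd.DataFrame({
    "tenure_years": rng.uniform(0, 15, n),
    "salary_vs_market": rng.normal(1.0, 0.15, n),
    "remote_days_per_week": rng.integers(0, 5, n),
    "overtime_hours_month": rng.uniform(0, 40, n),
})
# Assumed relationship used only to generate labels for the toy example.
p_leave = 1 / (1 + np.exp(-(0.03 * df.overtime_hours_month
                            - 3.0 * (df.salary_vs_market - 1.0)
                            - 0.4 * df.remote_days_per_week)))
df["left"] = rng.binomial(1, p_leave)

features = df.drop(columns="left")
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(features, df["left"])

# Rank employees by predicted attrition risk so HR can intervene early.
# (Scored on the training data purely for brevity; real use needs held-out data.)
df["attrition_risk"] = model.predict_proba(features)[:, 1]
print(df.sort_values("attrition_risk", ascending=False).head())
```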

An engaged employee is one who is involved in, committed to and enthusiastic about their work and workplace. The State of the Global Workplace report by Gallup found that 85% of employees in the workplace are disengaged. Translation: the majority of the workforce views their workplace negatively or only does the bare minimum to get through the day, with little to no attachment to their work or workplace. The study further addresses why employee engagement is necessary. It found that offices with more engaged employees see 10% higher customer metrics, 17% higher productivity, 20% more sales and 21% more profitability. Moreover, it found that highly engaged workplaces saw 41% less absenteeism [5].

Machine learning HR software helps the human resource department make employees more engaged. The insights provided by machine learning HR analytics software help the HR team significantly in increasing employee productivity and reducing employee turnover rates. Software from Workometry and Glint aids immeasurably in measuring, analyzing and reporting on employee engagement and employees' general feelings towards their work.

The applications of machine learning in human resources described above are already in use by HR professionals across the globe. Though the human element of human resources won't completely disappear, machine learning can guide and assist HR professionals substantially in ensuring that the various functions of this department are well aligned and that the strategic decisions made on a day-to-day basis are more accurate.

These are definitely exciting times for the HR industry and it is crucial that those working in this department are aware of the existing cutting-edge solutions available and the new trends that continue to develop.

The automation of HR functions like hiring and recruitment, training, development and retention has already had a profoundly positive effect on companies. Companies that refuse or are slow to adapt and adopt machine learning and other new technologies will find themselves at a competitive disadvantage, while those that embrace them will flourish.

SEE ALSO: Future of Human Resource Management: HR Tech Trends of 2019

For more updates and the latest tech news, keep reading iTMunch.

Sources

[1] Glassdoor (2015) Why is Hiring Taking Longer, New Insights from Glassdoor Data [Online] Available from: https://www.glassdoor.com/research/app/uploads/sites/2/2015/06/GD_Report_3-2.pdf [Accessed December 2020]

[2] Society for Human Resource Management (2016) 2016 Human Capital Benchmarking Report [Online] Available from: https://www.ebiinc.com/wp-content/uploads/attachments/2016-Human-Capital-Report.pdf [Accessed December 2020]

[3] Capterra (2015) Recruiting Software Impact Report [Online] Available from: https://www.capterra.com/recruiting-software/impact-of-recruiting-software-on-businesses [Accessed December 2020]

[4] Gallup (2017) State of the American Workplace Report [Online] Available from: https://www.gallup.com/workplace/238085/state-american-workplace-report-2017.aspx [Accessed December 2020]

[5] Gallup (2017) State of the Global Workplace [Online] Available from: https://www.gallup.com/workplace/238079/state-global-workplace-2017.aspx#formheader [Accessed December 2020]

Image Courtesy

Image 1: Background vector created by starline http://www.freepik.com

Image 2: Business photo created by yanalya http://www.freepik.com

Here is the original post:
Machine learning in human resources: how it works & its real-world applications - iTMunch