Archive for the ‘Machine Learning’ Category

AI: This COVID machine-learning tool helps swamped hospitals pick the right treatment – ZDNet

Spain has been one of the European states worst hit by the COVID-19 pandemic, with more than 1.7 million detected cases. Despite the second wave of infections that has hit the country over the past few months, the Hospital Clinic in Barcelona has succeeded in halving mortality among its coronavirus patients using artificial intelligence.

The Catalan hospital has developed a machine-learning tool that can predict when a COVID patient will deteriorate and how to customize that individual's treatment to avoid the worst outcome.

"When you have a sole patient who's in a critical state, you can take special care of them. But when there are 700 of them, you need this kind of tool," says Carol Garcia-Vidal, a physician specialized in infectious diseases and IDIBAPS researcher who has led the development of the tool.


Before the pandemic, the hospital had already been working on software to turn variable data into an analyzable form. So when the hospital started to receive COVID patients in March, it put the system to work analyzing three trillion pieces of structured and anonymized data from 2,000 patients.

The goal was to train it to recognize patterns and check what treatments were the most effective for each patient and when they should be administered.

That work underlined to Garcia-Vidal and her team that the virus doesn't manifest itself in the same way for everyone. "There are patients with an inflammatory response, patients with coagulopathies and patients who develop superinfections," Garcia-Vidal tells ZDNet. Each group needs different drugs and thus a personalized treatment.

Thanks to an EIT Health grant, the AI system has been developed into a real-time dashboard display on physicians' computers that has become one of their everyday tools. Under the supervision of an epidemiologist, the tool enables patients to be classified and offered a more personalized treatment.

"Nobody has done this before," says Garcia-Vidal, who says the researchers recently added two more patterns to the system to include the patients who are stable and can leave the hospital, thus freeing a bed, and those patients who are more likely to die. The predictions are 90% accurate.

"It's very useful for physicians with less experience and those who have a specialty that's nothing to do with COVID, such as gynecologists or traumatologists," she says. As in many countries, doctors from all specialist areas were called in to treat patients during the first wave of the pandemic.

The system is also being used during the current second wave because, according to Garcia-Vidal, the number of patients in intensive care in Catalan hospitals has jumped. The plan is to make the tool available to other hospitals.

Meanwhile, the Barcelona Supercomputing Center (BSC) is also analyzing a set of data corresponding to 3,000 medical cases generated by the Hospital Clinic during the acute phase of the pandemic in March.

The aim is to develop a model based on deep-learning neural networks that will look for common patterns and generate predictions on the evolution of symptoms. The objective is to know whether a patient is likely to need a ventilator system or be directly sent to intensive care.


Some data, such as age, sex, vital signs and medication given, is structured, but other data isn't, because it consists of text written in natural language in the form of, for example, hospital discharge and radiology reports, BSC researcher Marta Villegas explains.

Supercomputing brings the computational capacity and power to extract essential information from these reports and train models based on neural networks to predict the evolution of the disease as well as the response to treatments given the previous conditions of the patients.

This approach, based on natural language processing, is also being tested at a hospital in Madrid.

See original here:
AI: This COVID machine-learning tool helps swamped hospitals pick the right treatment - ZDNet

4 tips to upgrade your programmatic advertising with Machine Learning – Customer Think

Lomit Patel, VP of growth at IMVU and best-selling author of Lean AI, shares lessons learned and practical advice for app marketers to unlock open budgets and sustainable growth with machine learning.

The first step in the automation journey is to identify where you and your team stand. In his book Lean AI: How Innovative Startups Use Artificial Intelligence to Grow, Lomit introduces the Lean AI Autonomy Scale, which ranks companies from 0 to 5 based on their level of AI & automation adoption.

A lot of companies aren't fully relying on AI and automation to power their growth strategies. In fact, on the Lean AI Autonomy Scale from 0 to 5, most companies are at stage 2 or 3, where they rely on the AI of some of their partners without fully harnessing the potential of these tools.

Here's how app marketers can start working their way up to level 5:

Put your performance strategy to the test by setting the right indicators. Marketers' KPIs should be geared towards measuring growth. Identify the metrics that show what's driving more quality user conversions and revenue, such as:

Analyzing data is a critical step towards measuring success through the right KPIs. When getting data ready to be automated and processed with AI, marketers should make sure:

The better the data, the more effective the decisions it will allow you to make. By aggregating data, marketers gain a comprehensive view of their efforts, which in turn leads to a better understanding of success metrics.

"You've got to make sure that you're giving them [partners] the right data so that their algorithms can optimize towards your outcomes, and clearly define what success is." – Lomit Patel

The role of AI is not to replace jobs or people, but to replace tasks that people do, letting them focus on the things they are good at.

With Lean AI, the machine does a lot of the heavy lifting, allowing marketers to process data and surface insights in a way that wasn't possible before, and with more data, the accuracy rate continues to go up.

It can be used to:

"With our AI machine, we're constantly testing different audiences, creatives, bids, budgets, and moving all of those different dials. On average, we're generally running about ten thousand experiments at scale. A majority of those are based on creatives; it's become a much bigger lever for us." – Lomit Patel

There's a reason why growth partners have been around for a long time. For a lot of companies, the hassle of taking all marketing operations in-house doesn't make sense. At first, building a huge in-house data science team might seem like a great way to start leveraging AI, but:

Performance partners bring experience from working with multiple players across a number of verticals, making it easier to identify and implement the most effective automation strategy for each marketer. Their knowledge about industry benchmarks and best practices goes a long way in helping marketers outscore their competitors.

Last but not least, once you find the right partners, set them up for success by sharing the right data.

These recommendations are the takeaways from the first episode of App Marketers Unplugged. Created by Jampp, this video podcast series connects industry leaders and influencers to discuss challenges and trends with their peers.

Watch the full App Marketers Unplugged session with Lomit Patel to learn more about how Lean AI can help you gain user insights more efficiently and what marketers need to sail through the automation journey.

Read more from the original source:
4 tips to upgrade your programmatic advertising with Machine Learning - Customer Think

Tookitaki Recognised for Innovative Use of AI & Machine Learning – Regulation Asia

Singapore/Hong Kong, 15 December 2020 – Tookitaki has won the Regtech Award for AI & Machine Learning, and was highly commended in the solutions category for AML/CTF Compliance, at the 3rd Regulation Asia Awards for Excellence 2020 in an online ceremony on 15 December 2020.

Tookitaki has developed a Typology Repository Management (TRM) solution, which provides a new way of detecting money laundering through collective intelligence and continuous learning. TRM complements Tookitaki's automated machine learning approach, which builds detection models based on historical learnings and nuances within the given universe of data.

The approach represents a move away from financial institutions having to manually hard-code typologies into traditional transaction monitoring solutions, which today takes significant time, effort and investment to implement. Tookitaki's TRM provides access to typologies from regulators, financial institutions, NGOs and other bodies in machine-readable format, effectively creating thousands of risk indicators and establishing a federated ecosystem where firms can generate individual learnings in a decentralised way.

During the Covid-19 pandemic, most financial institutions generated large numbers of false positives as a result of high volumes of ATM withdrawals. Tookitaki's systems were instead able to recognise similar behaviours occurring across customer groups and update their models accordingly, avoiding ATM activity being flagged as suspicious and reducing the manual effort that would otherwise be required to investigate and close each case.
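The peer-group idea the article describes can be sketched as follows. This is a toy illustration, not Tookitaki's actual implementation; the function name, data, and threshold are all assumptions made for the example:

```python
import numpy as np

def peer_aware_alerts(withdrawals, baselines, threshold=1.0):
    """Flag customers whose ATM withdrawals spike relative to their own
    baseline, but first subtract the shift common to the whole peer
    group, so a pandemic-wide surge does not trigger mass alerts."""
    deviations = (withdrawals - baselines) / baselines
    group_shift = np.median(deviations)   # behaviour shared by the group
    residuals = deviations - group_shift  # what remains is individual
    return residuals > threshold

# Everyone's withdrawals tripled (a group-wide surge); only the last
# customer deviates beyond that shared pattern.
withdrawals = np.array([300.0, 310.0, 290.0, 900.0])
baselines = np.array([100.0, 100.0, 100.0, 100.0])
alerts = peer_aware_alerts(withdrawals, baselines)
# → only the fourth customer is flagged
```

The design point is that the anomaly score is computed against the peer group's median shift rather than against each customer's history alone, which is what suppresses the group-wide false positives described above.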

"Today's AML monitoring solutions are largely rules-based. Even when machine learning techniques are used, they are often insufficient to adapt to changing market needs and customer behaviour," said one judge on the awards panel. "Tookitaki's approach fundamentally changes the way machine learning is applied to detecting financial crime. Its machine learning models are fully explainable, sensitive data is protected, and it does away with the need to custom-build typologies into monitoring systems. This will be a game-changer in the fight against financial crime."

In the highly competitive AML/CTF Compliance category, Tookitaki was highly commended for its transaction monitoring and AML analytics solutions, which leverage its typology repository and automated machine learning to generate high-quality alerts for investigation. One of Singapore's largest banks, UOB, announced in December that it had deployed Tookitaki's AML solution to screen all UOB accounts globally, having fully tested the solution over more than two years.

In October, Tookitaki was announced as the winner in the monitoring and surveillance category at the G20 TechSprint challenge, for its cryptocurrency AML typology repository management solution.

About the Regulation Asia Awards for Excellence 2020

The Regulation Asia Awards for Excellence recognises financial institutions, technology companies, legal and consulting firms, exchanges and other players that have helped meet the challenges of the ever-changing and increasingly complex regulatory landscape in Asia Pacific. Each year, submissions are diligently evaluated and award winners selected by a panel of industry experts serving as judges.

The full list of award winners is available here.

About Regulation Asia

Regulation Asia is the leading source for actionable regulatory intelligence for Asia Pacific markets. With over 8,500 subscribers, including regulatory bodies, exchanges, banks, asset managers and service providers, Regulation Asia plays a key role in shaping the regulatory agenda.

Visit www.regulationasia.com or connect via LinkedIn or Twitter.

Excerpt from:
Tookitaki Recognised for Innovative Use of AI & Machine Learning - Regulation Asia

U.S. Special Operations Command Employs AI and Machine Learning to Improve Operations – BroadbandBreakfast.com

December 11, 2020 – In today's digital environment, winning wars requires more than boots on the ground. It also requires computer algorithms and artificial intelligence.

The United States Special Operations Command is currently playing a critical role in advancing the employment of AI and machine learning in the fight against the country's current and future adversaries, through Project Maven.

To discuss the initiatives taking place as part of the project, General Richard Clarke, who currently serves as the Commander of USSOCOM, and Richard Shultz, who has served as a security consultant to various U.S. government agencies since the mid-1980s, joined the Hudson Institute for a virtual discussion on Monday.

Among other objectives, Project Maven aims to develop and integrate the computer-vision algorithms needed to help military and civilian analysts encumbered by the sheer volume of full-motion video data that the Department of Defense collects every day in support of counterinsurgency and counterterrorism operations, according to Clarke.

When troops carry out militarized site exploration, or military raids, they bring back large quantities of computers, papers, and hard drives filled with potential evidence. To manage enormous quantities of information in real time and achieve strategic objectives, the Algorithmic Warfare Cross-Function task force, launched in April 2017, began utilizing AI to help.

"We had to find a way to put all of this data into a common database," said Clarke. Over the last few years, humans were tasked with sorting through this content, watching every video and reading every detainee report. A human cannot sort and sift through this data quickly and deeply enough, he said.

AI and machine learning have demonstrated that algorithmic warfare can aid military operations.

Project Maven initiatives helped increase the frequency of raid operations from 20 raids a month to 300 raids a month, said Shultz. AI technology increases both the number of decisions that can be made and the scale at which they are made. Faster, more effective decisions on your part are going to give enemies more issues.

Project Maven initiatives have increased the accuracy of bomb targeting. "Instead of hundreds of people working on these initiatives, today it is tens of people," said Clarke.

AI has also been used to counter adversary propaganda. "I now spend over 70 percent of my time in the information environment. If we don't influence a population first, ISIS will get information out more quickly," said Clarke.

AI and machine learning tools enable USSOCOM to understand what an enemy is sending and receiving, which narratives are false, which accounts are bots, and more; detecting these allows decision makers to make faster and more accurate calls.

Military use of machine learning for precision raids and bomb strikes naturally raises concerns. In 2018, more than 3,000 Google employees signed a petition in protest against the companys involvement with Project Maven.

In an open letter addressed to CEO Sundar Pichai, Google employees expressed concern that the U.S. military could weaponize AI and apply the technology towards refining drone strikes and other kinds of lethal attacks. "We believe that Google should not be in the business of war," the letter read.

Visit link:
U.S. Special Operations Command Employs AI and Machine Learning to Improve Operations - BroadbandBreakfast.com

Which laws are significant? Applying machine learning to classify legislation – British Politics and Policy at LSE

Radoslaw Zubek, Abhishek Dasgupta, and David Doyle introduce a novel machine-learning approach to identifying important laws. They apply the new method to classify over 9,000 UK statutory instruments, and discuss the pros and cons of their approach.

Thousands of laws are published every year. In Britain, more than 300 public acts and almost 25,000 statutory instruments reached the statute book between 2010 and 2020. But which of these laws are really significant, and which ones are relatively minor? This is an important question for businesses and individuals. It is also one that many social scientists grapple with when studying law-making.

Conventional approach

The conventional approach is to ask experts: lawyers, reporters, or policy professionals. The recipe is simple: find a group of reputable experts and ask them to classify a set of laws into those they find notable and those they do not; in the final step, combine these individual evaluations into a total score using some aggregation method.

This is a great approach and it has been employed with some success. But it is not without its problems. For one, it is time-consuming and labour-intensive. Perhaps more importantly, it is difficult to ensure that experts apply the same concept of significance and that they give equal weight to both recent and older enactments. How can we improve on it?

Our novel approach

In our recent article, we offer a proof of concept for a novel approach which we think has important advantages with respect to increased automation, reproducibility, and minimisation of recall bias. Our method has two major steps.

In the first step, we harvest seed sets of significant laws from web data. A few billion people worldwide upload millions of posts every day on a myriad of issues including legislation. By posting content online, users signal which laws they consider significant. Also, many contributors, e.g., market analysts and law firms, are specialised domain experts. We take advantage of this propensity to freely share professional opinions.

In the second step, we train a positive-unlabeled (PU) learning algorithm. Recent advances in machine learning have offered sophisticated methods for building models when only positive examples are available, including two-step methods, biased two-class classifiers, and one-class classifiers. We employ PU learning to construct a computational formula that finds laws that are similar to our seeds (positives) within a large pool of unlabeled legislation.
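To picture the two-step idea, here is a minimal nearest-centroid sketch: take the unlabeled examples farthest from the positive centroid as "reliable negatives", then classify everything against the two centroids. This is a simplified stand-in for, not a reproduction of, the authors' Rocchio-SVM pipeline, and the data below is synthetic:

```python
import numpy as np

def two_step_pu(X_pos, X_unlabeled, neg_fraction=0.3):
    """Minimal two-step PU learner (Rocchio-style).
    Step 1: treat the unlabeled points least similar to the positive
    centroid as reliable negatives.
    Step 2: label all unlabeled points by nearest centroid.
    Returns 1 (positive-like) or 0 for each unlabeled example."""
    pos_centroid = X_pos.mean(axis=0)
    dists = np.linalg.norm(X_unlabeled - pos_centroid, axis=1)
    # Step 1: the farthest fraction become reliable negatives
    n_neg = max(1, int(neg_fraction * len(X_unlabeled)))
    neg_idx = np.argsort(dists)[-n_neg:]
    neg_centroid = X_unlabeled[neg_idx].mean(axis=0)
    # Step 2: nearest-centroid decision for every unlabeled point
    d_neg = np.linalg.norm(X_unlabeled - neg_centroid, axis=1)
    return (dists < d_neg).astype(int)

# Synthetic data: positives cluster near (1, 1); the unlabeled pool
# mixes 10 hidden positives with 30 hidden negatives near (-1, -1).
rng = np.random.default_rng(0)
X_pos = rng.normal([1.0, 1.0], 0.1, size=(20, 2))
X_unl = np.vstack([rng.normal([1.0, 1.0], 0.1, size=(10, 2)),
                   rng.normal([-1.0, -1.0], 0.1, size=(30, 2))])
labels = two_step_pu(X_pos, X_unl)
```

In a real Rocchio-SVM version, step 2 would fit an SVM on the positives versus the reliable negatives instead of a nearest-centroid rule.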

Application: UK Statutory Instruments

We apply this approach to classify UK statutory instruments, the most common (and most plentiful) form of secondary legislation in the UK. In our application, we source examples of significant laws from the web pages of top-ranked UK law firms. Websites offer an attractive platform for law firms to demonstrate expertise within their practice areas. Regulatory updates drawing attention to important changes in legislation are a key part of these marketing activities. We perform an automated search of the websites of 288 leading law firms and obtain a set of 271 important instruments.

We train our model using an adapted version of an established two-step Rocchio-SVM method. Our training data consists of web-sourced positives and a set of all UK statutory instruments adopted between 2009 and 2016. To train the algorithm, we rely on two types of information: textual features obtained from explanatory notes and a battery of categorical features such as topic, department, and length.
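The combination of textual and categorical features described above can be illustrated with a toy example. The records, vocabulary, and helper function here are hypothetical stand-ins, not the authors' actual data or code:

```python
import numpy as np

# Hypothetical toy records standing in for statutory instruments:
# explanatory-note text plus categorical metadata.
instruments = [
    {"note": "amends pension contribution rates", "department": "DWP"},
    {"note": "corrects a typographical error",    "department": "HMT"},
    {"note": "introduces new emission limits",    "department": "DEFRA"},
]

# Vocabulary from the notes, and the set of categorical values
vocab = sorted({w for r in instruments for w in r["note"].split()})
departments = sorted({r["department"] for r in instruments})

def featurise(record):
    """Concatenate textual (bag-of-words counts) and categorical
    (one-hot) features into a single vector for the classifier."""
    words = record["note"].split()
    text_vec = np.array([words.count(w) for w in vocab], float)
    dept_vec = np.array([record["department"] == d for d in departments], float)
    return np.concatenate([text_vec, dept_vec])

X = np.vstack([featurise(r) for r in instruments])
```

A matrix like `X` is what a downstream classifier (an SVM in the Rocchio-SVM setting) would be trained on; in practice one would use TF-IDF weighting rather than raw counts.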

A key test for our model is whether it can successfully predict outside the training data. We evaluate our approach in three ways. We first check if our model is able to predict future law citations on the web, and we find a high true positive rate of 85%. We then compare our automated classification with hand-coded ratings, and we achieve a fairly high accuracy of 70%. Finally, we examine how the share of laws we classify as significant varies over the annual legislative cycle in the UK, and we find that our method produces estimates with high construct validity. All in all, we think our method shows good promise.

Pros and cons

Our approach has clear advantages. Automation saves time and labour, and enhances reproducibility of classifications. We can also be specific about our definition of significance. In our application, we show that lawyers post content online mainly about laws that change the regulatory status quo by a large margin. With our web-based approach, we are also able to minimise recall bias by focusing on contemporaneous evaluations that assess significance of laws around the time of their enactment.

Our method is not without its limitations, of course. As with any automated method, a trade-off exists between labeling expense and prediction accuracy, and our approach achieves moderate success in classifying more nuanced cases. We leave the task of further improving our model performance for future work.

_____________________

Note: the above draws on the authors' published work in the American Political Science Review.

About the Authors

Radoslaw Zubek is Associate Professor in the Department of Politics and International Relations at the University of Oxford.

Abhishek Dasgupta is a Research Software Engineer in the Department of Computer Science at the University of Oxford.

David Doyle is Associate Professor in the Department of Politics and International Relations at the University of Oxford.


See the rest here:
Which laws are significant? Applying machine learning to classify legislation - British Politics and Policy at LSE