Archive for the ‘Machine Learning’ Category

U.S. Special Operations Command Employs AI and Machine Learning to Improve Operations – BroadbandBreakfast.com

December 11, 2020 -- In today's digital environment, winning wars requires more than boots on the ground. It also requires computer algorithms and artificial intelligence.

The United States Special Operations Command is playing a critical role in advancing the use of AI and machine learning in the fight against the country's current and future adversaries, through Project Maven.

To discuss the initiatives taking place as part of the project, General Richard Clarke, who currently serves as the Commander of USSOCOM, and Richard Shultz, who has served as a security consultant to various U.S. government agencies since the mid-1980s, joined the Hudson Institute for a virtual discussion on Monday.

Among other objectives, Project Maven aims to develop and integrate the computer-vision algorithms needed to help military and civilian analysts encumbered by the sheer volume of full-motion video data that the Department of Defense collects every day in support of counterinsurgency and counterterrorism operations, according to Clarke.

When troops carry out militarized site exploration, or military raids, they bring back copious amounts of computers, papers, and hard drives filled with potential evidence. To manage these enormous quantities of information in real time and achieve strategic objectives, the Algorithmic Warfare Cross-Function Task Force, launched in April 2017, began using AI to help.

"We had to find a way to put all of this data into a common database," said Clarke. Over the last few years, humans were tasked with sorting through this content, watching every video and reading every detainee report. "A human cannot sort and sift through this data quickly and deeply enough," he said.

AI and machine learning have demonstrated that algorithmic warfare can aid military operations.

"Project Maven initiatives helped increase the frequency of raid operations from 20 raids a month to 300 raids a month," said Shultz. AI technology increases both the number of decisions that can be made and their scale. Faster, more effective decisions on your part give enemies more problems.

Project Maven initiatives have also increased the accuracy of bomb targeting. "Instead of hundreds of people working on these initiatives, today it is tens of people," said Clarke.

AI has also been used to counter adversary propaganda. "I now spend over 70 percent of my time in the information environment. If we don't influence a population first, ISIS will get information out more quickly," said Clarke.

AI and machine learning tools enable USSOCOM to understand what an enemy is sending and receiving, which narratives are false, which accounts are bots, and more; detecting these allows decision makers to make faster and more accurate calls.

Military use of machine learning for precision raids and bomb strikes naturally raises concerns. In 2018, more than 3,000 Google employees signed a petition in protest against the company's involvement with Project Maven.

In an open letter addressed to CEO Sundar Pichai, Google employees expressed concern that the U.S. military could weaponize AI and apply the technology toward refining drone strikes and other kinds of lethal attacks. "We believe that Google should not be in the business of war," the letter read.


Which laws are significant? Applying machine learning to classify legislation – British Politics and Policy at LSE

Radoslaw Zubek, Abhishek Dasgupta, and David Doyle introduce a novel machine-learning approach to identifying important laws. They apply the new method to classify over 9,000 UK statutory instruments, and discuss the pros and cons of their approach.

Thousands of laws are published every year. In Britain, more than 300 public acts and almost 25,000 statutory instruments reached the statute book between 2010 and 2020. But which of these laws are really significant, and which ones are relatively minor? This is an important question for businesses and individuals. It is also one that many social scientists grapple with when studying law-making.

Conventional approach

The conventional approach is to ask experts: lawyers, reporters, or policy professionals. The recipe is simple: find a group of reputable experts and ask them to classify a set of laws into those they find notable and those they do not; in the final step, combine the individual evaluations into a total score using some aggregation method.

This approach has been employed with some success, but it is not without problems. For one, it is time-consuming and labour-intensive. Perhaps more importantly, it struggles to ensure that experts apply the same concept of significance and that they give equal weight to recent and older enactments. How can we improve on it?

Our novel approach

In our recent article, we offer a proof of concept for a novel approach which we think has important advantages with respect to increased automation, reproducibility, and minimisation of recall bias. Our method has two major steps.

In the first step, we harvest seed sets of significant laws from web data. A few billion people worldwide upload millions of posts every day on a myriad of issues including legislation. By posting content online, users signal which laws they consider significant. Also, many contributors, e.g., market analysts and law firms, are specialised domain experts. We take advantage of this propensity to freely share professional opinions.

In the second step, we train a positive-unlabeled (PU) learning algorithm. Recent advances in machine learning offer methods for building models when only positive examples are available, including two-step methods, biased two-class classifiers, and one-class classifiers. We employ PU learning to construct a classifier that finds laws similar to our seeds (positives) within a large pool of unlabeled legislation.
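The two-step idea can be sketched in a few lines. This is a minimal illustration on synthetic two-dimensional "documents," not the authors' implementation: the Rocchio step is reduced to a single comparison against the positive centroid, with unlabeled points least similar to it treated as reliable negatives.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Toy feature vectors: labeled positives cluster near (1, 1); the
# unlabeled pool mixes hidden positives with clearly different points.
positives = rng.normal(loc=[1.0, 1.0], scale=0.1, size=(20, 2))
unlabeled = np.vstack([
    rng.normal(loc=[1.0, 1.0], scale=0.1, size=(10, 2)),    # hidden positives
    rng.normal(loc=[-1.0, -1.0], scale=0.1, size=(30, 2)),  # true negatives
])

# Step 1 (Rocchio-style): unlabeled points least similar to the
# positive centroid become "reliable negatives".
centroid = positives.mean(axis=0, keepdims=True)
sims = cosine_similarity(unlabeled, centroid).ravel()
reliable_negatives = unlabeled[sims < np.median(sims)]

# Step 2: train an SVM on positives vs. reliable negatives, then
# score the entire unlabeled pool.
X = np.vstack([positives, reliable_negatives])
y = np.array([1] * len(positives) + [0] * len(reliable_negatives))
clf = LinearSVC().fit(X, y)
predicted_significant = clf.predict(unlabeled)
```

On this toy data the classifier recovers the hidden positives while rejecting the rest; real legislative text would of course require the textual and categorical features described below.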

Application: UK Statutory Instruments

We apply this approach to classify UK statutory instruments, the most common (and most plentiful) form of secondary legislation in the UK. In our application, we source examples of significant laws from the web pages of top-ranked UK law firms. Websites offer an attractive platform for law firms to demonstrate expertise within their practice areas. Regulatory updates drawing attention to important changes in legislation are a key part of these marketing activities. We perform an automated search of the websites of 288 leading law firms and obtain a set of 271 important instruments.

We train our model using an adapted version of an established two-step Rocchio-SVM method. Our training data consists of web-sourced positives and a set of all UK statutory instruments adopted between 2009 and 2016. To train the algorithm, we rely on two types of information: textual features obtained from explanatory notes and a battery of categorical features such as topic, department, and length.
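Combining textual and categorical features into one design matrix is a standard move and can be sketched as follows. The mini-corpus, the "department" field, and the labels are all invented for illustration; they are not the paper's data.

```python
import numpy as np
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import OneHotEncoder
from sklearn.svm import LinearSVC

# Hypothetical explanatory-note snippets plus a categorical field.
notes = [
    "amends the national minimum wage regulations",
    "corrects a typographical error in an earlier order",
    "introduces a new licensing regime for financial services",
    "revokes an expired local traffic order",
]
departments = np.array([["Treasury"], ["Transport"], ["Treasury"], ["Transport"]])
labels = np.array([1, 0, 1, 0])  # 1 = significant (illustrative only)

# Textual features from the notes; categorical features one-hot
# encoded; both stacked side by side into a single sparse matrix.
text_features = TfidfVectorizer().fit_transform(notes)
cat_features = OneHotEncoder().fit_transform(departments)
X = hstack([text_features, cat_features])

clf = LinearSVC().fit(X, labels)
```

The same pattern extends to any number of categorical columns (topic, department, length bucket) alongside the TF-IDF text block.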

A key test for our model is whether it can successfully predict outside the training data. We evaluate our approach in three ways. First, we check whether our model predicts future law citations on the web, and we find a high true positive rate of 85%. Second, we compare our automated classification with hand-coded ratings and achieve a fairly high accuracy of 70%. Finally, we examine how the share of laws we classify as significant varies over the annual legislative cycle in the UK, and we find that our method produces estimates with high construct validity. All in all, we think our method shows good promise.
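For concreteness, the two headline metrics have simple definitions. The labels below are invented so the arithmetic comes out at familiar values; they are not the evaluation data from the paper.

```python
import numpy as np

def true_positive_rate(y_true, y_pred):
    """Share of actual positives the model recovers (also called recall)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    positives = y_true == 1
    return (y_pred[positives] == 1).mean()

def accuracy(y_true, y_pred):
    """Share of all labels the model gets right."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return (y_pred == y_true).mean()

# 20 illustrative items: 10 truly significant, 10 not.
y_true = [1] * 10 + [0] * 10
y_pred = [1] * 9 + [0] + [0] * 5 + [1] * 5  # 9/10 positives found, 5 false alarms
```

Here the true positive rate is 0.9 (9 of 10 significant laws recovered) while accuracy is only 0.7 (14 of 20 labels correct), which shows why the two numbers can diverge.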

Pros and cons

Our approach has clear advantages. Automation saves time and labour, and enhances reproducibility of classifications. We can also be specific about our definition of significance. In our application, we show that lawyers post content online mainly about laws that change the regulatory status quo by a large margin. With our web-based approach, we are also able to minimise recall bias by focusing on contemporaneous evaluations that assess significance of laws around the time of their enactment.

Our method is not without its limitations, of course. As with any automated method, a trade-off exists between labeling expense and prediction accuracy, and our approach achieves moderate success in classifying more nuanced cases. We leave the task of further improving our model performance for future work.

_____________________

Note: the above draws on the authors' published work in the American Political Science Review.

About the Authors

Radoslaw Zubek is Associate Professor in the Department of Politics and International Relations at the University of Oxford.

Abhishek Dasgupta is a Research Software Engineer in the Department of Computer Science at the University of Oxford.

David Doyle is Associate Professor in the Department of Politics and International Relations at the University of Oxford.

Photo by Fotis Fotopoulos on Unsplash.


U of Texas will stop using controversial algorithm to evaluate Ph.D. applicants – Inside Higher Ed

In 2013, the University of Texas at Austin's computer science department began using a machine-learning system called GRADE to help make decisions about who gets into its Ph.D. program -- and who doesn't. This year, the department abandoned it.

Before the announcement, which the department released in the form of a tweet reply, few had even heard of the program. Now, its critics -- concerned about diversity, equity and fairness in admissions -- say it should never have been used in the first place.

"Humans code these systems. Humans are encoding their own biases into these algorithms," said Yasmeen Musthafa, a Ph.D. student in plasma physics at the University of California, Irvine, who rang alarm bells about the system on Twitter. "What would UT Austin CS department have looked like without GRADE? We'll never know."

GRADE (which stands for GRaduate ADmissions Evaluator) was created by a UT faculty member and UT graduate student in computer science, originally to help the graduate admissions committee in the department save time. GRADE predicts how likely the admissions committee is to approve an applicant and expresses that prediction as a numerical score out of five. The system also explains what factors most impacted its decision.

The UT researchers who made GRADE trained it on a database of past admissions decisions. The system uses patterns from those decisions to calculate its scores for candidates.

For example, letters of recommendation containing the words "best," "award," "research" or "Ph.D." are predictive of admission -- and can lead to a higher score -- while letters containing the words "good," "class," "programming" or "technology" are predictive of rejection. A higher grade point average means an applicant is more likely to be accepted, as does the name of an elite college or university on the résumé. Within the system, institutions were encoded into the categories "elite," "good" and "other," based on a survey of UT computer science faculty.

Every application GRADE scored during the seven years it was in use was still reviewed by at least one human committee member, UT Austin has said, but sometimes only one. Before GRADE, faculty members made multiple review passes over the pool. The system saved the committee time, according to its developers, by allowing faculty to focus on applicants on the cusp of admission or rejection and review applicants in descending order of quality.

For what it's worth, GRADE did appear to save the committee time. In the 2012 and 2013 application seasons, its developers said in a paper about their work, it reduced the number of full reviews per candidate by 71 percent and cut the total time spent reviewing files by 74 percent. (One full review typically takes 10 to 30 minutes.) Between 2000 and 2012, applications to the computer science Ph.D. program grew from about 250 to nearly 650, while the number of faculty able to review them remained mostly constant. Since 2012, the number of applications has grown to over 1,200.

The university's use of the technology escaped attention for a number of years, until this month, when the physics department at the University of Maryland at College Park held a colloquium talk with the two creators of GRADE.

The talk gained attention on Twitter as graduate students accused GRADE's creators of further disadvantaging underrepresented groups in the computer science admissions process.

"We put letters of recommendation in to try to lift people up who have maybe not great GPAs. We put a personal statement in the graduate application process to try to give marginalized folks a chance to have their voice heard," said Musthafa, who is also a member of the Physics and Astronomy Anti-Racism Coalition. "The worst part about GRADE is that it throws that out completely."

Advocates have long been concerned about the potential for human biases to be baked into or exacerbated by machine-learning algorithms. Algorithms are trained on data. When it comes to people, what those data look like is a result of historical inequity. Preferences for one type of person over another are often the result of conscious or unconscious bias.

That hasn't stopped institutions from using machine-learning systems in hiring, policing and prison sentencing for a number of years now, often to great controversy.

"Every process is going to make some mistakes. The question is, where are those mistakes likely to be made and who is likely to suffer as a result of them?" said Manish Raghavan, a computer science Ph.D. candidate at Cornell University who has researched and written about bias in algorithms. "Likely those from underrepresented groups or people who don't have the resources to be attending elite institutions."

Though many women and people who are Black and Latinx have had successful careers in computer science, those groups are underrepresented in the field at large. In 2017, whites, Asians and nonresident aliens received 84 percent of degrees awarded for computer science in the United States.

At UT, nearly 80 percent of undergraduates in computer science in 2017 were men.

Raghavan said he was surprised that there appeared to be no effort to audit the impacts of GRADE, such as how scores differ across demographic groups.

GRADE's creators have said that the system is only programmed to replicate what the admissions committee was doing prior to 2013, not to make better decisions than humans could. The system isn't programmed to use race or gender to make its predictions, they've said. In fact, when given those features as options to help make its predictions, it chooses to give them zero weight. GRADE's creators have said this is evidence that the committee's decisions are gender and race neutral.

Detractors have countered, arguing that race and gender can be encoded in other features of the application that the system uses. Women's colleges and historically Black universities may be undervalued by the algorithm, they've said. Letters of recommendation are known to reflect gender bias, as recommenders are more likely to describe female students as "caring" rather than "assertive" or "trailblazing."

In the Maryland talk, faculty raised their own concerns. What a committee is looking for might change each year. Letters of recommendation and personal statements should be thoughtfully considered, not turned into a "bag of words," they said.

"I'm kind of shocked you did this experiment on your students," Steve Rolston, chair of the physics department at Maryland, said during the talk. "You seem to have built a model that builds in whatever bias your committee had in 2013, and you've been using it ever since."

In an interview, Rolston said graduate admissions can certainly be a challenge. His department receives over 800 graduate applications per year, which takes a good deal of time for faculty to evaluate. But, he said, his department would never use a tool like this.

"If I ask you to do a classifier of images and you're looking for dogs, I can check afterwards that, yes, it did correctly identify dogs," he said. "But when I'm asking for decisions about people, whether it's graduate admissions, or hiring or prison sentencing, there's no obvious correct answer. You train it, but you don't know what the result is really telling you."

Rolston said having at least one faculty member review each application was not a convincing safeguard.

"If I give you a file and say, 'Well, the algorithm said this person shouldn't be accepted,' that will inevitably bias the way you look at it," he said.

UT Austin has said GRADE was used to organize admissions decisions, rather than make them.

"It was never used to make decisions to admit or reject prospective students, as at least one faculty member directly evaluates applicants at each stage of the review process," a spokesperson for the Graduate School said via email.

Despite the criticism around diversity and equity, UT Austin has said GRADE is being phased out because it is too difficult to maintain.

"Changes in the data and software environment made the system increasingly difficult to maintain, and its use was discontinued," the spokesperson said via email. "The Graduate School works with graduate programs and faculty members across campus to promote holistic application review and reduce bias in admissions decisions."

For Musthafa, the fact that GRADE may be gone for good does not impact the existing inequity in graduate admissions.

"The entire system is steeped in racism, sexism and ableism," they said. "How many years of POC computer science students got denied [because of this]?"

Addressing that inequity -- as well as the competitiveness that led to the creation of GRADE -- may mean expanding committees, paying people for their time and giving Black and Latinx graduate students a voice in those decisions, they said. But automation cannot be part of that decision making.

"If we automate this to any extent, it's just going to lock people out of academia," Musthafa said. "The racism of today is being immortalized in the algorithms of tomorrow."


Information gathering: A WebEx talk on machine learning – Santa Fe New Mexican

We're long past the point of questioning whether machines can learn. The question now is how they learn. Machine learning, a subset of artificial intelligence, is the study of computer algorithms that improve automatically through experience; that is, a machine can learn independent of explicit human programming. Los Alamos National Laboratory staff scientist Nga Thi Thuy Nguyen-Fotiadis is an expert on machine learning, and at 5:30 p.m. on Monday, Dec. 14, she hosts the virtual presentation "Deep focus: Techniques for image recognition in machine learning," part of the Bradbury Science Museum's (1350 Central Ave., Los Alamos, 505-667-4444, lanl.gov/museum) Science on Tap lecture series. Nguyen-Fotiadis is a member of LANL's Information Sciences Group, whose Computer, Computational, and Statistical Sciences division studies fields central to scientific discovery and innovation. Learn about the differences between LANL's Trinity supercomputer and the human brain, and how algorithms determine recommendations for your nightly viewing pleasure on Netflix and the like. The talk is a free WebEx virtual event. Follow the link from the Bradbury's event page at lanl.gov/museum/events/calendar/2020/12/calendar-sot-nguyen-fotaidis.php to register.


LeanTaaS Raises $130 Million to Strengthen Its Machine Learning Software Platform to Continue Helping Hospitals Achieve Operational Excellence -…

SANTA CLARA, Calif.--(BUSINESS WIRE)--LeanTaaS, Inc., a Silicon Valley software innovator that increases patient access and transforms operational performance for healthcare providers, announced a $130 million Series D funding round led by Insight Partners with participation from Goldman Sachs. The funds will be used to build out the existing suite of products (iQueue for Operating Rooms, iQueue for Infusion Centers and iQueue for Inpatient Beds), scale the engineering, product and go-to-market teams, and expand the iQueue platform to include new products.

"LeanTaaS is uniquely positioned to help hospitals and health systems across the country face the mounting operational and financial pressures exacerbated by the coronavirus. This funding will allow us to continue to grow and expand our impact while helping healthcare organizations deliver better care at a lower cost," said Mohan Giridharadas, founder and CEO of LeanTaaS. "Our company momentum over the past several years -- including greater than 50% revenue growth in 2020 and negative churn despite a difficult macro environment -- reflects the increasing demand for scalable predictive analytics solutions that optimize how health systems increase operational utilization and efficiency. It also highlights how we've been able to develop and maintain deep partnerships with 100+ health systems and 300+ hospitals in order to keep them resilient and agile in the face of uncertain demand and supply conditions."

With this investment, LeanTaaS has raised more than $250 million in aggregate, including more than $150 million from Insight Partners. As part of the transaction, Insight Partners' Jeff Horing and Jon Rosenbaum and Goldman Sachs' Antoine Munfa will join LeanTaaS' Board of Directors.

"Healthcare operations in the U.S. are increasingly complex and under immense pressure to innovate; this has only been exacerbated by the prioritization of unique demands from the current pandemic," said Jeff Horing, co-founder and Managing Director at Insight Partners. "Even under these unprecedented circumstances, LeanTaaS has demonstrated the effectiveness of its ML-driven platform in optimizing how hospitals and health systems manage expensive, scarce resources like infusion center chairs, operating rooms, and inpatient beds. After leading the company's Series B and C rounds, we have formed a deep partnership with Mohan and team. We look forward to continuing to help LeanTaaS scale its market presence and customer impact."

Although health systems across the country have invested in cutting-edge medical equipment and infrastructure, they cannot maximize the use of such assets and increase operational efficiencies to improve their bottom lines with human-based scheduling or unsophisticated tools. LeanTaaS develops specialized software that increases patient access to medical care by optimizing how health systems schedule and allocate the use of expensive, constrained resources. By using LeanTaaS' product solutions, healthcare systems can harness sophisticated AI/ML-driven software to improve operational efficiencies, increase access, and reduce costs.

"We continue to be impressed by the LeanTaaS team. As hospitals and health systems begin to look toward a post-COVID-19 world, the agility and resilience LeanTaaS' solutions provide will be key to restoring and growing their operations," said Antoine Munfa, Managing Director of Goldman Sachs Growth.

LeanTaaS' solutions have now been deployed in more than 300 hospitals across the U.S., including five of the 10 largest health networks and 12 of the top 20 hospitals in the U.S. according to U.S. News & World Report. These hospitals use the iQueue platform to optimize capacity utilization in infusion centers, operating rooms, and inpatient beds. iQueue for Infusion Centers is used by 7,500+ chairs across 300+ infusion centers, including 70 percent of the National Comprehensive Cancer Network and more than 50 percent of National Cancer Institute hospitals. iQueue for Operating Rooms is used by more than 1,750 ORs across 34 health systems to perform more surgical cases during business hours, increase competitiveness in the marketplace, and improve the patient experience.

"I am excited about LeanTaaS' continued growth and market validation. As healthcare moves into the digital age, iQueue overcomes the inherent deficiencies in capacity planning and optimization found in EHRs. We are very excited to partner with LeanTaaS and implement iQueue for Operating Rooms," said Dr. Rob Ferguson, System Medical Director, Surgical Operations, Intermountain Healthcare.

Concurrent with the funding, LeanTaaS announced that Niloy Sanyal, former CMO at Omnicell and GE Digital, will join as its new Chief Marketing Officer. Sanjeev Agrawal has also been named LeanTaaS' Chief Operating Officer in addition to his current role as President. "We are excited to welcome Niloy to LeanTaaS. His breadth and depth of experience will help us accelerate our growth as the industry evolves to a more data-driven way of making decisions," said Agrawal.

About LeanTaaS

LeanTaaS provides software solutions that combine lean principles, predictive analytics, and machine learning to transform hospital and infusion center operations. The company's software is used by over 100 health systems across the nation, all of which rely on the iQueue cloud-based solutions to increase patient access, decrease wait times, reduce healthcare delivery costs, and improve revenue. LeanTaaS is based in Santa Clara, California, and Charlotte, North Carolina. For more information about LeanTaaS, please visit https://leantaas.com/, and connect on Twitter, Facebook and LinkedIn.

About Insight Partners

Insight Partners is a leading global venture capital and private equity firm investing in high-growth technology and software ScaleUp companies that are driving transformative change in their industries. Founded in 1995, Insight Partners has invested in more than 400 companies worldwide and has raised, through a series of funds, more than $30 billion in capital commitments. Insight's mission is to find, fund, and work successfully with visionary executives, providing them with practical, hands-on software expertise to foster long-term success. Across its people and its portfolio, Insight encourages a culture around a belief that ScaleUp companies and growth create opportunity for all. For more information on Insight and all its investments, visit insightpartners.com or follow us on Twitter @insightpartners.

About Goldman Sachs Growth

Founded in 1869, The Goldman Sachs Group, Inc. is a leading global investment banking, securities and investment management firm. Goldman Sachs Merchant Banking Division (MBD) is the primary center for the firm's long-term principal investing activity. As part of MBD, Goldman Sachs Growth is the dedicated growth equity team within Goldman Sachs, with over 25 years of investing history, over $8 billion of assets under management, and 9 offices globally.

LeanTaaS and iQueue are trademarks of LeanTaaS. All other brand names and product names are trademarks or registered trademarks of their respective companies.
