Archive for the ‘Machine Learning’ Category

Discovery of aggressive cancer cell types by Vanderbilt researchers made possible with machine learning techniques – Vanderbilt University News

By applying unsupervised, automated machine learning techniques to the analysis of millions of cancer cells, Rebecca Ihrie and Jonathan Irish, both associate professors of cell and developmental biology, have identified new cancer cell types in brain tumors. Machine learning refers to computer algorithms that can identify patterns within enormous quantities of data and get smarter with more experience. This finding holds the promise of enabling researchers to better understand and target these cell types for research and therapeutics for glioblastoma, an aggressive brain tumor with high mortality, and demonstrates the broader applicability of machine learning to cancer research.

With their collaborators, Ihrie and Irish developed Risk Assessment Population IDentification (RAPID), an open-source machine learning algorithm that revealed coordinated patterns of protein expression and modification associated with survival outcomes.

The article, "Unsupervised machine learning reveals risk stratifying glioblastoma tumor cells," was published online in the journal eLife on June 23. RAPID code and examples are available on the Cytolab GitHub page.

For the past decade, the research community has been working to leverage machine learning's ability to absorb and analyze more data for cancer cell research than the human mind alone can process. Without any human oversight, RAPID combed through 2 million tumor cells, with at least 4,710 glioblastoma cells from each patient, from 28 glioblastomas, "flagging the most unusual cells and patterns for us to look into," said Ihrie. "We're able to find the needles in the haystack without searching the entire haystack. This technology lets us devote our attention to better understanding the most dangerous cancer cells and to get closer to ultimately curing brain cancer."

Fed into RAPID were data on cellular proteins that govern the identity and function of neural stem cells and other brain cells. The data type used is called single-cell mass cytometry, a measurement technique typically applied to blood cancer. Once RAPID's statistical analysis was complete and the needles in the haystack were found, only those cells were studied. "One of the most exciting results of our research is that unsupervised machine learning found the worst offender cells without needing the researchers to give it clinical or biological knowledge as context," said Irish, also scientific director of Vanderbilt's Cancer & Immunology Core. "The findings of this study currently represent the biggest biology advance from my lab at Vanderbilt."
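RAPID's actual method is in the open-source code linked above; purely as an illustration of the kind of unsupervised workflow described here — cluster single-cell protein measurements without labels or clinical context, then flag small, unusual populations for follow-up — a minimal sketch with scikit-learn, in which the data, cluster count, and rarity threshold are all invented for demonstration:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Simulated single-cell data: rows are cells, columns are protein
# measurements (stand-ins for mass cytometry markers).
common = rng.normal(0.0, 1.0, size=(1990, 5))
rare = rng.normal(4.0, 0.5, size=(10, 5))  # a small outlier population
cells = np.vstack([common, rare])

# Unsupervised clustering: no labels or biological context supplied.
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(cells)

# Flag the smallest clusters as "needles in the haystack" candidates
# worth dedicated study, instead of searching the whole haystack.
sizes = np.bincount(labels, minlength=8)
rare_clusters = [c for c in range(8) if sizes[c] < 0.02 * len(cells)]
flagged = int(np.isin(labels, rare_clusters).sum())
print(f"flagged {flagged} cells in {len(rare_clusters)} small cluster(s)")
```

The point of the sketch is only the shape of the pipeline: cluster first, then triage attention toward the rare populations.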

The researchers' machine learning analysis enabled their team to study multiple characteristics of the proteins in brain tumor cells in relation to other characteristics, delivering new and unexpected patterns. "The collaboration between our two labs, the support that we received for this high-risk work from Vanderbilt and the Vanderbilt-Ingram Cancer Center (VICC), and the fruitful collaboration with neurosurgeons and pathologists who provided a unique opportunity to study human cells right out of the brain allowed us to achieve this milestone," said Ihrie and Irish in a joint statement. The co-first authors of the paper are former Vanderbilt graduate students Nalin Leelatian, a current neuropathology resident at Yale (Irish lab), and Justine Sinnaeve (Ihrie lab). Through her research and work on this topic, Leelatian earned an American Brain Tumor Association (ABTA) Scholar-in-Training Award from the American Association for Cancer Research (AACR) in April 2017.

The applicability of this research extends beyond cancer research to data analysis techniques for broader human disease research and laboratory modeling of diseases using multiple samples. The paper also demonstrates that these complex patterns, once found, can be used to develop simpler classifications that can be applied to hundreds of samples. Researchers studying glioblastoma brain tumors will be able to refer to these findings as they test to see if their own samples are comparable to the cell and protein expression patterns discovered by Ihrie, Irish, and collaborators.

This work was supported by the Michael David Greene Brain Cancer Fund, a discovery grant for brain tumor research established in 2004. The grant was recently renewed for another five years to support Ihrie and Irish's continued research on glioblastoma. Additional support was provided by the National Institutes of Health, including the National Cancer Institute and National Institute of Neurological Disorders and Stroke, VICC and VICC Ambassadors, the Vanderbilt International Scholars program, a Vanderbilt University Discovery Grant, an Alpha Omega Alpha Postgraduate Award, a Society of Neurological Surgeons/RUNN Award, a Burroughs Wellcome Fund Physician-Scientist Institutional Award, the Vanderbilt Institute for Clinical and Translational Research, and the Southeastern Brain Tumor Foundation.


How Work Will Change Following the Pandemic – Stanford University News

Economists use the term "hysteresis" to describe the phenomenon that, when conditions in an economy change, the effects of that change often remain even after the conditions return to normal.

COVID and its impact on the workforce may provide a good example of hysteresis, said HAI Distinguished Fellow and MIT professor Erik Brynjolfsson, who will join the Stanford faculty in July 2020 as the director of the new Digital Economy Lab.

To keep workers safe and continue functioning, companies have ramped up remote work and are aggressively automating some operations and exploring machine learning.

"Some of these changes are going to be permanent," he said during Stanford HAI's recent online conference COVID + AI: The Road Ahead. "The question is, what parts of the economy are going to be most affected by the adoption of these technologies, and which parts will be less affected?"

Brynjolfsson worked with Carnegie Mellon professor Tom Mitchell, MIT postdoc Daniel Rock, and others on a series of papers identifying the tasks most suitable for machine learning (ML). They applied this rubric to 950 occupations and 18,000 specific occupation tasks.
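The papers' rubric itself is far more detailed, but the aggregation step — score each task for its suitability for machine learning, then roll the scores up by occupation — can be sketched as follows. The tasks and scores below are invented for illustration, not taken from the studies:

```python
from statistics import mean

# Hypothetical (task, ML-suitability) scores on a 0-1 scale; the real
# rubric covers roughly 18,000 tasks across 950 occupations.
tasks = {
    "teller": [("verify customer identity", 0.8),
               ("process routine deposits", 0.9),
               ("resolve unusual disputes", 0.3)],
    "pilot": [("monitor instruments", 0.7),
              ("handle rare emergencies", 0.2),
              ("communicate with crew", 0.3)],
}

# An occupation's exposure to ML is the average suitability of its tasks:
# no occupation scores zero, so none is completely immune.
exposure = {job: mean(s for _, s in ts) for job, ts in tasks.items()}

for job, score in sorted(exposure.items(), key=lambda kv: -kv[1]):
    print(f"{job}: {score:.2f}")
```

The ranking, not the invented numbers, is the point: occupations differ in what fraction of their tasks is ML-suitable, which is what the rubric measures.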

More tasks in lower-wage jobs could be replaced by machine learning applications, they found. For example, ML today can recognize a cucumber or a banana and handle some cashier tasks. But some high-paid jobs can also be affected, such as airline pilots. No occupation is completely immune, Brynjolfsson said.

Certain industries are also more impacted than others, he noted. Manufacturing, retailing, transportation, and accommodation and food services have many tasks suitable for machine learning.

Additionally, different areas of the country will be affected unevenly. "The kinds of work that people do in Wyoming are very different from what they do in Manhattan or Miami," he said.

The researchers' data also allowed them to examine ML's impact on individual occupations. Roles like tellers, executive assistants, and personal bankers have a large percentage of tasks that are suitable for ML.

"Our tool gives them a way to have a path for what to do next," Brynjolfsson said. Personal bankers could develop more skills not subject to machine learning, like leadership, product development, or customer relations, and move away from the skills more suitable to ML, like credit authorization. Another option: find new roles with similar skill sets. A personal banker might transition to business analyst or mortgage loan officer, roles ML is less likely to disrupt.

"With a little bit of training, they're in a position to be much less vulnerable to the machine learning revolution," he noted.


Best Report Machine Learning For Managing Diabetes Market (COVID 19 Updated) Climbs on Positive Outlook of Excellent Growth by 2027: Allscripts…

The report titled "Machine Learning For Managing Diabetes Market" offers an in-depth synopsis of the competitive landscape of the market globally, helping establishments understand the primary threats and prospects that vendors in the market face. It also incorporates thorough business profiles of some of the prime vendors in the market. The report includes vast data relating to recent discoveries and technological expansions perceived in the market, together with an examination of the impact of these developments on the market's future growth.

This is the latest report, covering the current COVID-19 impact on the market. The coronavirus (COVID-19) pandemic has affected every aspect of life globally. The Machine Learning For Managing Diabetes Market research report covers growth rates and market value based on market dynamics and growth factors. The complete picture is based on the latest innovations in the industry, opportunities, and trends. In addition to a SWOT analysis of key suppliers, the report contains a comprehensive market analysis and a landscape of major players.

Ask for Sample Copy of This Report: https://www.healthcareintelligencemarkets.com/request_sample.php?id=29107

Top Key Players Included in This Report:

Allscripts Healthcare Solutions, Inc., Orion Health, Medecision, Inc., Emmi Solutions LLC, McKesson Corporation, Cerner Corporation and GetWellNetwork, Inc.


The report's conclusion centers on the complete scope of the global Machine Learning For Managing Diabetes Market with respect to the availability of funds from investors, and a descriptive passage outlines the feasibility of new projects that might succeed in the market in the upcoming years.

Get Discount on This Report: https://www.healthcareintelligencemarkets.com/ask_for_discount.php?id=29107


Table of Contents:

Chapter 1 Industry Overview of Machine Learning For Managing Diabetes Market

Chapter 2 Manufacturing Cost Structure Analysis

Chapter 3 Technical Data and Manufacturing Plants

Chapter 4 Overall Market Overview

Chapter 5 Regional Market Analysis

Chapter 6 Major Manufacturers Analysis

Chapter 7 Development Trend Analysis

Chapter 8 Marketing Type Analysis

Chapter 9 Conclusion of the Global Machine Learning For Managing Diabetes Market Professional Survey Report 2020

Chapter 10 Continue.

For Any Customization, Ask Our Experts: https://www.healthcareintelligencemarkets.com/enquiry_before_buying.php?id=29107

*If you have any special requirements, please let us know and we will offer you the report as per your requirements.

About Us:

HealthCare Intelligence Markets Reports provides market intelligence and consulting services to a global clientele spread over 145 countries. Being a B2B firm, we help businesses meet the challenges of an ever-evolving market with unbridled confidence. We craft customized and syndicated market research reports that help market players build game-changing strategies. We also cover upcoming trends and future market prospects in our reports pertaining to the drug development, clinical, and healthcare industries. Our intelligence enables our clients to make decisions that prove to be game-changers for them. We constantly strive to serve our clients better by directly allowing them sessions with our research analysts, so that the report is on par with their expectations.

Contact Us:

Marvella Lit

Address: 90, State Office Center,

90, State Street Suite 700,

Albany, NY 12207

Email: [emailprotected]

Web: www.healthcareintelligencemarkets.com

Phone: +44-753-712-1342


The startup making deep learning possible without specialized hardware – MIT Technology Review

GPUs became the hardware of choice for deep learning largely by coincidence. The chips were initially designed to quickly render graphics in applications such as video games. Unlike CPUs, which have four to eight complex cores for doing a variety of computation, GPUs have hundreds of simple cores that can perform only specific operations, but the cores can tackle their operations at the same time rather than one after another, shrinking the time it takes to complete an intensive computation.

It didn't take long for the AI research community to realize that this massive parallelization also makes GPUs great for deep learning. Like graphics rendering, deep learning involves simple mathematical calculations performed hundreds of thousands of times. In 2011, in a collaboration with chipmaker Nvidia, Google found that a computer vision model it had trained on 2,000 CPUs to distinguish cats from people could achieve the same performance when trained on only 12 GPUs. GPUs became the de facto chip for model training and inferencing, the computational process that happens when a trained model is used for the tasks it was trained for.
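The kind of workload described above — the same simple arithmetic applied independently across huge arrays — is easy to see in a small sketch. Here NumPy's bulk array operations stand in for hardware parallelism (the ReLU activation used as the example is a common deep-learning operation, chosen for illustration):

```python
import numpy as np

# A scalar loop: one element after another, the CPU-style serial view.
def relu_loop(x):
    out = np.empty_like(x)
    for i in range(x.size):
        out[i] = x[i] if x[i] > 0 else 0.0
    return out

# The same operation as one bulk array op: the form that maps onto a
# GPU's hundreds of simple cores all working at the same time.
def relu_vectorized(x):
    return np.maximum(x, 0.0)

x = np.random.randn(100_000)
assert np.allclose(relu_loop(x), relu_vectorized(x))
```

Both functions compute the same result; the difference is that the second expresses the computation as one data-parallel operation rather than a sequence of dependent steps.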

But GPUs also aren't perfect for deep learning. For one thing, they cannot function as a standalone chip. Because they are limited in the types of operations they can perform, they must be attached to CPUs for handling everything else. GPUs also have a limited amount of cache memory, the data storage area nearest a chip's processors. This means the bulk of the data is stored off-chip and must be retrieved when it is time for processing. The back-and-forth data flow ends up being a bottleneck for computation, capping the speed at which GPUs can run deep-learning algorithms.


In recent years, dozens of companies have cropped up to design AI chips that circumvent these problems. The trouble is, the more specialized the hardware, the more expensive it becomes.

So Neural Magic intends to buck this trend. Instead of tinkering with the hardware, the company modified the software. It redesigned deep-learning algorithms to run more efficiently on a CPU by utilizing the chip's large available memory and complex cores. While the approach loses the speed achieved through a GPU's parallelization, it reportedly gains back about the same amount of time by eliminating the need to ferry data on and off the chip. The algorithms can run on CPUs at GPU speeds, the company says, but at a fraction of the cost. "It sounds like what they have done is figured out a way to take advantage of the memory of the CPU in a way that people haven't before," Thompson says.
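The article doesn't spell out the implementation, but one well-known ingredient of efficient CPU inference is exploiting sparsity: pruning a layer's small weights to zero means less data to move and fewer multiplications. A minimal sketch with SciPy — the matrix size and pruning threshold are illustrative assumptions, not Neural Magic's actual technique:

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)

# A dense layer's weight matrix, then pruned: weights near zero are
# dropped, which is what makes a sparse representation pay off on a CPU.
dense_w = rng.standard_normal((512, 512))
pruned_w = np.where(np.abs(dense_w) > 1.0, dense_w, 0.0)

sparse_w = sparse.csr_matrix(pruned_w)  # stores only the nonzero entries
x = rng.standard_normal(512)

# The sparse matrix-vector product touches only stored entries, cutting
# both memory traffic and arithmetic relative to the dense product.
y_sparse = sparse_w @ x
y_dense = pruned_w @ x
assert np.allclose(y_sparse, y_dense)

print(f"fraction of weights kept: {sparse_w.nnz / dense_w.size:.0%}")
```

The sketch shows only the arithmetic-and-memory trade; production engines layer cache-aware scheduling and other optimizations on top of this idea.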

Neural Magic believes there may be a few reasons why no one took this approach previously. First, it's counterintuitive. The idea that deep learning needs specialized hardware is so entrenched that other approaches may easily be overlooked. Second, applying AI in industry is still relatively new, and companies are just beginning to look for easier ways to deploy deep-learning algorithms. But whether the demand is deep enough for Neural Magic to take off is still unclear. The firm has been beta-testing its product with around 10 companies, only a sliver of the broader AI industry.


Neural Magic currently offers its technique for inferencing tasks in computer vision. Clients must still train their models on specialized hardware but can then use Neural Magic's software to convert the trained model into a CPU-compatible format. One client, a big manufacturer of microscopy equipment, is now trialing this approach for adding on-device AI capabilities to its microscopes, says Neural Magic cofounder Nir Shavit. Because the microscopes already come with a CPU, they won't need any additional hardware. By contrast, using a GPU-based deep-learning model would require the equipment to be bulkier and more power hungry.

Another client wants to use Neural Magic to process security camera footage. That would enable it to monitor the traffic in and out of a building using computers already available on site; otherwise it might have to send the footage to the cloud, which could introduce privacy issues, or acquire special hardware for every building it monitors.

Shavit says inferencing is also only the beginning. Neural Magic plans to expand its offerings in the future to help companies train their AI models on CPUs as well. "We believe 10 to 20 years from now, CPUs will be the actual fabric for running machine-learning algorithms," he says.

Thompson isn't so sure. "The economics have really changed around chip production, and that is going to lead to a lot more specialization," he says. Additionally, while Neural Magic's technique gets more performance out of existing hardware, fundamental hardware advancements will still be the only way to continue driving computing forward. "This sounds like a really good way to improve performance in neural networks," he says. "But we want to improve not just neural networks but also computing overall."


If AI is going to help us in a crisis, we need a new kind of ethics – MIT Technology Review

What opportunities have we missed by not having these procedures in place?

It's easy to overhype what's possible, and AI was probably never going to play a huge role in this crisis. Machine-learning systems are not mature enough.

But there are a handful of cases in which AI is being tested for medical diagnosis or for resource allocation across hospitals. We might have been able to use those sorts of systems more widely, reducing some of the load on health care, had they been designed from the start with ethics in mind.

With resource allocation in particular, you are deciding which patients are highest priority. You need an ethical framework built in before you use AI to help with those kinds of decisions.

So is ethics for urgency simply a call to make existing AI ethics better?

That's part of it. The fact that we don't have robust, practical processes for AI ethics makes things more difficult in a crisis scenario. But in times like this you also have greater need for transparency. People talk a lot about the lack of transparency with machine-learning systems as "black boxes." But there is another kind of transparency, concerning how the systems are used.

This is especially important in a crisis, when governments and organizations are making urgent decisions that involve trade-offs. Whose health do you prioritize? How do you save lives without destroying the economy? If an AI is being used in public decision-making, transparency is more important than ever.

What needs to change?

We need to think about ethics differently. It shouldn't be something that happens on the side or afterwards, something that slows you down. It should simply be part of how we build these systems in the first place: ethics by design.

I sometimes feel "ethics" is the wrong word. What we're saying is that machine-learning researchers and engineers need to be trained to think through the implications of what they're building, whether they're doing fundamental research like designing a new reinforcement-learning algorithm or something more practical like developing a health-care application. If their work finds its way into real-world products and services, what might that look like? What kinds of issues might it raise?

Some of this has started already. We are working with some early-career AI researchers, talking to them about how to bring this way of thinking to their work. It's a bit of an experiment, to see what happens. But even NeurIPS [a leading AI conference] now asks researchers to include a statement at the end of their papers outlining potential societal impacts of their work.

You've said that we need people with technical expertise at all levels of AI design and use. Why is that?

I'm not saying that technical expertise is the be-all and end-all of ethics, but it's a perspective that needs to be represented. And I don't want to sound like I'm saying all the responsibility is on researchers, because a lot of the important decisions about how AI gets used are made further up the chain, by industry or by governments.

But I worry that the people who are making those decisions don't always fully understand the ways it might go wrong. So you need to involve people with technical expertise. Our intuitions about what AI can and can't do are not very reliable.

What you need at all levels of AI development are people who really understand the details of machine learning working with people who really understand ethics. Interdisciplinary collaboration is hard, however. People with different areas of expertise often talk about things in different ways. What a machine-learning researcher means by "privacy" may be very different from what a lawyer means by "privacy," and you can end up with people talking past each other. That's why it's important for these different groups to get used to working together.

You're pushing for a pretty big institutional and cultural overhaul. What makes you think people will want to do this rather than set up ethics boards or oversight committees, which always make me sigh a bit because they tend to be toothless?

Yeah, I also sigh. But I think this crisis is forcing people to see the importance of practical solutions. Maybe instead of saying, "Oh, let's have this oversight board and that oversight board," people will be saying, "We need to get this done, and we need to get it done properly."
