Archive for the ‘Artificial Intelligence’ Category

Artificial Intelligence in Cancer: How Is It Used in Practice? – Cancer Therapy Advisor

Artificial intelligence (AI) is a branch of computer science that develops entities, such as software programs, that can intelligently perform tasks or make decisions.1 The development and use of AI in health care is not new; the first ideas that created the foundation of AI were documented in 1956, and automated clinical tools that were developed between the 1970s and 1990s are now in routine use. These tools, such as the automated interpretation of electrocardiograms, may seem simple, but are considered AI.

Today, AI is being harnessed to help with big problems in medicine, such as processing and interpreting large amounts of data in research and in clinical settings, including reading imaging or results from broad genetic-testing panels.1 In oncology, AI is not yet being used broadly, but its use is being studied in several areas.

Screening and Diagnosis

There are several AI platforms approved by the US Food and Drug Administration (FDA) to assist in the evaluation of medical imaging, including for identifying suspicious lesions that may be cancer.2 Some platforms help to visualize and manipulate images from magnetic resonance imaging (MRI) or computed tomography (CT) and flag suspicious areas. For example, there are several AI platforms for evaluating mammography images that, in some cases, help to diagnose breast abnormalities. There is also an AI platform that helps to analyze lung nodules in individuals who are being screened for lung cancer.1,3

AI is also being studied in other areas of cancer screening and diagnosis. In dermatology, skin lesions are biopsied based on a dermatologist's or primary care provider's assessment of the appearance of the lesion.1 Studies are evaluating the use of AI to either supplement or replace the work of the clinician, with the ultimate goal of making the overall process more efficient.

Big Data

As technology has improved, we now have the ability to create a vast amount of data. This highlights a challenge: individuals have limited capabilities to assess large chunks of data and identify meaningful patterns. AI is being developed and used to help mine these data for important findings, process and condense the information the data represent, and look for meaningful patterns.
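
To make the pattern-mining idea concrete, here is a minimal sketch, assuming synthetic stand-in data and scikit-learn (neither comes from the article): unsupervised clustering groups similar samples together so that a researcher can then inspect each group for shared, meaningful traits.

```python
# Illustrative sketch: unsupervised pattern mining on a large data matrix.
# The data here are synthetic stand-ins for, e.g., gene-expression profiles.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 1,000 hypothetical samples x 50 measured features
data = rng.normal(size=(1000, 50))

# Group samples into clusters; similar samples receive the same label.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(data)

# A researcher would then inspect each cluster for meaningful patterns,
# e.g., shared mutations or outcomes among its members.
for cluster_id in range(5):
    members = np.where(kmeans.labels_ == cluster_id)[0]
    print(f"cluster {cluster_id}: {len(members)} samples")
```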

Such tools would be useful in the research setting, as scientists look for novel targets for new anticancer therapies or to further their understanding of underlying disease processes. AI would also be useful in the clinical setting, especially now that electronic health records are being used and real-world data are being generated from patients.

Read this article:
Artificial Intelligence in Cancer: How Is It Used in Practice? - Cancer Therapy Advisor

Course introduces students to the promise, challenges of artificial intelligence in health – HSPH News

May 15, 2020

In the race to stem COVID-19, researchers around the world are testing the capacity of artificial intelligence (AI) to assist in tasks such as diagnosis and drug discovery. So far, AI's biggest success during the pandemic has been in speeding up the process of identifying existing drugs that can be repurposed to help suffering patients, said Deborah DiSanzo, who recently lectured on COVID-19 in the new course she's leading at Harvard T.H. Chan School of Public Health, Artificial Intelligence in Health.

DiSanzo cited in her lecture an AI knowledge graph developed by researchers at the UK startup BenevolentAI and Imperial College London, which found that baricitinib, a rheumatoid arthritis drug, had the potential to inhibit the virus that causes COVID-19. It and other drugs identified in similar studies have now gone into clinical trials.
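
BenevolentAI's actual system is proprietary, but the core idea of a knowledge graph, chaining known relationships between drugs, protein targets, and disease processes, can be sketched in a few lines. The triples and relation names below are invented for illustration, and the baricitinib chain only loosely mirrors the published reasoning:

```python
# Toy knowledge graph as (subject, relation, object) triples.
# Most entries are hypothetical filler; the baricitinib chain is a
# simplified echo of the BenevolentAI finding reported above.
TRIPLES = [
    ("baricitinib", "inhibits", "AAK1"),
    ("AAK1", "regulates", "endocytosis"),
    ("endocytosis", "required_for", "SARS-CoV-2 cell entry"),
    ("drug_X", "inhibits", "kinase_Y"),        # hypothetical edge
    ("kinase_Y", "regulates", "inflammation"), # hypothetical edge
]

def reaches(start: str, goal: str) -> bool:
    """Breadth-first search: is there any relation path from start to goal?"""
    frontier, seen = [start], {start}
    while frontier:
        node = frontier.pop(0)
        if node == goal:
            return True
        for s, _, o in TRIPLES:
            if s == node and o not in seen:
                seen.add(o)
                frontier.append(o)
    return False

# Screen known drugs for a mechanistic path to the target process.
drugs = {s for s, r, _ in TRIPLES if r == "inhibits"}
candidates = [d for d in drugs if reaches(d, "SARS-CoV-2 cell entry")]
print(candidates)  # ['baricitinib']
```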

"Two years ago, finding either a new or repurposed drug target would take six to 18 months," said DiSanzo, a former health care technology executive. "These researchers did this in weeks."

AI has so far been less successful at diagnosing COVID-19, however, she said, with the limited lung imagery currently available from COVID-19 patients making it difficult for neural networks to learn the difference between the effects of the virus and standard pneumonia.

Enhance, not replace

For DiSanzo's students, these mixed results provided a timely example of one of her course's main takeaways: AI can enhance health care delivery and research, but it's not a replacement for the knowledge and skill of providers and scientists.

"I'm really excited about the technology and potential application of AI," said Nimerta Sandhu, MPH '20, an MD candidate at Drexel University College of Medicine. "This course provided insights on technology solutions that offer added value and others that have room for improvement. One of the biggest challenges is going to be ensuring that, as we incorporate more AI in our work, it doesn't detract from the empathy essential in the patient-provider relationship."

"I want students to have a realistic view of what artificial intelligence can bring to public health," DiSanzo said. "People usually have either a very positive view, that it's magic and can solve all the world's problems, or they have a very negative view, that it's biased and doesn't give accurate results." She said that she wants students to leave her course knowing the right questions to ask, because it's likely to be a part of their jobs, whether they are in practice or policy.

Business background

Prior to joining Harvard Chan School, DiSanzo's roles included CEO of Philips Healthcare and general manager of IBM Watson Health, the IBM business unit founded to advance artificial intelligence in health. Last year, as a Harvard Advanced Leadership Initiative Fellow, she was encouraged by the program's faculty chair Meredith Rosenthal, C. Boyden Gray Professor of Health Economics and Policy, to develop a course for MPH students.

DiSanzo hadn't planned to cover COVID-19 as she worked on her syllabus in January, but as the full extent of the pandemic emerged, she added it to her list of lecture topics, which also included drug discovery, medical imaging, and patient monitoring.

While the spring semester's move to online learning required the first-time instructor to pivot on the fly, DiSanzo has been delighted with the results so far, she said. Her 24 students, who include physicians, a veterinarian, and a psychologist, have been very engaged, participating actively on discussion boards and in chats with guests including executives from Google and pharmaceutical companies.

DiSanzo hesitates to make predictions about the future of AI in health, noting the field's history of overly optimistic projections. But things are different today, she said. In recent years, computing power, available data, and neural network capacity have advanced by leaps and bounds. "It's likely that in 10 years, maybe even five, every health care or public health decision that we make, or care that we give, or diagnosis that we make, will be made with some help from artificial intelligence," DiSanzo said. And with the COVID-19 pandemic pushing the field forward at even faster rates, she said, the next advancements may be just months away.

Amy Roeder

Illustration: Alina_Bukhtii/Shutterstock

More here:
Course introduces students to the promise, challenges of artificial intelligence in health - HSPH News

AI, machine learning, and blockchain are key for healthcare innovation – Health Europa

A special, peer-reviewed edition of OMICS: A Journal of Integrative Biology has highlighted the importance of key digital technologies, including Artificial Intelligence (AI), machine learning, and blockchain, for innovation in healthcare in response to the challenges posed by COVID-19.

Vural Özdemir, MD, PhD, Editor-in-Chief of OMICS, said: "COVID-19 is undoubtedly among the ecological determinants of planetary health. Digital health is a veritable opportunity for integrative biology and systems medicine to broaden its scope from human biology to ecological determinants of health. This is very important."

Articles in the special issue include an interview on "Responsible Innovation and Future Science in Australia" by Justine Lacey, Commonwealth Scientific and Industrial Research Organisation (CSIRO), and Erik Fisher, Arizona State University, Tempe; "Blockchain for Digital Health: Prospects and Challenges"; and "Integrating Artificial and Human Intelligence: A Partnership for Responsible Innovation in Biomedical Engineering and Medicine."

"Blockchain for Digital Health: Prospects and Challenges" explores the challenges that come with the use of blockchain technology.

The article states: "Although still faced with challenges, blockchain technology has an enormous potential to catalyse both technological and social innovation, turning the promise of digital health into a reality. By reshaping both the technological and social environment, the rise of blockchain in digital health can help reduce the disparity between the enormous technical progress and investments versus our currently inadequate understanding of the social dimensions of emerging technologies through commensurate investments in the latter knowledge domain."

A recent report by Market Study Report, "Blockchain Technology in Healthcare Market", notes that blockchain technology in the healthcare market is anticipated to cross $1,636.7m (€1,513.46m) by the year 2025.

Privacy is a major concern when it comes to storing and sharing health data, and with current healthcare data storage systems lacking top-end security, blockchain can provide a solution to vulnerabilities such as hacking and data theft.

Blockchain technology in healthcare offers interoperability, enabling medical data to be exchanged securely among the different systems and personnel involved and offering a variety of benefits such as a more effective communication system, time savings, and enhanced operational efficiency.
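
The tamper-resistance behind these claims comes from blockchain's basic mechanism: each record stores a cryptographic hash of the previous one, so any retroactive edit breaks the chain. Below is a minimal, single-party sketch of that mechanism (the record fields are invented); a real health-data ledger would also need consensus, access control, and encryption.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 over the block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, record: dict) -> None:
    """Link a new record to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "record": record})

def verify(chain: list) -> bool:
    """Recompute every link; any tampering breaks a prev_hash match."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain: list = []
append_record(chain, {"patient": "anon-001", "event": "lab result shared"})
append_record(chain, {"patient": "anon-001", "event": "claim submitted"})
print(verify(chain))               # True
chain[0]["record"]["event"] = "edited"
print(verify(chain))               # False: the edit is detectable
```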

According to the report, the use of blockchain technology for claims adjudication and billing management application is predicted to register 66.5% growth by the year 2025, owing to several issues such as errors, duplications, and incorrect billing. All of these problems can be eliminated with blockchain.

Nearly 400 individuals, including doctors, were convicted for $1.3bn (€1.2bn) of fraud in 2017 in the United States. The report highlights that the need to mitigate such fraud and fake drug supplies will encourage the adoption of the technology in this application segment.

See original here:
AI, machine learning, and blockchain are key for healthcare innovation - Health Europa

Artificial Intelligence Markets in IVD, 2019-2024: Breakdown by Application and Component – GlobeNewswire

Dublin, May 15, 2020 (GLOBE NEWSWIRE) -- The "Artificial Intelligence Markets in IVD" report has been added to ResearchAndMarkets.com's offering.

This report examines selected AI-based initiatives, collaborations, and tests in various in vitro diagnostic (IVD) market segments.

Artificial Intelligence Markets in IVD contains the following important data points:

The past few years have seen extraordinary advances in artificial intelligence (AI) in clinical medicine. More products have been cleared for clinical use, more new research-use-only applications have come to market, and many more are in development.

In recent years, diagnostics companies - in collaboration with AI companies - have begun implementing increasingly sophisticated machine learning techniques to improve the power of data analysis for patient care. The goal is to use the developed algorithms to standardize and aid interpretation of test data by any medical professional, irrespective of expertise. In this way, AI technology can assist pathologists, laboratorians, and clinicians in complex decision-making.

Digital pathology products and diabetes management devices were the first to come to market with data interpretation applications. The last few years have seen the use of AI interpretation apps extended to a broader range of products including microbiology, disease genetics, and cancer precision medicine.

This report will review some of the AI-linked tests and test services that have come to market and others that are in development in some of the following market segments:

Applications of AI are evolving that predict outcomes such as diagnosis, death, or hospital readmission; that improve upon standard risk assessment tools; that elucidate factors that contribute to disease progression; or that advance personalized medicine by predicting a patient's response to treatment. AI tools are in use and in development to review data and to uncover patterns in the data that can be used to improve analyses and uncover inefficiencies. Many enterprises are joining this effort.

The following are among the companies and institutions whose innovations are featured in Artificial Intelligence Markets in IVD:

Key Topics Covered

Chapter 1: Executive Summary

Chapter 2: Artificial Intelligence In Diagnostics Markets

Chapter 3: Market Analysis: Artificial Intelligence in Diagnostics

For more information about this report visit https://www.researchandmarkets.com/r/vw8l7u

Research and Markets also offers Custom Research services providing focused, comprehensive and tailored research.

The rest is here:
Artificial Intelligence Markets in IVD, 2019-2024: Breakdown by Application and Component - GlobeNewswire

Artificial intelligence is struggling to cope with how the world has changed – ZDNet

From our attitude towards work to our grasp of what two metres looks like, the coronavirus pandemic has made us rethink how we see the world. But while we've found it hard to adjust to the new reality, it's been even harder for the narrowly designed artificial intelligence models that have been created to help organisations make decisions. Based on data that described the world before the crisis, these won't be making correct predictions anymore, pointing to a fundamental problem in the way AI is being designed.

David Cox, IBM director of the MIT-IBM Watson AI Lab, explains that faulty AI is particularly problematic in the case of so-called black box predictive models: those algorithms which work in ways that are not visible, or understandable, to the user. "It's very dangerous," Cox says, "if you don't understand what's going on internally within a model in which you shovel data on one end to get a result on the other end. The model is supposed to embody the structure of the world, but there is no guarantee that it will keep working if the world changes."

The COVID-19 crisis, according to Cox, has only once more highlighted what AI experts have argued for decades: that algorithms should be more explainable.

For example, if you were building a computer program that was a complete black box, aimed at predicting what the stock market would do based on past data, there is no guarantee it's going to continue to produce good predictions in the current coronavirus crisis, he argues.

"What you actually need to do is build a broader model of the economy that acknowledges supply and demand, understands supply chains, and incorporates that knowledge, which is closer to something that an economist would do. Then you can reason about the situation more transparently," he says.
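
A toy example makes the contrast concrete. The sketch below (synthetic data and feature names are assumptions, not Cox's example) fits a transparent linear model whose weights can be read directly, and adds a crude drift check that flags when incoming data no longer resembles the training data, precisely the kind of silent failure a black box would hide.

```python
# Illustrative sketch: a transparent model plus a basic input-drift check.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
features = ["supply", "demand", "inventory"]  # hypothetical feature names
X_train = rng.normal(0, 1, size=(500, 3))
y_train = X_train @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.1, 500)

model = LinearRegression().fit(X_train, y_train)
# Unlike a black box, the fitted weights can be read and sanity-checked:
print(dict(zip(features, model.coef_.round(2))))

def drift_alert(X_new, X_ref, z=3.0):
    """Flag features whose new mean sits far outside the training mean."""
    shift = np.abs(X_new.mean(0) - X_ref.mean(0)) / X_ref.std(0)
    return [f for f, s in zip(features, shift) if s > z]

# A crisis-style shock to "supply" triggers the alert before the model
# quietly extrapolates into territory its training data never covered.
X_crisis = X_train + np.array([5.0, 0.0, 0.0])
print(drift_alert(X_crisis, X_train))  # ['supply']
```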

"Part of the reason why those models are hard to trust with narrow AIs is because they don't have that structure. If they did it would be much easier for a model to provide an explanation for why they are making decisions. These models are experiencing challenges now. COVID-19 has just made it very clear why that structure is important," he warns.

It's important not only because the technology would perform better and gain in reliability, but also because businesses would be far less reluctant to adopt AI if they trusted the tool more. Cox pulls out his own statistics on the matter: while 95% of companies believe that AI is key to their competitive advantage, only 5% say they've extensively implemented the technology.

While the numbers differ from survey to survey, the conclusion has been the same for some time now: there remains a significant gap between the promise of AI and its reality for businesses. And part of the reason that industry is struggling to deploy the technology boils down to a lack of understanding of AI. If you build a great algorithm but can't explain how it works, you can't expect workers to incorporate the new tool in their business flow. "If people don't understand or trust those tools, it's going to be a lost cause," says Cox.

Explaining AI is one of the main focuses of Cox's work. The MIT-IBM Watson Lab, which he co-directs, comprises 100 AI scientists across the US university and IBM Research, and is now in its third year of operation. The Lab's motto, which comes up first thing on its website, is self-explanatory: "AI science for real-world impact".

Back in 2017, IBM announced a $240 million investment over ten years to support research by the firm's own researchers, as well as MIT's, in the newly-founded Watson AI Lab. From the start, the collaboration's goal has had a strong industry focus, with an idea to unlock the potential of AI for "business and society". The lab's focus is not on "narrow AI", which is the technology in its limited format that most organizations know today; instead the researchers should be striving for "broad AI". Broad AI can learn efficiently and flexibly, across multiple tasks and data streams, and ultimately has huge potential for businesses. "Broad AI is next," is the Lab's promise.

The only way to achieve broad AI, explains Cox, is to bridge research and industry. The reason that AI, like many innovations, remains stubbornly stuck in the lab is that the academics behind the technology struggle to identify and respond to the real-world needs of businesses. Incentives are misaligned; the result is that organizations see the potential of the tool, but struggle to use it. AI exists and it is effective, but it is still not designed for business.

Before he joined IBM, Cox spent ten years as a professor at Harvard University. "Coming from academia and now working for IBM, my perspective on what's important has completely changed," says the researcher. "It has given me a much clearer picture of what's missing."

The partnership between IBM and MIT is a big shift from the traditional way that academia functions. "I'd rather be there in the trenches, developing those technologies directly with the academics, so that we can immediately take it back home and integrate it into our products," says Cox. "It dramatically accelerates the process of getting innovation into businesses."

IBM has now expanded the collaboration to some of its customers through a member program, which means that researchers in the Lab benefit from the input of players from different industries. From Samsung Electronics to Boston Scientific to the banking company Wells Fargo, companies in various fields and locations can explain their needs and the challenges they encounter to the academics working in the AI Watson Lab. In turn, the members can take the intellectual property generated in the Lab and run with it even before it becomes an IBM product.

Cox is adamant, however, that the MIT-IBM Watson AI Lab was also built with blue-sky research compatibility in mind. The researchers in the lab are working on fundamental, cross-industry problems that need to be solved in order to make AI more applicable. "Our job isn't to solve customer problems," says Cox. "That's not the right use for the tool that is MIT. There are brilliant people in MIT that can have a hugely disruptive impact with their ideas, and we want to use that to resolve questions like: why is it that AI is so hard to use or impact in business?"

Explainability of AI is only one area of focus. There is also AutoAI, for example, which consists of using AI to build AI models, and would let business leaders engage with the technology without having to hire expensive, highly skilled engineers and software developers. Then there is the issue of data labeling: according to Cox, up to 90% of a data science project consists of meticulously collecting, labeling, and curating the data. "Only 10% of the effort is the fancy machine-learning stuff," he says. "That's insane. It's a huge inhibitor to people using AI, let alone to benefiting from it."
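
IBM's AutoAI itself is a proprietary product, but the underlying idea, searching over model configurations automatically rather than hand-tuning them, can be illustrated with a generic scikit-learn grid search. This is an illustrative stand-in under that assumption, not IBM's tooling:

```python
# Sketch of the *idea* behind automated model building: score candidate
# configurations programmatically and pick the best one automatically.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, 5, None]},
    cv=5,  # 5-fold cross-validation scores each configuration
)
search.fit(X, y)

# The automated-model-building promise: the best configuration is
# selected without an engineer tuning it by hand.
print(search.best_params_, round(search.best_score_, 3))
```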

Doing more with less data, in fact, was one of the key features of the Lab's latest research project, dubbed Clevrer, in which an algorithm can recognize objects and reason about their behaviors in physical events from videos. This model is a neuro-symbolic one, meaning that the AI can learn unsupervised, by looking at content and pairing it with questions and answers; ultimately, it requires far less training data and manual annotation.

All of these issues have been encountered one way or another not only by IBM, but also by the companies that signed up to the Lab's member program. "Those problems just appear again and again," says Cox, whether you are operating in electronics, med-tech, or banking. Hearing similar feedback from all areas of business only emboldened the Lab's researchers to double down on the problems that mattered.

The Lab has about 50 projects running at any given time, carefully selected every year by both MIT and IBM on the basis that they should be both intellectually interesting and effectively tackle the problem of broad AI. Cox maintains that within this portfolio, some ideas are very ambitious and can even border on blue-sky research; they are balanced, on the other hand, with other projects that are more likely to provide near-term value.

Although more prosaic than the idea of preserving purely blue-sky research, putting industry and academia in the same boat might indeed be the most pragmatic solution for accelerating the adoption of innovation and making sure AI delivers on its promise.

Follow this link:
Artificial intelligence is struggling to cope with how the world has changed - ZDNet