Archive for the ‘Artificial Intelligence’ Category

How Artificial Intelligence Is Humanizing the Healthcare Industry – HealthITAnalytics.com

December 17, 2019 - Seventy-nine percent of healthcare professionals indicate that artificial intelligence tools have helped mitigate clinician burnout, suggesting that the technology enables providers to deliver more engaging, patient-centered care, according to a survey conducted by MIT Technology Review and GE Healthcare.

As artificial intelligence tools have slowly made their way into the healthcare industry, many have voiced concerns that the technology will remove the human aspect of patient care, leaving individuals in the care of robots and machines.

Healthcare institutions have been anticipating the impact that artificial intelligence (AI) will have on the performance and efficiency of their operations and their workforces, and on the quality of patient care, the report stated.

Contrary to common, yet unproven, fears that machines will replace human workers, AI technologies in health care may actually be re-humanizing healthcare, just as the system itself shifts to value-based care models that may favor the outcome patients receive instead of the number of patients seen.

Through interviews with over 900 healthcare professionals, researchers found that providers are already using AI to improve data analysis, enable better treatment and diagnosis, and reduce administrative burdens, all of which free up clinicians' time to perform other tasks.

"Numerous technologies are in play today to allow healthcare professionals to deliver the best care, increasingly customized to patients, and at lower costs," the report said.

"Our survey has found medical professionals are already using AI tools to improve both patient care and back-end business processes, from increasing the accuracy of oncological diagnosis to increasing the efficiency of managing schedules and workflow."

The survey found that medical staff with pilot AI projects spend one-third less time writing reports, while those with extensive AI programs spend two-thirds less time writing reports. Additionally, 45 percent of participants said that AI has helped increase consultation time, as well as time to perform surgery and other procedures.

For those with the most extensive AI rollouts, 70 percent expect to spend more time performing procedures than doing administrative or other work.

AI is being used to assume many of a physician's more mundane administrative responsibilities, such as taking notes or updating electronic health records, researchers said. The more AI is deployed, the less time doctors spend at their computers.

Respondents also indicated that AI is helping them gain an edge in the healthcare market. Eighty percent of business and administrative healthcare professionals said that AI is helping them improve revenue opportunities, while 81 percent said they think AI will make them more competitive providers.

The report also showed that AI-related projects will continue to receive an increasing portion of healthcare spending now and in the future. Seventy-nine percent of respondents said they will be spending more to develop AI applications.

Respondents also indicated that AI has increased the operational efficiency of healthcare organizations. Seventy-eight percent of healthcare professionals said that their AI deployments have already created workflow improvements in areas including schedule management.

Using AI to optimize schedule management and other administrative tasks creates opportunities to leverage AI for more patient-facing applications, allowing clinicians to work with patients more closely.

"AI's core value proposition is in both improving diagnostic abilities and reducing regulatory and data complexities by automating and streamlining workflow. This allows healthcare professionals to harness the wealth of insight the industry is generating without drowning in it," the report said.

AI has also helped healthcare professionals reduce clinical errors. Medical staff who don't use AI cited fighting clinical error as a key challenge two-thirds of the time, more than double the rate of medical staff who have AI deployments.

Additionally, advanced tools are helping users identify and treat clinical issues. Seventy-five percent of respondents agree that AI has enabled better predictions in the treatment of disease.

AI-enabled decision-support algorithms allow medical teams to make more accurate diagnoses, researchers noted.

This means doing something big by doing something really small: noticing minute irregularities in patient information. That could be the difference between acting on a life-threatening issue, or missing it.

While AI has shown a lot of promise in the industry, the technology still comes with challenges. Fifty-seven percent of respondents said that integrating AI applications into existing systems is challenging, and more than half of professionals planning to deploy AI raise concerns about medical professional adoption, support from top management, and technical support.

To overcome these challenges, researchers recommended that clinical staff collaborate to implement and deploy AI tools.

"AI needs to work for healthcare professionals as part of a robust, integrated ecosystem. It needs to be more than deploying technology; in fact, the more humanized the application of AI is, the more it will be adopted and improve results and return on investment. After all, in healthcare, the priority is the patient," researchers concluded.

Read the rest here:

How Artificial Intelligence Is Humanizing the Healthcare Industry - HealthITAnalytics.com

Zebra Medical Vision Announces Agreement With DePuy Synthes to Deploy Cloud Based Artificial Intelligence Orthopaedic Surgical Planning Tools -…

KIBBUTZ SHEFAYIM, Israel--(BUSINESS WIRE)--Zebra Medical Vision, the deep learning medical imaging analytics company, announces today a global co-development and commercialization agreement with DePuy Synthes* to bring Artificial Intelligence (AI) opportunities to orthopaedics, based on imaging data.

Every year, millions of orthopaedic procedures worldwide use traditional two-dimensional (2D) CT scans or MRI imaging to assist with pre-operative planning. CT scans and MRI imaging can be expensive, and CT scans are associated with more radiation and are uncomfortable for some patients. Zebra-Med's technology uses algorithms to create three-dimensional (3D) models from X-ray images. This technology aims to bring affordable pre-operative surgical planning to surgeons worldwide without the need for traditional MRI or CT-based imaging.

"We are thrilled to start this collaboration and have the opportunity to impact and improve orthopaedic procedures and outcomes in areas including the knee, hip, shoulder, trauma, and spine care," says Eyal Gura, Co-Founder and CEO of Zebra Medical Vision. "We share a common vision surrounding the impact we can have on patients' lives through the use of AI, and we are happy to initiate such a meaningful strategic partnership, leveraging the tools and knowledge we have built around bone health AI in the last five years."

This technology is planned to be introduced as part of DePuy Synthes' VELYS Digital Surgery solutions for pre-operative, operative, and post-operative patient care.

Read more on Zebra-Meds blog: https://zebramedblog.wordpress.com/another-dimension-to-zebras-ai-how-we-impact-the-orthopedic-world

About Zebra Medical Vision

Zebra Medical Vision's imaging analytics platform allows healthcare institutions to identify patients at risk of disease and offer improved, preventative treatment pathways, to improve patient care. The company is funded by Khosla Ventures, Marc Benioff, Intermountain Investment Fund, OurCrowd Qure, Aurum, aMoon, Nvidia, Johnson & Johnson Innovation JJDC, Inc. (JJDC) and Dolby Ventures. Zebra Medical Vision has raised $52 million in funding to date, and was named a Fast Company Top-5 AI and Machine Learning company. Zebra-Med is a global leader in FDA-cleared AI products, and is installed in hospitals globally, from Australia to India, Europe to the U.S., and the LATAM region.

*Agreement is between DePuy Ireland Unlimited Company and Zebra Medical Vision.

Read the original post:

Zebra Medical Vision Announces Agreement With DePuy Synthes to Deploy Cloud Based Artificial Intelligence Orthopaedic Surgical Planning Tools -...

Top Artificial Intelligence Books Released In 2019 That You Must Read – Analytics India Magazine

Artificial Intelligence has had many breakthroughs in 2019. In fact, we can go as far as to say that it has trickled down to every facet of modern life. With AI now part of our daily lives, it is important that everyone understands how it is affecting us, the changes it is bringing about, the threats it poses, and the possible solutions.

While some people still think AI is only robots and chatbots, it is important that they know of the advancements in the field. There are many online courses and books on artificial intelligence that give the reader, whether a professional or an AI enthusiast, a comprehensive understanding of the subject.

In this article, we have compiled a list of books on artificial intelligence published in 2019 that one can use to learn more about this fascinating technology:

Written by Dr Eric Topol, an American cardiologist, geneticist and digital medicine researcher, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again is an Amazon #1 bestseller this year.

This book boldly sets out the potential of AI in healthcare and deep medicine. Topol calls AI the next industrial revolution. The book contains short examples to highlight AI's importance, along with a proper look at how AI is likely to transform the medical industry. Topol believes that AI can not only help enhance diagnosis and treatment but also save clinicians time on other activities, like taking notes and reading scans, which will eventually let them spend more time with patients. This is a resourceful book for anyone interested in AI and its impact on healthcare.

Written by Dr Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control is possibly one of the most important books on AI this year. The book discusses the threats posed by artificial intelligence and the solutions to them. Russell uses dry humour to keep the book from reading like a dry compendium of information.

The book is for both the public and AI researchers. Russell doesn't attack AI in it; he points out the threats and the solutions as someone who feels a sense of responsibility for the changes and the revolution his own field is bringing about.

This book, The Creativity Code, is written by Marcus du Sautoy, a professor of mathematics at the University of Oxford and a fellow of the Royal Society.

This book is a fact-packed, funny journey into the world of AI. It questions the present meaning of the word creativity and asks whether machines will be able to crack the code of human emotion.

This book dances around the concept of using AI assistance in art-making, and puts the math behind ML and AI at the centre of its discussion of art.

Janelle Shane's AIweirdness.com is an AI humour blog that offers a different take on AI. In this book, You Look Like a Thing and I Love You, the author uses humorous cartoons and pop-culture illustrations to take a look inside the algorithms used in machine learning.

The authors of this book, Gary Marcus, a scientist and the founder and CEO of Robust.AI, and Ernest Davis, a professor of computer science at NYU, explain what AI is, what it is not, and what it could become if we approached it with more rigour and creativity. Many authors tend to hype AI, both its promise and its dangers; the authors here seem to have found a balance in between.

The book, Rebooting AI: Building Artificial Intelligence We Can Trust, highlights the weaknesses of the current technology, where it is going wrong, and what we should be doing to find solutions. It isn't a book only researchers can read; it is also written for the general public, with many illustrative examples and good use of humour where needed.

The first edition of this book, written by Alex Castrounis, answers one of the most critical questions of today's age concerning business and AI: how can I build a successful business using AI?

AI for People and Business: A Framework for Better Human Experiences and Business Success is written for anyone interested in making use of AI in their organisation.

The author examines the value of AI and offers guidance for developing an AI strategy that benefits both people and businesses.

This book by Andriy Burkov, The Hundred-Page Machine Learning Book, remains true to its name and manages the seemingly impossible task of bundling all of machine learning into roughly a hundred pages.

This book provides an in-depth introduction to the field of machine learning, with a smart choice of topics covering both theory and practice.

If you are new to the field of machine learning, this book gives you a comprehensive introduction to its vocabulary and terminology.

Excerpt from:

Top Artificial Intelligence Books Released In 2019 That You Must Read - Analytics India Magazine

Why video games and board games aren't a good measure of AI intelligence – The Verge

Measuring the intelligence of AI is one of the trickiest but most important questions in the field of computer science. If you can't understand whether the machine you've built is cleverer today than it was yesterday, how do you know you're making progress?

At first glance, this might seem like a non-issue. "Obviously AI is getting smarter" is one reply. Just look at all the money and talent pouring into the field. Look at the milestones, like beating humans at Go, and the applications that were impossible to solve a decade ago that are commonplace today, like image recognition. How is that not progress?

Another reply is that these achievements arent really a good gauge of intelligence. Beating humans at chess and Go is impressive, yes, but what does it matter if the smartest computer can be out-strategized in general problem-solving by a toddler or a rat?

This is a criticism put forward by AI researcher François Chollet, a software engineer at Google and a well-known figure in the machine learning community. Chollet is the creator of Keras, a widely used framework for developing neural networks, the backbone of contemporary AI. He's also written numerous textbooks on machine learning and maintains a popular Twitter feed where he shares his opinions on the field.

In a recent paper titled "On the Measure of Intelligence," Chollet also laid out an argument that the AI world needs to refocus on what intelligence is and isn't. If researchers want to make progress toward general artificial intelligence, says Chollet, they need to look past popular benchmarks like video games and board games, and start thinking about the skills that actually make humans clever, like our ability to generalize and adapt.

In an email interview with The Verge, Chollet explained his thoughts on this subject, talking through why he believes current achievements in AI have been misrepresented, how we might measure intelligence in the future, and why scary stories about superintelligent AI (as told by Elon Musk and others) have an unwarranted hold on the public's imagination.

This interview has been lightly edited for clarity.

In your paper, you describe two different conceptions of intelligence that have shaped the field of AI. One presents intelligence as the ability to excel in a wide range of tasks, while the other prioritizes adaptability and generalization, which is the ability for AI to respond to novel challenges. Which framework is a bigger influence right now, and what are the consequences of that?

In the first 30 years of the history of the field, the most influential view was the former: intelligence as a set of static programs and explicit knowledge bases. Right now, the pendulum has swung very far in the opposite direction: the dominant way of conceptualizing intelligence in the AI community is the blank slate or, to use a more relevant metaphor, the freshly initialized deep neural network. Unfortunately, it's a framework that's been going largely unchallenged and even largely unexamined. These questions have a long intellectual history, literally decades, and I don't see much awareness of this history in the field today, perhaps because most people doing deep learning today joined the field after 2016.

It's never a good thing to have such intellectual monopolies, especially as an answer to poorly understood scientific questions. It restricts the set of questions that get asked. It restricts the space of ideas that people pursue. I think researchers are now starting to wake up to that fact.

In your paper, you also make the case that AI needs a better definition of intelligence in order to improve. Right now, you argue, researchers focus on benchmarking performance in static tests like beating video games and board games. Why do you find this measure of intelligence lacking?

The thing is, once you pick a measure, you're going to take whatever shortcut is available to game it. For instance, if you set chess-playing as your measure of intelligence (which we started doing in the 1970s until the 1990s), you're going to end up with a system that plays chess, and that's it. There's no reason to assume it will be good for anything else at all. You end up with tree search and minimax, and that doesn't teach you anything about human intelligence. Today, pursuing skill at video games like Dota or StarCraft as a proxy for general intelligence falls into the exact same intellectual trap.
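To make the point concrete, here is a minimal, hypothetical sketch of the minimax idea Chollet mentions, applied to a toy subtraction game rather than chess (the game, function names, and scoring are illustrative assumptions, not taken from Chollet's paper). The program's entire "skill" comes from exhaustively searching the game tree; nothing resembling general intelligence is involved.

```python
# Toy minimax: players alternately take 1-3 stones; whoever takes the last stone wins.
# The engine "plays well" purely by brute-force tree search, which illustrates why
# high skill at a fixed game says nothing about general intelligence.

def minimax(stones: int, maximizing: bool) -> int:
    """Return +1 if the maximizing player wins with perfect play, -1 otherwise."""
    if stones == 0:
        # The previous player took the last stone, so the player to move has lost.
        return -1 if maximizing else 1
    outcomes = [
        minimax(stones - take, not maximizing)
        for take in range(1, min(3, stones) + 1)
    ]
    return max(outcomes) if maximizing else min(outcomes)

if __name__ == "__main__":
    for n in range(1, 9):
        winner = "first" if minimax(n, True) == 1 else "second"
        print(f"{n} stones: the {winner} player wins with perfect play")
```

Swap in chess rules and add pruning and you get a strong chess program, but the resulting system still only plays chess, which is exactly Chollet's point.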

This is perhaps not obvious because, in humans, skill and intelligence are closely related. The human mind can use its general intelligence to acquire task-specific skills. A human that is really good at chess can be assumed to be pretty intelligent because, implicitly, we know they started from zero and had to use their general intelligence to learn to play chess. They weren't designed to play chess. So we know they could direct this general intelligence to many other tasks and learn to do these tasks similarly efficiently. That's what generality is about.

But a machine has no such constraints. A machine can absolutely be designed to play chess. So the inference we do for humans, "can play chess, therefore must be intelligent," breaks down. Our anthropomorphic assumptions no longer apply. General intelligence can generate task-specific skills, but there is no path in reverse, from task-specific skill to generality. At all. So in machines, skill is entirely orthogonal to intelligence. You can achieve arbitrary skills at arbitrary tasks as long as you can sample infinite data about the task (or spend an infinite amount of engineering resources). And that will still not get you one inch closer to general intelligence.

The key insight is that there is no task where achieving high skill is a sign of intelligence, unless the task is actually a meta-task that involves acquiring new skills over a broad [range] of previously unknown problems. And that's exactly what I propose as a benchmark of intelligence.

If these current benchmarks dont help us develop AI with more generalized, flexible intelligence, why are they so popular?

There's no doubt that the effort to beat human champions at specific well-known video games is primarily driven by the press coverage these projects can generate. If the public wasn't interested in these flashy milestones that are so easy to misrepresent as steps toward superhuman general AI, researchers would be doing something else.

I think it's a bit sad because research should be about answering open scientific questions, not generating PR. If I set out to solve Warcraft III at a superhuman level using deep learning, you can be quite sure that I will get there as long as I have access to sufficient engineering talent and computing power (which is on the order of tens of millions of dollars for a task like this). But once I'd done it, what would I have learned about intelligence or generalization? Well, nothing. At best, I'd have developed engineering knowledge about scaling up deep learning. So I don't really see it as scientific research because it doesn't teach us anything we didn't already know. It doesn't answer any open question. If the question was, "Can we play X at a superhuman level?", the answer is definitely, "Yes, as long as you can generate a sufficiently dense sample of training situations and feed them into a sufficiently expressive deep learning model." We've known this for some time. (I actually said as much a while before the Dota 2 and StarCraft II AIs reached champion level.)

What do you think the actual achievements of these projects are? To what extent are their results misunderstood or misrepresented?

One stark misrepresentation I'm seeing is the argument that these high-skill game-playing systems represent real progress toward AI systems "which can handle the complexity and uncertainty of the real world" [as OpenAI claimed in a press release about its Dota 2-playing bot OpenAI Five]. They do not. If they did, it would be an immensely valuable research area, but that is simply not true. Take OpenAI Five, for instance: it wasn't able to handle the complexity of Dota 2 in the first place because it was trained with 16 characters, and it could not generalize to the full game, which has over 100 characters. It was trained over 45,000 years of gameplay (then again, note how training data requirements grow combinatorially with task complexity), yet the resulting model proved very brittle: non-champion human players were able to find strategies to reliably beat it in a matter of days after the AI was made available for the public to play against.

If you want to one day become able to handle the complexity and uncertainty of the real world, you have to start asking questions like, what is generalization? How do we measure and maximize generalization in learning systems? And that's entirely orthogonal to throwing 10x more data and compute at a big neural network so that it improves its skill by some small percentage.

So what would be a better measure of intelligence for the field to focus on?

In short, we need to stop evaluating skill at tasks that are known beforehand, like chess or Dota or StarCraft, and instead start evaluating skill-acquisition ability. This means only using new tasks that are not known to the system beforehand, measuring the prior knowledge about the task that the system starts with, and measuring the sample-efficiency of the system (which is how much data is needed to learn to do the task). The less information (prior knowledge and experience) you require in order to reach a given level of skill, the more intelligent you are. And today's AI systems are really not very intelligent at all.
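As a rough illustration of evaluating skill-acquisition rather than raw skill, here is a hypothetical sketch (the function names and the budget-based protocol are my assumptions, not Chollet's formal framework): expose each system to a previously unseen task and record how much training experience it needs to reach a target skill level.

```python
# Hypothetical sketch: score learners by the smallest training budget that gets
# them to a target skill on a task they have never seen before. Smaller is more
# sample-efficient.
from typing import Callable, Iterable, Optional

def samples_to_reach(target_skill: float,
                     train_and_eval: Callable[[int], float],
                     budgets: Iterable[int]) -> Optional[int]:
    """Smallest budget (number of training examples) at which the learner's
    evaluated skill meets or exceeds target_skill, or None if it never does."""
    for n in sorted(budgets):
        if train_and_eval(n) >= target_skill:
            return n
    return None

# Usage (hypothetical): `system_a` and `system_b` would wrap real training and
# evaluation code for a brand-new task.
# cost_a = samples_to_reach(0.9, system_a, budgets=[10, 100, 1_000, 10_000])
# cost_b = samples_to_reach(0.9, system_b, budgets=[10, 100, 1_000, 10_000])
# The learner with the smaller cost acquired the skill more efficiently; a full
# measure would also account for the prior knowledge each system started with.
```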

In addition, I think our measure of intelligence should make human-likeness more explicit, because there may be different types of intelligence, and human-like intelligence is what we're really talking about, implicitly, when we talk about general intelligence. And that involves trying to understand what prior knowledge humans are born with. Humans learn incredibly efficiently: they only require very little experience to acquire new skills, but they don't do it from scratch. They leverage innate prior knowledge, besides a lifetime of accumulated skills and knowledge.

[My recent paper] proposes a new benchmark dataset, ARC, which looks a lot like an IQ test. ARC is a set of reasoning tasks, where each task is explained via a small sequence of demonstrations, typically three, and you should learn to accomplish the task from these few demonstrations. ARC takes the position that every task your system is evaluated on should be brand-new and should only involve knowledge of a kind that fits within human innate knowledge. For instance, it should not feature language. Currently, ARC is totally solvable by humans, without any verbal explanations or prior training, but it is completely unapproachable by any AI technique we've tried so far. That's a big flashing sign that there's something going on there, that we're in need of new ideas.
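For readers curious what an ARC task physically looks like, here is a small sketch assuming the JSON layout of the publicly released ARC dataset (a "train" list of demonstration input/output grids plus held-out "test" pairs); the file path in the comment is a hypothetical example.

```python
# Sketch of reading an ARC task, assuming the JSON layout of the public ARC
# dataset: {"train": [{"input": grid, "output": grid}, ...], "test": [...]},
# where each grid is a small 2-D list of integer color codes.
import json

def describe_task(path: str) -> None:
    with open(path) as f:
        task = json.load(f)
    for i, pair in enumerate(task["train"]):
        in_grid, out_grid = pair["input"], pair["output"]
        print(f"demo {i}: input {len(in_grid)}x{len(in_grid[0])} -> "
              f"output {len(out_grid)}x{len(out_grid[0])}")
    print(f"{len(task['test'])} held-out test pair(s) to solve from the demos alone")

# describe_task("data/training/0a938d79.json")  # hypothetical path to one task file
```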

Do you think the AI world can continue to progress by just throwing more computing power at problems? Some have argued that, historically, this has been the most successful approach to improving performance, while others have suggested that we're soon going to see diminishing returns if we just follow this path.

This is absolutely true if youre working on a specific task. Throwing more training data and compute power at a vertical task will increase performance on that task. But it will gain you about zero incremental understanding of how to achieve generality in artificial intelligence.

If you have a sufficiently large deep learning model, and you train it on a dense sampling of the input-cross-output space for a task, then it will learn to solve the task, whatever that may be: Dota, StarCraft, you name it. It's tremendously valuable. It has almost infinite applications in machine perception problems. The only problem here is that the amount of data you need is a combinatorial function of task complexity, so even slightly complex tasks can become prohibitively expensive.

Take self-driving cars, for instance. Millions upon millions of training situations aren't sufficient for an end-to-end deep learning model to learn to safely drive a car. Which is why, first of all, L5 self-driving isn't quite there yet. And second, the most advanced self-driving systems are primarily symbolic models that use deep learning to interface these manually engineered models with sensor data. If deep learning could generalize, we'd have had L5 self-driving in 2016, and it would have taken the form of a big neural network.

Lastly, given you're talking about constraints for current AI systems, it seems worth asking about the idea of superintelligence: the fear that an extremely powerful AI could cause extreme harm to humanity in the near future. Do you think such fears are legitimate?

No, I don't believe the superintelligence narrative to be well-founded. We have never created an autonomous intelligent system. There is absolutely no sign that we will be able to create one in the foreseeable future. (This isn't where current AI progress is headed.) And we have absolutely no way to speculate what its characteristics may be if we do end up creating one in the far future. To use an analogy, it's a bit like asking in the year 1600: "Ballistics has been progressing pretty fast! So, what if we had a cannon that could wipe out an entire city. How do we make sure it would only kill the bad guys?" It's a rather ill-formed question, and debating it in the absence of any knowledge about the system we're talking about amounts, at best, to a philosophical argument.

One thing about these superintelligence fears is that they mask the fact that AI has the potential to be pretty dangerous today. We don't need superintelligence in order for certain AI applications to represent a danger. I've written about the use of AI to implement algorithmic propaganda systems. Others have written about algorithmic bias, the use of AI in weapons systems, or about AI as a tool of totalitarian control.

There's a story about the siege of Constantinople in 1453. While the city was fighting off the Ottoman army, its scholars and rulers were debating what the sex of angels might be. Well, the more energy and attention we spend discussing the sex of angels or the value alignment of hypothetical superintelligent AIs, the less we have for dealing with the real and pressing issues that AI technology poses today. There's a well-known tech leader who likes to depict superintelligent AI as an existential threat to humanity. Well, while these ideas are grabbing headlines, you're not discussing the ethical questions raised by the deployment of insufficiently accurate self-driving systems on our roads that cause crashes and loss of life.

If one accepts these criticisms, that there is not currently a technical grounding for these fears, why do you think the superintelligence narrative is popular?

Ultimately, I think it's a good story, and people are attracted to good stories. It's not a coincidence that it resembles eschatological religious stories, because religious stories have evolved and been selected over time to powerfully resonate with people and to spread effectively. For the very same reason, you also find this narrative in science fiction movies and novels. The reason why it's used in fiction, the reason why it resembles religious narratives, and the reason why it has been catching on as a way to understand where AI is headed are all the same: it's a good story. And people need stories to make sense of the world. There's far more demand for such stories than demand for understanding the nature of intelligence or understanding what drives technological progress.

Original post:

Why video games and board games aren't a good measure of AI intelligence - The Verge

New Findings Show Artificial Intelligence Software Improves Breast Cancer Detection and Physician Accuracy – P&T Community

CHICAGO, Dec. 19, 2019 /PRNewswire/ -- A New York City-based, large-volume private practice radiology group conducted a quality assurance review that included an 18-month software evaluation in the breast center, which comprises nine (9) specialist radiologists, using FDA-cleared artificial intelligence software from Koios Medical, Inc. as a second opinion for analyzing and assessing lesions found during breast ultrasound examinations.

Over the evaluation period, radiologists analyzed over 6,000 diagnostic breast ultrasound exams. Radiologists used Koios DS Breast decision support software (Koios Medical, Inc.) to assist in lesion classification and risk assessment. As part of the normal diagnostic workflow, radiologists would activate Koios DS and review the software's findings alongside clinical details to formulate the best management plan.

Analysis was then performed comparing the physicians' diagnostic performance to the 18-month period prior to the introduction of the AI-enabled software. Comparing the two periods, physicians recommended biopsy for suspicious lesions at a similar rate (17%) and performed 14% more biopsies, increasing the cancer detection rate (from 8.5 to 11.8 per 1,000 diagnostic exams) while simultaneously experiencing a significant reduction in benign biopsies (i.e., false positives). Noteworthy is the aggregate nature of the findings, as adoption of the software gradually increased over the 18-month evaluation period. Trailing 6-month results indicate a benign biopsy reduction exceeding 20% across the group. Positive predictive value, the proportion of positive findings (such as biopsy recommendations) that turn out to be true positives, improved by over 20%.
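As a quick illustration of the metric, positive predictive value is simply true positives divided by all positive calls; the counts in the sketch below are invented for illustration and are not taken from the practice's review.

```python
# Illustrative-only PPV calculation; the counts are made up, not the group's data.

def ppv(true_positives: int, false_positives: int) -> float:
    """Fraction of positive calls (e.g., biopsy recommendations) that are cancer."""
    return true_positives / (true_positives + false_positives)

before = ppv(true_positives=40, false_positives=160)  # 0.20
after = ppv(true_positives=50, false_positives=150)   # 0.25
print(f"PPV before: {before:.2f}, after: {after:.2f}, "
      f"relative improvement: {(after / before - 1):.0%}")  # 25% relative gain
```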

"Physicians were skeptical in the beginning that software could help them given their years of training and specialization focusing on breast radiology. With experience using Koios software, however, over time and seeing the preliminary analysis they came to realize that the Koios AI software was gradually impacting patient care in a very positive way.Initially, radiologists completed internal studies that verified Koios software's accuracy, and discovered the larger impact happens gradually over time. In looking at the statistics, physicians were pleasantly surprised to see the benefit was even greater than expected. The software has the potential to make a profound impact on overall quality," says Vice President of Activations Amy Fowler.

Koios DS Breast 2.0 is artificial intelligence software designed around a dataset of over 450,000 breast ultrasound images with known results. It is intended to assist physicians analyzing breast ultrasound images and produces a machine learning-generated probability of malignancy. This probability is then checked against and aligned to the lesion's assigned BI-RADS category, the scale physicians use to recommend care pathways.
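The following sketch is illustrative only: it shows the general idea of banding a model's probability of malignancy into BI-RADS-style categories so it can be compared with a radiologist's assigned category. The thresholds are loosely adapted from published ACR BI-RADS likelihood-of-malignancy ranges and do not represent Koios DS's actual logic or calibration.

```python
# Illustrative-only: map a probability of malignancy to a BI-RADS-style band.
# Thresholds are approximate and for demonstration; they are NOT Koios DS's logic.

def birads_band(p_malignancy: float) -> str:
    if p_malignancy <= 0.02:
        return "3 (probably benign)"
    if p_malignancy <= 0.10:
        return "4A (low suspicion)"
    if p_malignancy <= 0.50:
        return "4B (moderate suspicion)"
    if p_malignancy < 0.95:
        return "4C (high suspicion)"
    return "5 (highly suggestive of malignancy)"

# A second-opinion tool would flag cases where the model's band disagrees with
# the radiologist's assigned BI-RADS category for further review.
print(birads_band(0.07))  # -> "4A (low suspicion)"
```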

"We are seeing the promise of machine learning as a physician's assistant coming to fruition. This will undoubtedly improve quality, outcomes, and patient experiencesand ultimately save lives. Koios DS Breast 2.0 is proving this within several physician groups across the US," says company CFO Graham Anderson.

Koios DS Breast 2.0 can be used in conjunction with and integrated directly into most major viewing workstation platforms and is directly available on the LOGIQ E10, GE Healthcare's next-generation digital ultrasound system that integrates artificial intelligence, cloud connectivity, and advanced algorithms. Artificial intelligence software-generated results can be exported directly into a patient's record. Koios Medical continues to experiment with thyroid ultrasound image data and expects to add to its offering in the next year.

"We could not be more encouraged by the results these physicians are seeing. All our prior testing on historical images have consistently demonstrated high levels of system accuracy. Now, and for the first time ever, physicians using AI software as a second opinion with patients in real-time, within their practice, are delivering on the promise to measurably elevate quality of care. Catching more cancers earlier while reducing avoidable procedures and improving patient experiences is fast becoming a reality," says Koios Medical CEO Chad McClennan.

Discussing future plans during the recent Radiological Society of North America (RSNA) annual meeting in Chicago, McClennan shared, "Several major academic medical centers and community hospitals are utilizing our software and conducting studies into the quality impact for publication. We expect those results to mimic these early clinical findings and further validate the experience of our physician customers both in New York City and across the country, and most importantly, the positive patient impact."

About Koios Medical:

Koios Medical develops medical software to assist physicians interpreting ultrasound images and applies deep machine learning methods to the process of reaching an accurate diagnosis. The FDA-cleared Koios DS platform uses advanced AI algorithms to assist in the early detection of disease while reducing recommendations for biopsy of benign tissue. Patented technology saves physicians time, helps improve patient outcomes, and reduces healthcare costs. Koios Medical is presently focused on the breast and thyroid cancer diagnosis assistance market. Women with dense breast tissue (over 40% in the US) often require an alternative to mammography for diagnosis. Ultrasound is a widely available and effective alternative to mammography with no radiation and is standard of care for breast cancer diagnosis. To learn more please contact us at info@koiosmedical.com or (732) 529-5755.

Learn more about Koios at: koiosmedical.com

View original content to download multimedia: http://www.prnewswire.com/news-releases/new-findings-show-artificial-intelligence-software-improves-breast-cancer-detection-and-physician-accuracy-300978087.html

SOURCE Koios Medical

Continued here:

New Findings Show Artificial Intelligence Software Improves Breast Cancer Detection and Physician Accuracy - P&T Community