Archive for the ‘AI’ Category

Beauty Standards Make Me Ashamed Of My Features & AI Makes It Worse – Refinery29

Read the original:

Beauty Standards Make Me Ashamed Of My Features & AI Makes It Worse - Refinery29

Microsoft collaborates with Mass General Brigham and University of Wisconsin–Madison to further advance AI foundation models for medical imaging -…

Collaborations empower the healthcare industry to create a vast array of medical imaging copilot applications that will help enhance radiologists' experiences and contribute to better patient outcomes

REDMOND, Wash., July 24, 2024 -- Microsoft Corp. on Wednesday announced collaborations with leading academic medical systems Mass General Brigham and the University of Wisconsin School of Medicine and Public Health, along with its partnering health system, UW Health, to accelerate solving some of the biggest challenges in radiology and further advance AI in medical imaging to drive clinician efficiency and enable better health outcomes. The collaborations will foster research and innovation tied to the advancement of high-performing multimodal AI foundation models that empower the entire radiology ecosystem to build on top of the secure Microsoft Azure AI platform and extend the Nuance (a Microsoft company) suite of radiology applications, delivering a wide array of high-value medical imaging copilot applications.

Medical imaging plays a crucial role in healthcare. Health systems spend an estimated $65 billion each year on imaging,[1] and approximately 80% of all hospital and health system visits include at least one imaging exam related to more than 23,000 conditions.[2] Faced with challenges that the overall healthcare industry grapples with, including physician burnout and staffing shortages, healthcare organizations are looking to generative AI to help reduce workloads, enhance workflow efficiencies, and improve the accuracy and consistency of medical image analysis for care delivery, clinical trials recruitment and drug discovery. Generative AI in radiology also may help enhance patient experiences by reducing wait times for imaging results, further opening up access to care and improving the quality of care.

With the right multimodal data-enriched medical imaging foundation models, Microsoft and its partners will explore how advanced algorithms and applications can help radiologists and other clinicians interpret medical images, as well as assist with report generation, disease classification and structured data analysis.

Microsoft has long been focused on the potential of providing high-performing first- and third-party advanced foundation models and copilot experiences across the ecosystem to empower everyone on the planet to achieve more. Additionally, Microsoft has been an innovator in the space of medical imaging research and biomedical natural language processing, collaborating with experts in medicine to democratize AI and empower researchers, hospitals, life science organizations and healthcare providers to develop new models and systems.

Through these collaborations, researchers and clinicians at Mass General Brigham, UW School of Medicine and Public Health, and UW Health will work with Microsoft to further advance state-of-the-art multimodal foundation models. The organizations will collaborate on the development, testing and validation of the latest breakthrough technology, deploying real-world use cases into clinical workflows,[3] including via Nuance's PowerScribe radiology reporting platform, used by the majority of radiologists in the U.S., and the Nuance Precision Imaging Network, which offers a single point of access to automate and scale use of third-party medical imaging AI models for a range of modalities and specialties.

"Generative AI has transformative potential to overcome traditional barriers in AI product development and to accelerate the impact of these technologies on clinical care. As healthcare leaders, we need to carefully and responsibly develop and evaluate such tools to ensure high-quality care is in no way compromised," said Keith J. Dreyer, D.O., Ph.D., chief data science officer and chief imaging officer at Mass General Brigham and leader of the Mass General Brigham AI business. "Foundation models fine-tuned on Mass General Brigham's vast multimodal longitudinal data assets can enable a shorter development cycle of AI/ML-based software as a medical device and other clinical applications, for example, to automate the segmentation of organs and abnormalities in medical imaging and increase radiologists' efficiency and consistency."

"Our institutions have a reputation for embracing technical innovations as opportunities to lead the transformation of our field with new scientific discovery and improvement in clinical care," said Scott Reeder, M.D., Ph.D., chair of the Department of Radiology, University of Wisconsin School of Medicine and Public Health, and radiologist at UW Health. "We are excited to collaborate with Microsoft on the development, validation and thoughtful clinical investigation of generative AI in the medical imaging space. Our focus is to bridge the gap within medical imaging from innovation to patient care in ways that improve outcomes and make innovative care more accessible."

"We are proud to announce our expanded collaborations with leading institutions like Mass General Brigham and UW. Along with other industry partners, our joint efforts aim to leverage the power of imaging foundation models to improve experiences and workflow efficiency across the radiology ecosystem in a way that is reliable, transparent and secure," said Peter Durlach, corporate vice president, Microsoft Health and Life Sciences. "Together, we are not only advancing medical imaging, but also helping deliver more accessible and better-quality patient care in a very resource-constrained environment."

The industry continues to see rapid advancements in generative AI in radiology and other imaging specialties. With these advances comes an even greater responsibility to prioritize patient privacy and build systems guided by Microsoft's Responsible AI principles. In addition to building our own AI systems responsibly and in ways that warrant people's trust, we empower our customers with tools and features to do the same. We invest in our customers' responsible AI goals in three ways:

The collaborations with Mass General Brigham, UW and many other industry partners aim to accelerate the development of high-performing foundation models for medical imaging that support and enable the greater healthcare ecosystem in a way that adheres to Microsoft's responsible AI principles. Read more in our 2024 Responsible AI Transparency Report.

Microsoft (Nasdaq: MSFT; @microsoft) creates platforms and tools powered by AI to deliver innovative solutions that meet the evolving needs of our customers. The technology company is committed to making AI available broadly and doing so responsibly, with a mission to empower every person and every organization on the planet to achieve more.

[1] JAMA. 2012;307(22):2400-2409. doi:10.1001/jama.2012.5960

[2] Definitive Healthcare. Healthcare Analytics & Provider Data | Definitive Healthcare. Definitive Healthcare Database of Hospitals & Healthcare Providers, http://www.definitivehc.com

[3] Subject to appropriate regulatory approvals.

For more information, press only:

Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777, [email protected]

Note to editors: For more information, news and perspectives from Microsoft, please visit Microsoft Source at https://news.microsoft.com/source. Web links, telephone numbers and titles were correct at time of publication but may have changed. For additional assistance, journalists and analysts may contact Microsoft's Rapid Response Team or other appropriate contacts listed at https://news.microsoft.com/microsoft-public-relations-contacts.

See original here:

Microsoft collaborates with Mass General Brigham and University of Wisconsin–Madison to further advance AI foundation models for medical imaging -...

NSF announces new AI test beds initiative to advance safety and security of AI technologies – National Science Foundation (.gov)

The U.S. National Science Foundation announces the launch of a new initiative that will invest in the development of artificial intelligence-ready test beds, a critical infrastructure designed to propel responsible AI research and innovation forward. These test beds, or platforms, will allow researchers to study new AI methods and systems in secure, real-world settings. The initiative calls for planning grants from the research community to accelerate the development of the test beds.

The initiative is aligned with Executive Order 14110 on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," signed in October 2023, underscoring the importance of creating a robust ecosystem for AI development that prioritizes safety, security and trustworthiness. The executive order emphasizes the extraordinary potential of AI to address urgent societal challenges while also highlighting the risks associated with possible irresponsible use. Recognizing this dual potential, NSF's AI-ready test beds initiative is set to enhance and advance the essential infrastructure needed for researchers to develop, test and refine responsible AI systems in real-world settings.

"Artificial intelligence holds incredible promise for advancing numerous fields, but its development must be guided by rigorous testing and evaluation in applications that involve decisions about, or contact with, people in the real world," said NSF Director Sethuraman Panchanathan. "With this initiative, NSF is demonstrating its commitment to innovate in AI and also ensure that those innovations are safe, secure and beneficial to society and our economy."

AI-ready test beds create an environment where AI researchers can deploy and assess the impact of their work and study the societal and economic impacts of AI-powered solutions, including various risks related to security, safety, privacy and fairness. For example, an AI-ready test bed may enable a researcher to evaluate a new AI solution for decision-making in a transportation scenario, or a test bed could allow an AI researcher to create new weather models and visualizations and assess them with meteorologists in the field. The infrastructure allows the researcher to innovate safely and collect real-world evidence that is beneficial to the intended users.

Projects funded by the initiative will lay the groundwork for providing researchers with scalable, real-world environments to test novel AI methods and their impacts. These test beds will support interdisciplinary collaborations, bringing together private AI laboratories, academia, civil society and third-party evaluators to support the design, development and deployment of AI systems, including associated privacy-enhancing technologies.

The initiative will offer planning grants to cultivate research teams that actively address the expansion or enhancement of an existing test bed to evaluate the impact on and interaction with users of novel AI methods. These grants will facilitate the collection of preliminary data, team formation, design efforts and the development of governance and management plans for scalable AI-ready test beds.

NSF encourages proposal submissions from institutions in the Established Program to Stimulate Competitive Research (EPSCoR) jurisdictions and collaborative proposals led by NSF EPSCoR institutions. This approach aims to engage a wide array of perspectives and scientific talent in addressing national AI research challenges and opportunities.

To learn more, read the Dear Colleague Letter: https://www.nsf.gov/pubs/2024/nsf24111/nsf24111.jsp

Read more:

NSF announces new AI test beds initiative to advance safety and security of AI technologies - National Science Foundation (.gov)

PLTR Stock Outlook: Is Palantir's AI Hype Worth the Premium? – InvestorPlace

No technology company of our time inspires as much hope and fear as Palantir (NASDAQ:PLTR). This is true for its technology, a deep learning database used primarily by the military. It's also true for PLTR stock and its prospects.

Bulls see Palantir worth $50 per share, bears barely $10. (It was selling at $26 on July 24.)

When it comes to technology, optimists see Palantir as a revolutionary defender of freedom and a magic bullet for productivity. Pessimists see Palantir as an overhyped, dangerous, authoritarian scam.

I believe it's none of those things. I also believe it's a speculative buy for a young investor.

Let's take a closer look.

Palantir happily took the label of Artificial Intelligence after ChatGPT arrived. At the time, the stock was selling for under $8 and the company was struggling to define itself as a data analysis company.

It's more of a Machine Internet company. It combines what's known about all your assets and offers strategic insights into deploying them, in real time.

The Pentagon loves it. Every month, it seems, Palantir is bagging another major contract, with more secret information being made available to more people. The latest is a $480 million, five-year deal for Maven, fusing data from intelligence, surveillance and reconnaissance systems.

Palantir's software can identify and optimize what our side has while identifying and strategizing against what the other side has. Dan Ives of Wedbush sees it as worth $50 per share.

There are also civilian applications, for both government and commercial accounts. The company has strong relationships with both Oracle (NASDAQ:ORCL) and Microsoft (NASDAQ:MSFT). Its ability to coordinate hospital work won it a deal with England's National Health Service last year.

The pessimistic view starts with Palantir being primarily a military contractor.

Most military contractors have limited growth but are highly profitable because it's difficult to get out of the military box. Palantir grew just 17% last year and has only been marginally profitable for about a year. While it was earning profits last year, it also had very negative cash flow, $1.78 billion worth.

Palantir's selling point with the military is that it is a highly proprietary system. That's great if you're in the secrets business. It's not so great if you're a hospital, or if something is broken and you need to fix it.

While other defense software contractors sell for 13-15 times sales, Palantir sells for closer to 26 times sales, even amid the latest sell-off. It's also vulnerable to what Gartner (NYSE:IT) calls the "trough of disillusionment," the realization that AI may not fully justify the current hype.
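
For context on that multiple: price-to-sales is simply market capitalization divided by trailing twelve-month revenue. The Python sketch below uses hypothetical round figures chosen only to illustrate the gap between a mid-teens and a mid-20s multiple; they are not Palantir's reported numbers.

```python
# Price-to-sales (P/S) multiple: market capitalization divided by
# trailing twelve-month (TTM) revenue. The figures below are
# hypothetical placeholders, not Palantir's actual financials.
def price_to_sales(market_cap: float, ttm_revenue: float) -> float:
    return market_cap / ttm_revenue

# A $52B market cap on $2B of trailing sales trades at 26x sales.
print(price_to_sales(52e9, 2e9))  # -> 26.0

# The same revenue at a $28B market cap would be 14x, roughly the
# defense-software range cited above.
print(price_to_sales(28e9, 2e9))  # -> 14.0
```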

CEO Alex Karp, who despite doing a good job seems overpaid at $1.1 billion, brags about Palantir's commercial revenue growth in his most recent stockholder letter, but it's still just 24% of the business. Palantir remains, and likely will always remain, a military-first company. That's why analysts have been saying it is priced to perfection. That's analyst-speak for limited upside.

Most AI companies remain focused on the interface between people and data. I like the fact that Palantir is focused on the interface between machines and data.

It's this interface that gives Palantir value and should give speculators at sites like Stocktwits hope. The best AI systems today aren't focused on replacing people so much as doing what people can't. People can't yet penetrate the fog of war.

It's by sticking to a clear, coherent strategy that the best companies, and generals, win. Palantir has that. The question is whether it has enough runway, earned serving the war machine, to justify its valuation.

This depends on its ability to grow the commercial side of the business. Look closely at those numbers when it next reports on Aug. 5. If they're good, go long.

On the date of publication, the responsible editor did not have (either directly or indirectly) any positions in the securities mentioned in this article.

As of this writing, Dana Blankenhorn had a LONG position in MSFT. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Dana Blankenhorn has been a financial and technology journalist since 1978. He is the author of Technology's Big Bang: Yesterday, Today and Tomorrow with Moore's Law, available at the Amazon Kindle store. Write him at danablankenhorn@gmail.com, tweet him at @danablankenhorn, or subscribe to his free Substack newsletter.

See the article here:

PLTR Stock Outlook: Is Palantir's AI Hype Worth the Premium? - InvestorPlace

NIH findings shed light on risks and benefits of integrating AI into medical decision-making – National Institutes of Health (NIH) (.gov)

News Release

Tuesday, July 23, 2024

AI model scored well on medical diagnostic quiz, but made mistakes explaining answers.

Researchers at the National Institutes of Health (NIH) found that an artificial intelligence (AI) model solved medical quiz questions, designed to test health professionals' ability to diagnose patients based on clinical images and a brief text summary, with high accuracy. However, physician-graders found the AI model made mistakes when describing images and explaining how its decision-making led to the correct answer. The findings, which shed light on AI's potential in the clinical setting, were published in npj Digital Medicine. The study was led by researchers from NIH's National Library of Medicine (NLM) and Weill Cornell Medicine, New York City.

"Integration of AI into health care holds great promise as a tool to help medical professionals diagnose patients faster, allowing them to start treatment sooner," said NLM Acting Director Stephen Sherry, Ph.D. "However, as this study shows, AI is not advanced enough yet to replace human experience, which is crucial for accurate diagnosis."

The AI model and human physicians answered questions from the New England Journal of Medicine (NEJM) Image Challenge. The challenge is an online quiz that provides real clinical images and a short text description that includes details about the patient's symptoms and presentation, then asks users to choose the correct diagnosis from multiple-choice answers.

The researchers tasked the AI model to answer 207 image challenge questions and provide a written rationale to justify each answer. The prompt specified that the rationale should include a description of the image, a summary of relevant medical knowledge, and step-by-step reasoning for how the model chose the answer.
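
As an illustration only (not the study's actual code), here is a minimal sketch of how such a prompt could be issued to GPT-4V through the OpenAI Python SDK. The vignette text, answer options, image URL, and model identifier are all placeholders; the requested rationale structure simply mirrors the description above.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is configured in the environment

# Hypothetical multiple-choice vignette; the real study drew its cases from
# the NEJM Image Challenge.
question = (
    "A patient presents with the findings shown in the image. "
    "Which of the following is the most likely diagnosis?\n"
    "A) ... B) ... C) ... D) ... E) ..."
)

# Rationale structure mirroring the prompt described in the article:
# image description, relevant medical knowledge, step-by-step reasoning.
rationale_request = (
    "Choose one option. Then justify your choice with: "
    "(1) a description of the image, "
    "(2) a summary of relevant medical knowledge, and "
    "(3) step-by-step reasoning for how you reached the answer."
)

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # GPT-4V; the exact model name depends on the deployment
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": question + "\n\n" + rationale_request},
            {"type": "image_url", "image_url": {"url": "https://example.org/case-image.png"}},  # placeholder image
        ],
    }],
    max_tokens=800,
)

print(response.choices[0].message.content)
```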

Nine physicians from various institutions, each with a different medical specialty, were recruited and answered their assigned questions first in a closed-book setting (without referring to any external materials such as online resources) and then in an open-book setting (using external resources). The researchers then provided the physicians with the correct answer, along with the AI model's answer and corresponding rationale. Finally, the physicians were asked to score the AI model's ability to describe the image, summarize relevant medical knowledge, and provide its step-by-step reasoning.

The researchers found that the AI model and physicians scored highly in selecting the correct diagnosis. Interestingly, the AI model selected the correct diagnosis more often than physicians in closed-book settings, while physicians with open-book tools performed better than the AI model, especially when answering the questions ranked most difficult.

Importantly, based on physician evaluations, the AI model often made mistakes when describing the medical image and explaining its reasoning behind the diagnosis, even in cases where it made the correct final choice. In one example, the AI model was provided with a photo of a patient's arm with two lesions. A physician would easily recognize that both lesions were caused by the same condition. However, because the lesions were presented at different angles, causing the illusion of different colors and shapes, the AI model failed to recognize that both lesions could be related to the same diagnosis.

The researchers argue that these findings underscore the importance of evaluating multi-modal AI technology further before introducing it into the clinical setting.

"This technology has the potential to help clinicians augment their capabilities with data-driven insights that may lead to improved clinical decision-making," said NLM Senior Investigator and corresponding author of the study Zhiyong Lu, Ph.D. "Understanding the risks and limitations of this technology is essential to harnessing its potential in medicine."

The study used an AI model known as GPT-4V (Generative Pre-trained Transformer 4 with Vision), which is a multimodal AI model that can process combinations of multiple types of data, including text and images. The researchers note that while this is a small study, it sheds light on multi-modal AI's potential to aid physicians' medical decision-making. More research is needed to understand how such models compare to physicians' ability to diagnose patients.

The study was co-authored by collaborators from NIHs National Eye Institute and the NIH Clinical Center; the University of Pittsburgh; UT Southwestern Medical Center, Dallas; New York University Grossman School of Medicine, New York City; Harvard Medical School and Massachusetts General Hospital, Boston; Case Western Reserve University School of Medicine, Cleveland; University of California San Diego, La Jolla; and the University of Arkansas, Little Rock.

The National Library of Medicine (NLM) is a leader in research in biomedical informatics and data science and the world's largest biomedical library. NLM conducts and supports research in methods for recording, storing, retrieving, preserving, and communicating health information. NLM creates resources and tools that are used billions of times each year by millions of people to access and analyze molecular biology, biotechnology, toxicology, environmental health, and health services information. Additional information is available at https://www.nlm.nih.gov.

About the National Institutes of Health (NIH): NIH, the nation's medical research agency, includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases. For more information about NIH and its programs, visit http://www.nih.gov.

NIH...Turning Discovery Into Health

Qiao Jin, et al. Hidden Flaws Behind Expert-Level Accuracy of Multimodal GPT-4 Vision in Medicine. npj Digital Medicine. DOI: 10.1038/s41746-024-01185-7 (2024).

###

See the article here:

NIH findings shed light on risks and benefits of integrating AI into medical decision-making - National Institutes of Health (NIH) (.gov)