Archive for the ‘Artificial Intelligence’ Category

LVHN to Implement Artificial Intelligence Software Into Radiology … – LVHN News

Aidoc

This AI technology will immediately read all imaging studies and look for three emergent conditions:

These conditions have the potential to be life-threatening. If any of them is identified, Aidoc's technology promptly notifies the radiology team. This urgent notification lets a radiologist prioritize and promptly confirm the relevant study, which is vital for maximizing patient safety.

In addition to prioritizing the study's read, if a pulmonary embolism is detected, Aidoc's AI notifies the hospital's pulmonary embolism response team so the patient can immediately receive multidisciplinary, lifesaving care.

Aidoc's solution also provides access to a leading-edge AI algorithm (Riverain) for detecting lung nodules (potentially precancerous lesions), which increases accuracy and gives radiologists instant decision support.

This software will automatically generate study impressions from a radiologist's dictation. It will also automatically compile follow-up guidelines from national medical organizations for incidental findings (lung nodules, thyroid nodules, adrenal lesions, kidney cysts, enlarged lymph nodes and more) and insert them into reports, standardizing recommendations for additional care and evaluation. This reduces manual data entry for radiologists and allows them to focus on higher-level duties that improve patient safety and quality.

When incidental findings like these are identified, this software will automatically send follow-up recommendations to the patient and their referring clinician, and will regularly check in with both until the follow-up appointments or tests are scheduled.

Follow this link:
LVHN to Implement Artificial Intelligence Software Into Radiology ... - LVHN News

‘Day of AI’ Spurs Classroom Discussions on Societal Impacts of … – Education Week

Several thousand students worldwide participated in the second annual Day of AI on May 18, yet another sign of artificial intelligence's growing significance to schools.

"It's been a year of extraordinary advancements in AI, and with that comes necessary conversations and concerns about who and what this technology is for," said event organizer Cynthia Breazeal, director of the Responsible AI for Social Empowerment and Education (RAISE) initiative at the Massachusetts Institute of Technology.

Americas K-12 schools are already using artificial intelligence for everything from personalizing student learning to conducting classroom observations, as Education Week described in a special report earlier this month. A coalition of influential groups such as Code.org and the Educational Testing Service recently launched an effort to help schools and state education departments integrate artificial intelligence into curricula, and the International Society for Technology in Education has made related learning opportunities available to students and teachers alike.

The RAISE initiative at MIT builds on those efforts by offering free classroom lessons on such topics as "What Can AI Do?" and "ChatGPT in School." Overall, said MIT doctoral student Daniella DiPaola, who helped develop the Day of AI curriculum, the approach is to weave ethical, social, and policy considerations throughout technical explanations. Central to that aim is fostering discussion of the Blueprint for an AI Bill of Rights released by the White House's Office of Science and Technology Policy (OSTP) in late 2022.

"We want to make sure societal impact is part of the process," DiPaola said.

That's exactly what the White House hoped to spur, said Marc Aidinoff, who helped lead the creation of the Bill of Rights during his time as OSTP's chief of staff. Aidinoff spent the Day of AI working with a group of Massachusetts middle and high school students, debating potential legislation for regulating the use of artificial intelligence in schools.

"Unlike the adults who talk about AI as this unknowable, all-powerful thing and let their fear take over, the students all treated AI as a knowable thing that's complicated, but we can take action on," he said afterward.

Aidinoff said he particularly appreciated the MIT RAISE initiative's focus on engaging artificial intelligence as a potentially helpful companion rather than a threat or silver-bullet solution. One benefit of that approach, he said, is an emphasis on considering specific use cases and threats rather than being paralyzed by amorphous fears. Thinking about how AI can best support humans also encourages discussions of general themes and principles, such as fairness, that teachers are already accustomed to exploring with their students.

That sentiment was echoed by Kristen Thomas Clarke, a literacy and information technology teacher at the private Media-Providence Friends School in Pennsylvania. Now in her eighth year at the school, Thomas Clarke said she's long mixed digital citizenship and media literacy activities into her lessons on coding and robotics. But in the wake of ChatGPT's emergence this year, she and her head of school decided that a broader, school-wide discussion of artificial intelligence was warranted.

That included use of MIT's curriculum, which Thomas Clarke praised as highly interactive and effective at helping students see both the promise and the potential pitfalls of AI, including discrimination that can result from biased training data.

But the most important impact, she said, was on the adults at her school.

"I think our initial reaction [to ChatGPT] was maybe a little bit of fear, like what are the kids going to do with this?" Thomas Clarke said. "But now I think of it more in terms of enhancing their knowledge than doing their homework for them."

See the original post:
'Day of AI' Spurs Classroom Discussions on Societal Impacts of ... - Education Week

Will artificial intelligence replace doctors? – Harvard Health

Q. Everyone's talking about artificial intelligence, and how it may replace people in various jobs. Will artificial intelligence replace my doctor?

A. Not in my lifetime, fortunately! And the good news is that artificial intelligence (AI) has the potential to improve your doctor's decisions, and to thereby improve your health if we are careful about how it is developed and used.

AI is a mathematical process that tries to make sense of massive amounts of information. So it requires two things: the ability to perform mathematical computations rapidly, and huge amounts of information stored in electronic form (words, numbers, and pictures).

When computers and AI were first developed in the 1950s, some visionaries described how they could theoretically help improve decisions about diagnosis and treatment. But computers then were not nearly fast enough to do the computations required. Even more important, almost none of the information the computers would have to analyze was stored in electronic form. It was all on paper. Doctors' notes about a patient's symptoms and physical examination were written (not always legibly) on paper. Test results were written on paper and pasted into a patient's paper medical record. As computers got better, they started to relieve doctors and other health professionals of some tedious tasks, like helping to analyze images: electrocardiograms (ECGs), blood samples, x-rays, and Pap smears.

Today, computers are literally millions of times more powerful than when they were first developed. More important, huge amounts of medical information are now in electronic form: the medical records of millions of people, the results of medical research, and the growing knowledge about how the body works. That makes the use of AI in medicine feasible.

Already, computers and AI have made powerful medical research breakthroughs, like predicting the shape of most human proteins. In the future, I predict that computers and AI will listen to conversations between doctor and patient and then suggest tests or treatments the doctor should consider; highlight possible diagnoses based on a patient's symptoms, after comparing that patient's symptoms to those of millions of other people with various diseases; and draft a note for the medical record, so the doctor doesn't have to spend time typing at a computer keyboard and can spend more time with the patient.

All of this will not happen immediately or without missteps: doctors and computer scientists will need to carefully evaluate and guide the development of new AI tools in medicine. If the suggestions AI provides to doctors prove to be inaccurate or incomplete, that "help" will be rejected. And if AI then does not get better, and fast, it will lose credibility. Powerful technologies can be powerful forces for good, and for mischief.

Read more:
Will artificial intelligence replace doctors? - Harvard Health

Health Tech Startup Suki Is Using Artificial Intelligence To Make Patient Records More Accessible To Every Doctor – Forbes


On its website, healthcare tech startup Suki AI touts its Suki Speech Platform as "the most intelligent and responsive voice platform in healthcare." The company builds software intended to help doctors complete patient documentation in electronic health records (EHRs) more easily and efficiently. The idea is simple: by making charting faster and more accessible (and this is accessibility too, especially for doctors with certain conditions of their own), physicians can shift their energy from the bureaucratic side of medicine to the actual practice of the profession. After all, doctors spend a king's ransom on medical school to help people, not to push pencils on their behalf.

In a press release issued this week, the Bay Area-based company announced a partnership with EHR maker Epic that entails deep integration of Suki's AI-powered voice assistant with Epic's records technology. Suki notes its eponymous Suki Assistant helps clinicians complete time-consuming administrative tasks by voice, and it recently announced the ability to generate clinical notes by ambiently listening to a patient-clinician conversation; the integration enables notes to be sent back to Epic automatically, updating the relevant sections.

"Ambient documentation holds great promise for reducing administrative burden and clinician burnout, and we are delighted to work with Epic to deliver a sophisticated, easy-to-use solution to its client base," said Suki CEO Punit Soni in a prepared statement. "Suki Assistant represents the future of AI-powered voice assistants, and we are thrilled that it is integrated with Epic through its ambient APIs."

In an interview with me conducted over email ahead of the announcement, Soni explained that Suki's mission is to make healthcare tech invisible and assistive so that clinicians can focus on patient care. The conduit through which Soni and his team accomplish that mission is their core product, the Suki Assistant. According to Soni, the company's origin story began when he spotted a big hole in the health tech market. Clinician burnout, he said, continues to be a major problem in the industry as society reckons with a pandemic-addled world. To that point, Soni cited a statistic from a recent study that found 88% of doctors don't recommend their profession to their children. Soni sees that sobering reality as indicative of societal and financial problems. "I believe that when utilized properly, AI and voice technologies can transform healthcare and help relieve administrative burdens," he said. "Suki has spent years investing in our technology to develop a suite of solutions that reduce burnout, improve the quality of care, and increase [the return on investment] for healthcare systems."

When asked how the Suki Assistant works at a technical level, Soni told me it's the only product on the market that integrates with commonly used EHRs like Epic to create a seamless workflow for physicians. He went on to tell me the company has used generative AI and large language models in training the Suki software; one of the team's overarching goals was to build an assistant that could (reasonably) understand natural language. The team didn't want people to have to memorize some rote syntax, akin to interacting with a pseudo-sentient command line. Clinicians can ask questions like "Who's my next patient?" or "Suki, what's my schedule?" Moreover, users can dictate notes to the Assistant and ask it to show a list of a patient's allergies. "Our goal is to make Suki as intuitive and easy to use as possible, and we use the latest technologies in voice and AI to do so," Soni said. "Using Suki should be as easy as picking up a phone, opening the app, and speaking naturally to it. There's a lot of tech under the hood to enable that experience."
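To make the contrast Soni draws concrete, here is a deliberately simplified sketch of the difference between rote command syntax and a natural-language query router. This is not Suki's implementation (Suki uses generative AI and large language models, not keyword rules); every function and intent name below is hypothetical, for illustration only.

```python
# Toy sketch of intent routing for clinical voice queries.
# NOT Suki's actual system; all names here are invented for illustration.

def route_intent(query: str) -> str:
    """Map a natural-language query to a coarse intent label."""
    q = query.lower()
    if "next patient" in q:
        return "show_next_patient"
    if "schedule" in q:
        return "show_schedule"
    if "allerg" in q:          # matches "allergy" / "allergies"
        return "list_allergies"
    # Default: treat free-form speech as note dictation.
    return "dictate_note"

print(route_intent("Who's my next patient?"))    # show_next_patient
print(route_intent("Suki, what's my schedule?")) # show_schedule
```

The point of the sketch is what it cannot do: keyword rules break on any phrasing the author did not anticipate, which is why an assistant that must handle unconstrained clinician speech is trained on large language models instead.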

The dots between AI, healthcare, and accessibility are easy to connect. For one thing, as I alluded to in the lede, it's certainly plausible for a doctor to have a physical condition (carpal tunnel, for instance) that makes administrative work like updating charts not merely a matter of drudgery, but of disability as well. Maybe using a pen or pencil for even a few minutes causes the carpal tunnel to flare up, not to mention the eye strain and fatigue that could conceivably surface. Suki clearly doesn't position anything it builds expressly for accessibility, yet it's obvious the Suki Assistant has as much relevance as an assistive technology as more consumer-facing digital butlers like Siri and Alexa. The bottom line, at least in this context, is that many doctors will not only work better if they use Suki to maintain patient records; they'll feel better too, as a side effect of doing their jobs more efficiently.

Feedback on the Suki Assistant, Soni said, has been "really positive." He cited a large healthcare system that uses Epic as its health records provider being amazed at how well Suki pulls up schedules and integrates with Epic's software. He also noted people's pleasure with Suki's ambient note-taking capability. All told, Soni said people in the field are immensely enjoying the Suki tech in their day-to-day lives, adding that they appreciate the freedom and flexibility Suki offers because now they can do their notes [and more] anywhere they have their phone; they don't have to be in front of their computers anymore.

Ultimately, what Soni and his team have done is harness AI to do genuine good in the world by making record-keeping not simply more efficient but more accessible too, in a way not dissimilar to how Apple's just-announced Personal Voice and Point and Speak accessibility features change the usability game. As Soni explained, artificial intelligence and machine learning are just tech: soulless, inanimate, inhuman.

"By itself, [AI] doesn't solve anything," he said.

Soni continued: "Suki's primary value is that every pixel in the company is [created] in service of the clinician. That culture is what makes us different. Anyone can build a product, but the special sauce that makes it useful is empathy. That is the magic that is a key part of Suki."

Looking ahead, Soni is tantalized by the possibilities for his work.

"Our mission is to make healthcare technology invisible and assistive so clinicians can focus on what they love: patient care. We want to be able to help every clinician who needs more time back, and we are just scratching the surface of what we can do," he said of his company's future. "There are so many potential applications of our technology, from simplifying the orders process to helping nurses complete their tasks by voice to enabling clinicians to answer patient portal messages by voice. We have an ambitious, exciting roadmap of features we're working on, and I can't wait to show this work to the world."

Steven is a freelance tech journalist covering accessibility and assistive technologies, based in San Francisco. His work has appeared in The Verge, TechCrunch, and Macworld. He's also appeared on podcasts, NPR, and television.

See the original post:
Health Tech Startup Suki Is Using Artificial Intelligence To Make Patient Records More Accessible To Every Doctor - Forbes

Reviving the Past with Artificial Intelligence – Caltech

While studying John Singer Sargent's paintings of wealthy women in 19th-century society, Jessica Helfand, a former Caltech artist in residence, had an idea: to search census records to find the identities of those women's servants. "I thought, 'What happens if I paint these women in the style of John Singer Sargent?' It's a sort of cultural restitution," Helfand explained, "reverse engineering the narrative by reclaiming a kind of beauty, style, and majesty."

To recreate a style from history, she turned to technology that, increasingly, is driving the future. "Could AI help me figure out how to paint, say, lace or linen, to capture the folds of clothing in daylight?" Helfand discussed her process in a seminar and discussion moderated by Hillary Mushkin, research professor of art and design in engineering and applied science and the humanities and social sciences. The event, part of Caltech's Visual Culture program, also featured Joanne Jang, product lead at DALL-E, an AI system that generates images based on user-supplied prompts.

While DALL-E has a number of practical applications, from urban planning to clothing design to cooking, the technology also raises new questions. Helfand and Jang spoke about recent advancements in generative AI, ethical considerations when using such tools, and the distinction between artistic intelligence and artificial intelligence.

More here:
Reviving the Past with Artificial Intelligence - Caltech