Archive for the ‘Artificial Intelligence’ Category

Artificial intelligence, possible recession driving record fraud rates … – Fox Business


According to a new report, artificial intelligence (AI), a possible recession and a return to pre-pandemic activity are driving record fraud rates across the globe.

Pindrop, a global leader in voice technology, has released its annual Voice Intelligence & Security Report following an analysis of 5 billion calls and 3 billion fraud catches.

Fraud reports typically climb during economic downturns, and the report claims historical data suggest that insurance claims and fraud will skyrocket in 2023.



With the pandemic winding down and economic conditions shifting, fraudsters have shifted focus away from government payouts and back to more traditional targets, such as contact centers.

But fraudsters are using new tactics to attack their old marks, including the use of personal user data acquired from the dark web, new AI models for synthetic audio generation and more. These factors have led to a 40% increase in fraud rates against contact centers in 2022 compared to the year prior.

The report found that fraudsters leveraging fast-learning AI models to create synthetic audio and content have already led to far-reaching consequences in the world of fraud. Although deepfakes and synthetic voices have existed for nearly 30 years, bad actors have made them more persuasive by pairing the tech with smart scripts and conversational speech.

Recently, Vice News used a synthetically generated voice, built with tools from ElevenLabs, to utter the fixed passphrase "My voice is my password" and was able to bypass the voice authentication system at Lloyds Bank.



Arizona mother Jennifer DeStefano recounted a terrifying experience when phone scammers used AI technology to make her think her teenage daughter had been kidnapped.

The call came amid a rise in "spoofing" schemes in which fraudsters use voice cloning technology to claim they have kidnapped loved ones and demand ransom money.

But Pindrop says these technologies are not frequently used on the average citizen or consumer; rather, they are deployed in spearphishing schemes against high-profile targets, like CEOs and other C-suite executives.

For example, a bad actor or team of fraudsters could use a CEO's voice to ask another executive to wire millions of dollars for a fake offer to buy a company.

"It's actually the voice of the CEO, even in the case of the CEO having an accent, or even in the case that the CEO doesn't have public-facing audio," Pindrop Co-Founder and CEO Vijay Balasubramanian told Fox News Digital.

This voice audio is typically obtained from private recordings and internal all-hands messages.

Pindrop notes that such tech could become more pervasive and help amplify other established fraud techniques.



These include large-scale vishing and smishing efforts, victim social engineering, and interactive voice response (IVR) reconnaissance. These tactics have caused lasting damage to brand reputations and driven consumers away, according to Pindrop, resulting in the loss of billions of dollars.

Since 2020, data breaches have affected more than 300 million victims, and data compromises are at an all-time high: in both 2021 and 2022, more than 1,800 incidents were reported each year.

"It always starts with reconnaissance," Balasubramanian said.

IVR is the system companies use to guide callers through their call center: press one for billing information, for example, or press two for your balance. These systems have become more conversational because of AI.
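The menu logic described above amounts to a simple keypress dispatch. This toy Python sketch is purely illustrative (the menu options and messages are assumptions, not taken from any real IVR product):

```python
# Toy sketch of a touch-tone IVR menu: map a keypress to a handler.
# All names and messages are illustrative; real IVR platforms are far richer.

def billing_info():
    return "Routing you to billing information..."

def account_balance():
    return "Your account balance is $120.50."

MENU = {
    "1": billing_info,     # "press one for billing information"
    "2": account_balance,  # "press two for your balance"
}

def handle_keypress(key: str) -> str:
    handler = MENU.get(key)
    if handler is None:
        return "Sorry, I didn't recognize that option."
    return handler()

print(handle_keypress("2"))  # -> Your account balance is $120.50.
```

The reconnaissance Pindrop describes works precisely because the "recognized" and "not recognized" branches of such a menu respond differently.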

CHATGPT BEING USED TO WRITE MALWARE, RANSOMWARE: REPORTS

A person receives a potential spam phone call on their cell phone. (iStock / iStock)

"They're taking a Social Security number that they have, and they will go to every single bank and punch in that Social Security number. And the response of that system is one of two things: 'I don't recognize what that is,' or, 'Hey, welcome, thank you for being a valued customer. Your account balance is X,'" Balasubramanian said.

After acquiring all this account information, fraudsters target the accounts with the highest balances.

They then send the account holder a convincing message claiming there is a fraudulent charge, padded with information mined from the IVR systems by bots. The message asks the account holder to divulge further information, such as a credit card number or CVV, which finally lets the fraudster access the account and remove funds.

To prevent scams using synthetic voices, pitch manipulation and replay attacks, Pindrop says companies must be capable of detecting voice liveness, in sync with automatic speech recognition (ASR) and audio analytics that determine the speaker's environment and contextual audio.

GET FOX BUSINESS ON THE GO BY CLICKING HERE

Unfortunately, research suggests that consumers in states that impose enhanced restrictions on the use of biometrics (such as California, Texas, Illinois and Washington) are twice as likely to experience fraud. While these states enact such laws to protect consumer data, the legislation often makes no exception for company cybersecurity measures, which need voice analytics to adequately protect company and consumer data.

"If I target a consumer from those states, they most likely don't have advanced analytics performed on the voice; they are not looking for deepfakes. They are not checking if the voice is distorted," Balasubramanian said. "They aren't looking for any of that, so it's easy to steal money from those consumers."

Continue reading here:
Artificial intelligence, possible recession driving record fraud rates ... - Fox Business

The Artificial Intelligence Takeover Has Begun – The Greyhound

The following represents the opinion of the student reporter and does not represent the views of Loyola University Maryland, the Greyhound, or Loyola University's Department of Communication.

Large language models (LLMs) are here to stay. These are artificial intelligence systems that can generate natural, fluent and coherent text on any topic, given some input. They can also converse with humans, answer questions, write code and perform other tasks that require natural language understanding and generation.

Some of the most popular LLMs today are ChatGPT, Bard, and Bing. ChatGPT is developed by OpenAI, a research organization backed by tech luminaries like Elon Musk and Sam Altman. Bard is created by Google, based on its Language Model for Dialogue Applications (LaMDA). Bing is powered by Microsoft, using its own proprietary technology.

These LLMs have attracted millions of users who use them for various purposes, such as entertainment, education, productivity, and creativity. Some examples of how people use LLMs are:

Chatting with ChatGPT for fun, learning or companionship. ChatGPT can engage in casual conversations, tell jokes, stories and trivia, and even flirt with users. It can also adapt to different personalities and tones, depending on the user's preferences.

Using Bard to generate ideas, summaries and content. Bard can help users with writing tasks, such as drafting emails, blog posts, presentations and essays. It can also provide suggestions, feedback and insights on various topics and domains.

Leveraging Bing to search for information, answers and solutions. Bing can not only provide relevant web results, but also generate natural language responses that explain the results or provide additional details. It can also solve problems, such as math equations, puzzles and quizzes.

The benefits of using LLMs are manifold. They can save time, enhance creativity, improve communication and expand knowledge. They can also provide entertainment, comfort and support. However, there are also some challenges and risks associated with LLMs, such as:

The quality and reliability of the generated text. LLMs are not always accurate or factual, as they rely on probabilistic methods and large amounts of data that may contain errors or biases. Users need to be aware of the limitations and uncertainties of LLMs, and verify the information they provide.

The ethical and social implications of the generated text. LLMs may produce text that is harmful, offensive or inappropriate, either intentionally or unintentionally. Users need to be responsible and respectful when using LLMs, and avoid generating or spreading text that may cause harm or offense to others.

The security and privacy of the user data. LLMs may collect and store user data, such as queries, responses and preferences, for improving their performance or providing personalized services. Users need to be aware of the data policies and practices of the LLM providers, and protect their personal information and identity.

LLMs are a powerful and promising technology that can transform the way we interact with information and each other. They offer many opportunities and benefits for users who want to explore new possibilities and enhance their capabilities. However, they also pose some challenges and risks that require caution and awareness from users who want to use them safely and ethically.

I would like to give Microsoft Bing a special thanks for writing all of that for me.

Here is the prompt I fed it to get that response: "Write a NYT article about how LLMs like ChatGPT, Bard, and Bing are here to stay, and how people use them as a benefit." It took ten seconds to generate all of that text.

My Experience Using LLMs

I've spent some time using all three platforms listed above. Microsoft Bing and ChatGPT are both free for the public to use, but for Bard, you must join a waitlist. Of the three, ChatGPT-4 is the most advanced, followed by Bing and then Bard.

Bing is more informational and research-based, while ChatGPT and Bard are more conversational. You can feed them ludicrous tasks and they will fulfill them, as long as the request isn't deemed offensive. The most creative thing I got ChatGPT to do was pretend to be Barack Obama giving a speech about the pandemic, but structured as if it were written like the King James Version of the Bible. Needless to say, it's pretty remarkable. The current version of OpenAI's platform is called ChatGPT-4. It only has access to information from before September 2021.

Microsoft Bing is much different, and I have been using it for a couple of weeks now. You can access it through the Bing search engine; however, if you use Microsoft Edge as a browser, there is a dedicated button you can press. Bing is separated into three options: the Chat feature, the Compose feature and Insights. I created the snippet above using the Compose feature, which can generate anything from paragraphs to emails, with different tones and lengths. However, the feature I find myself using the most is the Chat feature.

The Chat feature works differently than ChatGPT. First, you select a tone of response: creative, balanced or precise. For my usage, I have been sticking with precise. Afterward, you simply enter whatever you want to know, and it will scour the internet and generate a response based on the sources it pulls. The sources can be accessed either by clicking on the text or by clicking the links at the bottom.

Bard by Google is like ChatGPT, except that it has access to the internet. However, it is by far the worst one. While it is able to generate responses faster than its competitors, the information is more often than not incorrect, and I have noticed that it has biases that can be considered problematic. Google says that Bard is experimental and a work in progress, and it is clear that this program certainly needs more work.

GPT in the Classroom

Based on their popularity, it is very clear that LLMs are here to stay. However, their quick rise in use has left educators scrambling to adapt. Some applications, like Turnitin, can detect writing created by artificial intelligence. Yet that may not dissuade some students from using it, so the question becomes: Should LLMs like ChatGPT be outright banned, or should educators learn to integrate their use into the classroom?

I believe that the latter is a much more practical answer for the classroom. ChatGPT and Bing should be treated as tools, just as search engines and databases are. As these programs are slowly being implemented into browsers and other services, to outright ban them would do more harm than good. But what is stopping students from using it to blatantly cheat? Trevor Oberlander '24 has a pessimistic view on this.

"College degrees are worthless now because of ChatGPT," he said. "If everyone is cheating, learning in class becomes redundant."

Oberlander, an economics major, says GPAs can no longer be considered a meaningful measure of schoolwork, since it is impossible to tell whether legitimate work was done to earn the number, or to compare effort between students who may or may not have done the work themselves.

Lily Tiger '24 is also skeptical about LLMs. As an English major, she is very worried about the future of education and job security with her degree.

"I went to a career fair and I asked if anyone had any positions for experience with research and writing skills," she said. "If that is what ChatGPT can do, then it makes me feel threatened."

Tiger is also considering a career path in high school education, so the prospect of a readily available and free answer machine makes her nervous for schooling in the future. She also believes that the education system needs to adapt or somehow integrate LLMs into the classroom.

"We can't see it as a fear, because it's already here. If we refuse to talk about it, that isn't a good idea. Teachers need to learn how to use this with their students as a resource," Tiger said.

What's in store for the future?

While ChatGPT is still a novel technology, the abilities of the artificial intelligence grew dramatically in the few months between the release of ChatGPT and GPT-4. In a recent interview on the Lex Fridman Podcast, OpenAI's CEO, Sam Altman, stated that ChatGPT "will make a lot of jobs just go away."

I came across a TikTok recently that showed how GPT was integrated into a software program that creates over 100 professional-grade headshots based on a photo the user submits. This is all done in mere hours. That right there is an industry killer, and professional photography is not the only industry affected so far. OpenAI recently announced that Shopify users will now be able to integrate a GPT-powered customer support representative into their online stores. Now the customer support industry has been flattened as well.

ChatGPT is slowly creeping into every facet of our daily lives. Spotify now has a ChatGPT-powered DJ that mimics a human DJ and plays music tailored to your taste. Quizlet now has a GPT study partner, which asks you surprisingly in-depth questions about flashcards the user has provided. Even Snapchat has released a premium feature called My AI, a virtual friend that users can communicate with as part of their Snapchat+ subscription. CNET, a company that publishes news about technology and consumer electronics, revealed earlier this year that it had used artificial intelligence to write dozens of its articles.

Needless to say, this phenomenon is not going away anytime soon. In fact, I would say it is here to stay. So what is the solution to the AI takeover? Unfortunately, I do not think there is an answer that would satisfy everyone. On one hand, LLMs are quickly proving to be an excellent resource for many. On the other, concerns about the ethics of using them are very real. And with Sam Altman outright saying that entire industries will be replaced by AI, the cause for concern is warranted.

A petition is currently circulating online, signed by the likes of Elon Musk and Apple co-founder Steve Wozniak, calling for a halt to AI development beyond GPT-4 for at least six months. The main tenet of this petition is that "powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."

The petition goes on to note that OpenAI recently stated, "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." The signers believe that point is now. They are worried about the competitive race in AI and the idea that we are quickly entering a stage of technological evolution where AI could become human-like.

I'm not one to toot the horn of someone like Elon Musk; however, we are clearly at a pivotal moment in the history of technology. What ChatGPT and other LLMs can do is incredible, but I feel that we as a society should tread lightly down the artificial intelligence road. Science fiction is full of stories about artificial intelligence takeovers. While I believe we are leagues away from that happening, we should still err on the side of caution.

Continued here:
The Artificial Intelligence Takeover Has Begun The Greyhound - The Greyhound

University World News: Artificial Intelligence Tools Offer … – Ole Miss News

ChatGPT in evaluation: An opportunity for greater creativity?

By Natalie Simon

As debate rages over the possibilities and risks to higher education of artificial intelligence tools such as ChatGPT, evaluators are also asking what role AI and machine learning can play in their field.

Speaking at a virtual symposium hosted by the Centre for Research Evaluation at the University of Mississippi in the United States on March 24, independent evaluation consultant Silva Ferretti described ChatGPT as "the perfect bureaucrat: pedantic and by the book."

The symposium, titled "Are We at a Fork in the Road?", explored implications and opportunities for AI in evaluation. It was hosted by Dr. Sarah Mason of the University of Mississippi and Dr. Bianca Montrosse-Moorhead of the University of Connecticut, co-editors of New Directions for Evaluation, a publication of the American Evaluation Association.

They said that disciplines around the world were grappling with the question of whether ChatGPT heralded a fork in the road with respect to powerful new generative AI. This potential fork emerges because generative AI is distinct from earlier AI models in that it can create entirely new content.

Read the complete report here.

Read more:
University World News: Artificial Intelligence Tools Offer ... - Ole Miss News

This is a war and artificial intelligence is more dangerous than a T-80 tank. Unlike a tank its in e… – The US Sun

A GERMAN magazine's world-exclusive interview with paralysed F1 legend Michael Schumacher. Fake.

A stunning photograph given first place and handed a prestigious Sony World Photography Award. Never taken.

And a banger of a new song called "Heart On My Sleeve" featuring Drake and The Weeknd dropped on streaming services. Never recorded.

Welcome to another crazy 24 hours in the world of artificial intelligence, where truth and disinformation collide.

Die Aktuelle, a weekly German gossip magazine, splashed the Schumacher "interview" across its cover, even though its content was actually created by an AI chatbot designed to respond as Schumacher might.

Berlin artist Boris Eldagsen revealed his photo submitted to a high-profile photography competition was dreamt up by artificial intelligence.

This came just after a new song purportedly by Drake was pulled from streaming services by Universal Music Group for infringing content created with generative AI.

These controversies followed on from provocative AI-generated images of France's President Emmanuel Macron being arrested and of an incandescent Donald Trump being manhandled by American police.

All beamed around the world to a believing audience.

That's not to mention a super-realistic shot of the Pope resplendent in a massive white puffer coat.

This one even fooled broadcaster and seasoned journalist Andrew Marr, as I found out in a recent conversation with him.

Such images are created by AI technology with the simple push of a button, with entire scenes generated from nothing.

The growing threat posed by generative artificial intelligence technologies is upon us.

Not long ago, it would have been simple to distinguish between real and fake images but it is now almost impossible to spot the difference.

The simplicity of producing these photographs, interviews, songs and, soon, videos means that platforms that don't put safeguards in place will be flooded with them.

These technologies and deepfakes are clear and present threats to democracy and are being seized upon by propagandist regimes to supercharge their agenda and drown out truth.

You could fake an entire political movement, for example.

This is a new war we need to fight, a war on artificial truth and the inequality of truth around the world.

It is time to restore trust. Soon, we will lose the ability to have reasonable online discourse if we can't have a shared sense of reality.

These forgeries are so sophisticated that millions of people globally could be simultaneously watching and believing a speech that Joe Biden never gave.

Nation states will have to reimagine how they govern in a world where their communication to the public will be, by default, disbelieved.

One of the biggest issues we have in social media is that content is user-uploaded and it is nearly impossible to track its origin.

Was the upload taken by an iPhone? Was it heavily Photoshopped? Was it a complete fabrication generated by AI? We don't know its veracity.

Information warfare is now a front, right alongside conventional warfare.

During the Ukraine conflict, we have been faced with a barrage of manipulated media.

There have been deepfake videos of President Zelensky where he says he is resigning and surrendering. It doesn't get more serious than that.

These are dangerous weapons which can have devastating consequences.

And unlike T-80 tanks, the weapons of this front are in everyones hands.

To counter all of this, a number of us computer scientists are creating technologies that help build trust.

Ours is FrankliApp.com, a content platform where we can definitively say that every piece of photography and video is not edited, faked or touched up in any way.
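One common building block behind such trust platforms is signing a cryptographic hash of the content at capture time, so any later edit is detectable. The sketch below is purely illustrative; the key handling and function names are assumptions, not FrankliApp's actual implementation, and real provenance standards (such as C2PA-style credentials) are far richer:

```python
# Toy sketch of content provenance: sign a file's hash at capture time,
# then verify later that the bytes are untouched. Illustrative only.
import hashlib
import hmac

SECRET_KEY = b"device-secret"  # in reality, a protected per-device signing key

def sign_capture(data: bytes) -> str:
    """Produce a provenance tag when the photo is captured."""
    digest = hashlib.sha256(data).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify(data: bytes, tag: str) -> bool:
    """Check that the content still matches the tag recorded at capture."""
    return hmac.compare_digest(sign_capture(data), tag)

photo = b"\x89PNG...raw image bytes..."
tag = sign_capture(photo)
print(verify(photo, tag))              # untouched content verifies: True
print(verify(photo + b"edit", tag))    # any edit breaks verification: False
```

The point of the design is that verification requires no judgment call: either the bytes match what the capture device signed, or they don't.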

We need more of this and the right regulation to ensure it happens.

As investor Ian Hogarth told Radio 4 yesterday: "There's currently more regulation on selling a Pret sandwich than there is in building super-intelligence."

AI companies should be forced to open source their models and allow anyone to check if a piece of content was created by their service.

We also need regulations that make platforms disclose a particular photo or video's digital provenance.

There is some precedent for this: France already requires disclosure of edited fashion photos. We need this in all sectors.

The conjured images of Trump, Macron and many others have now been seen and believed by millions worldwide on platforms that don't care whether what they are promoting is real or not.

That's just plain wrong.

The world needs a solution to this tsunami of distortion.

We must shine a light on the truth, and nothing but the truth, delivering authenticity in this age of disinformation.

See the rest here:
This is a war and artificial intelligence is more dangerous than a T-80 tank. Unlike a tank its in e... - The US Sun

Artificial intelligence makes its way into Nebraska hospitals and clinics – Omaha World-Herald

In November 2021, doctors at Midwest Gastrointestinal Associates in Omaha got what might be considered a new assistant.

Called GI Genius, the new computer-aided system was designed to help doctors performing colonoscopies identify in real time suspicious tissue that might be a polyp, or precancerous lesion in the colon.

The Medtronic device puts a little green box on any spot it thinks might be a polyp, using the same display screen a doctor is watching while navigating the colon's twists and turns and searching for suspicious spots.

Finding and removing the lesions is important because it decreases a patient's risk of developing colon cancer, said Dr. Jason Cisler, a gastroenterologist and the practice's quality chairman. Studies have shown that doctors find more polyps if they have two people looking at the screen.


After adopting the system, the group's already good adenoma detection rate (the rate at which doctors find and remove polyps during screening colonoscopies) went up 10% across the board, putting the practice at more than double the national standard. Every 1% increase in the detection rate, according to one study, decreases patients' risk of colon cancer by 3%.
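Taking the cited study's figure at face value, a back-of-the-envelope calculation shows what a 10-point rise in detection rate could mean for risk. This is a simplification (the study's exact model isn't given here), and the claim can be read two ways:

```python
# Rough estimate: if each 1-point rise in adenoma detection rate cuts
# colon-cancer risk ~3% (per the study cited above), what does a
# 10-point rise imply? Two simple readings of that claim:

points = 10        # the group's reported increase in detection rate
per_point = 0.03   # 3% risk reduction per 1-point increase

linear = points * per_point                # additive reading: ~30%
compound = 1 - (1 - per_point) ** points   # multiplicative reading: ~26%

print(f"additive estimate:       {linear:.1%}")
print(f"multiplicative estimate: {compound:.1%}")
```

Either way, the arithmetic suggests a substantial reduction in relative risk, which is the rationale Cisler gives for adopting the tool.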

"It makes it a more sensitive screening tool," Cisler said. "And what we're doing is screening. If we're able to prevent more colon cancer, that's the rationale for where we're at today."

The device, approved by the Food and Drug Administration in early 2021, uses a type of artificial intelligence. And it's just one of a number of technologies incorporating various forms of artificial intelligence that are already working behind the scenes in Nebraska hospitals and clinics. And with research and development underway around the world, there will be more.

Some are focused on flagging doctors about needed health screenings and identifying hospitalized patients at higher risk of being readmitted to the hospital or developing potentially life-threatening infections. Others monitor patients at risk of falling and analyze the impact of blockages in heart arteries on blood flow.

AI also is being used to take some mundane tasks off the plates of both clerical staff and health care providers, freeing them to do higher-level work.

Some Nebraska Medicine doctors are using a product called Dragon Ambient eXperience, or DAX, from a company called Nuance to capture conversations between themselves and patients and create notes in patients' charts, said Scott Raymond, the health system's chief information and innovation officer. The physician then reviews and accepts the notes. Some physicians' notes now are proving accurate, with no need for further human intervention, between 80% and 90% of the time.

"It's a great use of the technology," he said. "It's taking away physician burnout, the burden of documentation ... where (they) feel they're practicing medicine and not being documentation specialists."

Lincoln's Bryan Health plans to go live with the system in early May. "We think that will (be) a tremendous win for both our patients and our physicians," said Bridgett Ojeda, that system's chief information officer.

Raymond said Microsoft plans to put the artificial intelligence chatbot ChatGPT behind the next version of the program. ChatGPT, developed by OpenAI, has been making headlines around the world in recent months. Users would have to decide whether to adopt it.

Such technologies are making it a fun time to be in health care information technology, Ojeda said. Technologists have spent the last two decades getting information out of paper files and into electronic systems. Now AI and large language models like ChatGPT are allowing them to begin using that data to benefit patients.

Indeed, the authors of a 2022 report from the National Academy of Medicine on AI in health care said their hope is that AI will be the payback for investments in electronic systems.

They caution, however, that such systems could introduce bias if not carefully trained and create concerns about privacy and security.

Raymond acknowledged that standards and guardrails need to be put around the technology, particularly when it comes to the chatbots.

Ojeda noted that other challenges lie in having enough health care data and engineering experts to put the technology to work in ways that help rather than disrupt. With interest and investment in the sector high, they have to focus on selecting tools that will be sustainable and ultimately benefit patients.

But Dr. Steven Leitch, vice president of clinical informatics with CHI Health, stressed that humans, not machines, still are making the decisions.

"What would make it scary is if we don't make the human in charge," he said. "And that's not what health care is about. Doctors and nurses make decisions in health care. It's between people. These tools are amendments; they're only going to be assisting where we allow them to assist."

Raymond, who previously practiced as a pediatric intensive care nurse, said Nebraska Medicine and the University of Nebraska Medical Center are forming a committee to consider how the health system will use chatbot technology in research, education and clinical care.

"It's happening in medicine," he said. "It's happening slowly and carefully with a lot of thought behind it. I think it will change how we deliver care, and it will improve care. Our responsibility is to make sure we use the technology in the right way."

The term "artificial intelligence," however, implies that machines are reasoning the way humans do, he said. They're not, although they're good at gathering data, learning from it and starting to glean insights.

In actuality, Leitch said, what most people think of as artificial intelligence really is a broader category that includes a lot of different tools, including machine learning, robotic process automation and the chatbots' natural language processing. Even chatbots, however, aren't having independent thoughts but rather are running very complex sets of rules.

Cisler said the GI Genius system, also in place at Methodist Endoscopy Center, which is owned by Midwest and Methodist, has been trained on millions of images from colonoscopies and is constantly updated.

But the final word on whether what the system flags actually is a polyp rather than a bubble or fold in the colon lies with the doctor, he said.

Such systems, however, also can help sort patients in other ways, and in doing so, make it more likely they get the care they need.

Hastings Family Care in Hastings, Nebraska, part of Mary Lanning Healthcare, recently began using Eyenuk's EyeArt technology, a special camera connected to a computer backed by machine learning that allows providers to screen patients with diabetes for diabetic retinopathy without dilating their eyes.

Hastings Family Care in Hastings, Nebraska, a primary care clinic that's part of Mary Lanning Healthcare, is using a new device that uses a type of artificial intelligence to screen patients with diabetes for diabetic retinopathy, without dilating their eyes. It's one example of the kinds of artificial intelligence technologies that are already working behind the scenes in Nebraska hospitals and clinics.

People who have diabetes are advised to have their eyes checked once a year for the condition, which can cause vision loss and blindness. Early treatment can stop progression.

But Jessica Sutton, clinic manager, said a lot of diabetics don't get the annual exams, often due to a lack of vision insurance, transportation or time to get to an eye doctor. The clinic saw 980 patients with diabetes last year, 45% of whom had not had the exam. Funding for the equipment came through a local donor and a grant UNMC received to improve diabetic care in rural areas.

Dr. Zachary Frey, director of primary care, said he saw three such patients Wednesday morning. One didn't have insurance. The other two hadn't had an eye exam in a while. Having the device allows the clinic staff to catch such patients when they're already in the office.

Frey said the system essentially provides three results, each of which triggers next steps. If no problem is detected, the patient is cleared until the next year. If the scan shows changes suggesting retinopathy, the patient is referred to an eye doctor for further investigation. If it detects vision-threatening retinopathy, the patient is sent to a retina specialist.
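The three-result workflow Frey describes amounts to a small decision table: each screening outcome maps to a fixed next step. A minimal sketch in Python, with hypothetical result labels (these are illustrative and not EyeArt's actual output categories or API):

```python
# Hypothetical sketch of the three-result triage logic described above.
# The result labels and referral actions are assumptions for illustration.

def triage_eye_scan(result: str) -> str:
    """Map a screening result to the next step for the patient."""
    next_steps = {
        "no_retinopathy": "cleared until next annual screening",
        "retinopathy_detected": "refer to eye doctor for further investigation",
        "vision_threatening": "refer to retina specialist",
    }
    if result not in next_steps:
        raise ValueError(f"unknown screening result: {result!r}")
    return next_steps[result]

print(triage_eye_scan("retinopathy_detected"))
```

The point of the table is that every possible result triggers a defined follow-up, so no patient leaves the clinic without a next step on record.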

People who have diabetes are advised to have their eyes checked once a year for diabetic retinopathy, which can cause vision loss and blindness. Early treatment can stop progression. Here, Hastings Family Care is using a new device that uses a type of AI to screen patients with diabetes for the condition, without dilating their eyes.

The systems also can be used to keep patients from falling through the cracks in other ways.

Methodist, for instance, has several systems aimed at helping put additional eyes on lung scans.

One searches radiology reports from scans of, say, the abdomen, that incidentally catch part of the lung for key words like nodule. Those get sent to a team that determines whether there might be a problem, and if so, contacts the patients doctor, even those in other health systems, said Dr. Adam Wells, a pulmonologist with Methodist Physicians Clinic and Methodist Hospital.
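At its core, the incidental-finding search Wells describes is keyword matching over report text, with flagged reports routed to a review team. A minimal sketch, assuming plain-text reports and an illustrative keyword list (not the health system's actual rules):

```python
import re

# Hypothetical sketch of flagging radiology reports that incidentally
# mention lung findings; the keyword list is an assumption for illustration.
KEYWORDS = re.compile(r"\b(nodule|nodular|ground[- ]glass opacity)\b", re.IGNORECASE)

def flag_reports(reports: list[str]) -> list[int]:
    """Return indices of reports containing a keyword of interest."""
    return [i for i, text in enumerate(reports) if KEYWORDS.search(text)]

reports = [
    "CT abdomen: no acute findings.",
    "CT abdomen: incidental 6 mm nodule in the right lung base.",
]
print(flag_reports(reports))  # flags the second report: [1]
```

A flagged index would then go to the review team, which decides whether to contact the patient's doctor, as the article describes.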

That incidental nodule program flagged more than 13,000 scans last year, which triggered nearly 1,000 communications with a physician and ongoing follow-up with more than 700 patient scans, he said. Those identified nearly 30 cancers.

The health system also screens patients with a known risk for lung cancer using low-dose CT scans, Wells said. While radiologists read the scans, an AI program reads behind and categorizes any spots it sees. Nearly 20 cancers were identified last year out of more than 2,300 scheduled screening scans.

Cancer is a common focus. Locally, the Omaha-based MRI medical device company Bot Image, founded by entrepreneur Randall Jones, last year received FDA clearance for an AI-driven software system called ProstatID for detection and diagnosis of prostate cancer.

But there are others. Leitch said CHI uses robotic process automation, or bots, which use sets of rules to identify patients with upcoming visits and check if they're due for a test, such as a lung cancer screening.

If so, it places a pending order in the patient's electronic medical record. If the doctor and patient decide it's not the right time for the test, the provider can remove it. But it takes the burden off the doctor to remember every test a patient might need, particularly on busy days with lots of distractions.
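The bot Leitch describes can be thought of as a rule run against each upcoming visit that adds a removable pending order. A minimal sketch, assuming simplified patient records; the field names and the eligibility rule here are illustrative, not CHI's actual criteria:

```python
from dataclasses import dataclass, field

@dataclass
class Patient:
    name: str
    age: int
    smoker: bool
    pending_orders: list = field(default_factory=list)

def check_lung_screening(patient: Patient) -> None:
    """Place a pending screening order the provider can confirm or remove."""
    # Illustrative eligibility rule only; real criteria (e.g. guideline
    # age ranges and smoking history in pack-years) are more detailed.
    due = patient.smoker and 50 <= patient.age <= 80
    if due and "low-dose CT lung screening" not in patient.pending_orders:
        patient.pending_orders.append("low-dose CT lung screening")

p = Patient(name="example", age=62, smoker=True)
check_lung_screening(p)
print(p.pending_orders)
```

Because the bot only stages a pending order, the final decision stays with the doctor and patient, matching the article's point that the human remains in charge.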

Other systems can be used to help monitor hospitalized patients. Bryan for several years has used a fall-prevention system developed by Lincoln-based Ocuvera, Ojeda said. It uses 3-D cameras and an algorithm to predict patient movement and alert nurses before a fall can occur.

Epic Systems, she said, has developed five different predictive models that monitor hospitalized patients for other risks, including sepsis and hospital readmission, and alert clinicians so they can respond quickly.

Health systems that use Epics health records, including Bryan, CHI and Nebraska Medicine, can then adopt them and build them out for their patient populations, she said.

One of the latest, which CHI has adopted and Bryan is developing, is a model that helps predict when patients will be no-shows for clinic appointments.

If providers can head off missed appointments by, say, providing transportation, Leitch said, they can keep patients healthier.

"If we do what the evidence shows us, as we learn more and more, it's going to make it easier for us to deliver the care the right way every time," Leitch said.


Here is the original post:
Artificial intelligence makes its way into Nebraska hospitals and clinics - Omaha World-Herald