Media Search:



Artificial intelligence in veterinary medicine: What are the ethical and … – American Veterinary Medical Association

Artificial intelligence (AI) and machine learning, a type of AI that includes deep learning, are emerging technologies with the potential to change how veterinary medicine is practiced. They have been developed to improve predictive analytics and diagnostic performance, supporting decision-making when practitioners analyze medical images. But unlike in human medicine, no premarket screening of AI tools is required in veterinary medicine.

This raises important ethical and legal considerations, particularly when it comes to conditions with a poor prognosis where such interpretations may lead to a decision to euthanize, and makes it even more vital for the veterinary profession to develop best practices to protect care teams, patients, and clients.

That's according to Dr. Eli Cohen, a clinical professor of diagnostic imaging at the North Carolina State College of Veterinary Medicine. He presented the webinar, "Do No Harm: Ethical and Legal Implications of A.I.," which debuted in late August on AVMA Axon, AVMA's digital education platform.

During the presentation, he explored the potential of AI to increase efficiency and accuracy throughout radiology, but also acknowledged its biases and risks.

The use of AI in clinical diagnostic imaging practice will continue to grow, largely because much of the data (radiographs, ultrasound, CT, MRI, and nuclear medicine images) and their corresponding reports are in digital form, according to a Currents in One Health paper published in JAVMA in May 2022.

Dr. Ryan Appleby, assistant professor at the University of Guelph Ontario Veterinary College, who authored the paper, said artificial intelligence can be a great help in expediting tasks.

For example, AI can be used to automatically rotate or position digital radiographs, produce hanging protocols (instructions for how to arrange images for optimal viewing), or call up report templates based on the body parts included in the study.

More generally, AI can triage workflows by taking a first pass at various imaging studies and moving more critical patients to the top of the queue, said Dr. Appleby, who is chair of the American College of Veterinary Radiology's (ACVR) Artificial Intelligence Committee.

That said, for AI to be useful in interpreting radiographs, it must not only identify common presentations of a disease but also flag borderline cases, so that patients are diagnosed and treated accurately.

"As a specialist, I'm there for the subset of times when there is something unusual," said Dr. Cohen, who is co-owner of Dragonfly Imaging, a teleradiology company where he serves as a radiologist. "While AI will get better, it's not perfect. We need to be able to troubleshoot it when it doesn't perform appropriately."

Developers of medical devices for humans must gain Food and Drug Administration (FDA) approval for their devices and permission to sell their products in the U.S., and the FDA classifies artificial intelligence- and machine learning-enabled tools for human medicine as medical devices.

However, companies developing medical devices for animals are not required to undergo a premarket screening, unlike those developing devices for people. The ACVR has expressed concern about the lack of oversight for software used to read radiographs.

"It is logical that if the FDA provides guidelines and oversight of medical devices used on people, that similar measures should be in place for veterinary medical devices to help protect our pets," said Dr. Tod Drost, executive director of the American College of Veterinary Radiology. "The goal is not to stifle innovation, but rather have a neutral third party to provide checks and balances to the development of these new technologies."

Massive amounts of data are needed to train machine-learning algorithms and training images must be annotated manually. Because of the lack of regulation for AI developers and companies, it's not a requirement for companies to provide information about how their employees trained or validated their products. Many of these algorithms are often referred to as operating in a "black box."

"That raises pretty relevant ethical considerations if we're using these to make diagnoses and perform treatments," Dr. Cohen said.

Because AI doesn't have a conscience, he said, those who are developing and using AI need to have a conscience and can't afford to be indifferent. "AI might be smart, but that doesn't mean it's ethical," he said.

In the case of black-box medicine, "there exists no expert who can provide practitioners with useful causal or mechanistic explanations of the systems' internal decision procedures," according to a study published July 14, 2022, in Frontiers.

Dr. Cohen says, "As we adopt AI and bring it into veterinary medicine in a prudent and intentional way, the new best practice ideally would be leveraging human expertise and AI together as opposed to replacing humans with AI."

He suggested having a domain expert involved in all stages of AI, from product development, validation, and testing to clinical use, error assessment, and oversight of these products.

The consensus of multiple leading radiology societies, including the American College of Radiology and Society for Imaging Informatics in Medicine, is that ethical use of AI in radiology should promote well-being and minimize harm.

"It is important that veterinary professionals take an active role in making medicine safer as use of artificial intelligence becomes more common. Veterinarians will hopefully learn the strengths and weaknesses of this new diagnostic tool by reviewing current literature and attending continuing education presentations," Dr. Appleby said.

Dr. Cohen recommends veterinarians obtain owner consent before using AI in decision making, particularly if the case involves a consult or referral. And during the decision-making process, practitioners should be vigilant about AI providing a diagnosis that exacerbates human and cognitive biases.

"We need to be very sure that when we choose to make that decision, that it is as validated and indicated as possible," Dr. Cohen said.

According to a 2022 Veterinary Radiology & Ultrasound article written by Dr. Cohen, if not carefully overseen, AI has the potential to cause harm. For example, an AI product could produce a false-positive diagnosis, leading to unneeded tests or interventions, or produce false-negative results, possibly delaying diagnosis and care. It could also be applied to inappropriate datasets or populations, such as an algorithm trained on small animal cases being applied to an ultrasound of a horse.

He added that veterinary professionals need to consider if it is ethical to shift responsibility to general practitioners, emergency veterinarians, or non-imaging specialists who use a product whose accuracy is not published or otherwise known.

"How do we make sure there is appropriate oversight to protect our colleagues, our patients, and our clients, and make sure we're not asleep at the wheel as we usher in this new tech and adopt it responsibly?" Dr. Cohen asked.


Artificial Intelligence: The third-party candidate – The Miami Hurricane

Creativity, confusion and controversy have defined the introductory stages of artificial intelligence integration into our society. When it comes to political campaigns and the upcoming 2024 election, this combination is changing the way politicians sway public opinion.

In June 2023, presidential candidate and Florida governor Ron DeSantis' campaign used AI to generate images of his opponent, former president Donald Trump, with Anthony Fauci, a premier target of the Republican party base for his response to the COVID-19 pandemic.

The video, posted on X, displayed a collection of images of Trump and Fauci together. Some were real photographs, but three were AI-generated photos of the two embracing.

Lawmakers fear the use of deceiving AI images could potentially cause some voters to steer away from candidates in 2024.

There are two ways politicians are using it, said Dr. Yelena Yesha, UM professor and Knight Foundation Endowed Chair of Data Science and AI. One is bias, trying to skew information and change the sentiments of populations; the other is the opposite effect, using blockchain technology to control misinformation.

Conversations about regulating the dangers of AI have already begun circulating on Capitol Hill, starting with the U.S. Senate hearing on May 16, 2023. The hearing included Sam Altman, CEO of OpenAI, who expressed concern about potential manipulation of his company's technology to target voters.

The most notable OpenAI technology is ChatGPT, which saw the fastest user adoption in internet history, surpassing applications like TikTok and Instagram in its first two months.

The platform initially banned political campaigns from using the chatbot, but its enforcement of the ban has since been limited.

An analysis by The Washington Post found that ChatGPT can bypass its campaign restriction ban when prompted to create a persuasive message that targets a specific voter demographic.

"AI will certainly be used to generate campaign content," said UM professor of political science Casey Klofstad. "Some will use it to create deepfakes to support false narratives. Whether this misinformation will influence voters is an open question."

Deepfakes, a form of AI-altered photo and video, have reached the political mainstream. Following President Biden's re-election announcement last April, the Republican National Committee (RNC) released a fully AI-generated ad depicting a fictional, dystopian society should Biden be re-elected in 2024.

Congress has furthered its efforts in establishing boundaries for AI, with Senate Majority Leader Chuck Schumer (D-NY) recently leading a closed-door meeting on Sept. 13 with high-profile tech leaders, including Elon Musk and Mark Zuckerberg.

The goal of this meeting was to gather information on how prominent big tech platforms could enforce oversight within the use of AI. Senate sessions on the matter will continue throughout the fall, with Schumer hopeful for bipartisan support and legislation across Congress.

"I would be reluctant to see the government take a heavy hand in regulating AI, but policy could be tailored more narrowly to incentivize AI developers to inform consumers about the source and validity of AI-generated content," Klofstad said.

The extent to which the federal government can have major influence over regulating AI is unclear as artificial intelligence continues to develop.

"It should be regulated, but not to the point where progress can be slowed down by regulatory processes," Yesha said. "If you have too much regulation, it may at a certain point decelerate science and the adoption of innovation."

A significant reason for AI regulation efforts stems from the anticipation of foreign influence in our elections. Russian-led misinformation campaigns played a part in the 2016 election, and elected officials foresee foreign meddling advancing in tandem with AI's improvement.

"At a certain point, as AI becomes more developed, if it falls into the wrong hands of totalitarian regimes or autocratic governments, it can have a negative effect on our homeland," Yesha said.

However, AI's applications do provide numerous benefits for political campaigns.

A prominent benefit of AI in the political arena is its messaging capabilities. With a chatbot's ability to instantly generate personalized messages when fed consumer data, essentially taking over the work of lower-level campaign staff, the ability to garner donor support is vastly expanded.

"Campaigns have always adapted to new modes of communication, from the printing press to electronic mailing lists, to websites, text messaging and social media," Klofstad said. "I expect AI will not be different in this regard."


Artificial intelligences future value in environmental remediation – The Miami Hurricane

Artificial intelligence is enabling us to rethink how we integrate information, analyze data and use the resulting insights to improve decision-making. The power of AI is revolutionizing various industries, and environmental science is no exception.

With increasing threats of environmental stressors, AI is emerging as a powerful tool in detecting, mapping and mitigating these effects for the future.

As AI increasingly drives innovation and becomes a facet of everyday life, fears about its capabilities are growing.

It doesn't help that the media and pundits are stoking those fears, suggesting that AI could take over the world, lead to losses of control and privacy and devalue the importance of humans in the workforce.

According to Business News Daily, 69% of people worry that AI could take over their jobs entirely, while 74% predict that AI will eliminate all forms of human labor. However, its potential to remedy environmental problems can be a beneficial use of the technology.

From monitoring air and water quality to predicting the spread of pollutants, AI is already playing a crucial role in safeguarding our environment and public health.

As 2030, the agreed deadline for hitting climate targets, quickly approaches, the world is on track to achieve only 12 percent of the Sustainable Development Goals (SDGs), with progress plateauing or regressing on over half of the set goals.

"How can we use artificial intelligence, the technology that is revolutionizing the production of knowledge, to actually improve lives; to make the world a little bit safer, a little bit healthier, a little bit more prosperous; to help eliminate poverty and hunger; to promote health and access to quality education; to advance gender equity; to save our planet?" said U.S. Secretary of State Antony Blinken at the 78th Session of the United Nations General Assembly.

The most prominent applications of AI are currently in detecting, mapping and mitigating environmental toxins and pressures, which can help engineers and scientists gather more accurate data, but its uses are constantly growing and developing.

AI can help automate the process of taking and analyzing samples and recognizing the presence of specific toxins in water, soil or air, reporting their status in real time. In delicate ecosystems, such as coral reefs and wetlands, including those around Florida, studying the parameters of the environment can alert researchers to harmful conditions and propel action.
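The real-time alerting described above can be caricatured in a few lines. This is a toy sketch, not any specific monitoring product: it simply flags a sensor reading that deviates sharply from its recent baseline, where the readings, window size, and threshold are all assumptions for illustration.

```python
import numpy as np

def flag_anomalies(readings, window=10, z_thresh=3.0):
    """Flag readings that deviate sharply from the trailing baseline,
    a toy stand-in for AI-driven real-time toxin alerts."""
    readings = np.asarray(readings, dtype=float)
    flags = []
    for i in range(len(readings)):
        base = readings[max(0, i - window):i]
        if len(base) < 3:          # not enough history yet
            flags.append(False)
            continue
        mu, sd = base.mean(), base.std()
        # z-score against the trailing window; guard against zero std
        flags.append(bool(abs(readings[i] - mu) > z_thresh * max(sd, 1e-9)))
    return flags
```

A real deployment would learn toxin-specific signatures rather than use a plain z-score, but the measure-and-report loop is the same.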

AI models can also create analytical maps based on historical or statistical data to understand trends and trajectories regarding toxin levels, weather patterns, human activities and other relevant factors. Those models can also evaluate satellite imagery to identify areas where specific conditions may be present, and can be trained to recognize patterns or changes. That capability can be extremely important in forecasting future dangerous weather events, enhancing agricultural productivity to combat hunger, responding to disease outbreaks, and addressing other imminent climate change threats to Earth.

These technologies can also be used to identify the sources and pathways of toxins and to optimize mitigation strategies, which is crucial for effective intervention, while monitoring the success of mitigation efforts.

If these practices for AI are deployed effectively and responsibly, they can drive inclusive and sustainable growth for all, which can reduce poverty and inequality, advance environmental sustainability and improve lives around the world.

However, real concerns exist that the developing world is being left behind as AI advances rapidly. If not distributed equitably, the technology has the potential to exacerbate inequality.

Countries must work together to promote access to AI around the world, with a particular focus on developing countries. Industrialized nations should share knowledge that can advance progress toward achieving SDGs, as AI has the potential to advance progress on nearly 80 percent of them.

To succeed in directing AI toward achieving the SDGs, complete support and participation from the multistakeholder community of system developers, governments and organizations, and communities is required.

Meanwhile, AI governance is imperative, and support from federal and state governments as well as corporations is crucial to this transition. As AI's footprint grows and nations work to manage risks, we must maximize its use for the greater good and deepen cooperation across governments to foster beneficial uses of AI.

The United States is committed to supporting and accelerating efforts on AI development, hoping to foster an environment where AI innovation can continue to flourish. Secretary Blinken mentioned the U.S.'s creation of a blueprint for an AI Bill of Rights and a Risk Management Framework at the UNGA, which would guide the future use, design and safeguards of these systems.

The U.S. has announced a $15 million commitment designated to helping more governments leverage the power of AI to drive global good, focused specifically on the SDGs. Commitments and contributions have been made by other countries and large corporations, such as Google, IBM and Microsoft.

We are at an inflection point, and the decisions we make today will affect the world for decades to come, especially when it comes to AI and climate change. AI has the potential to accelerate progress, but harnessing it is an immense responsibility for governments, the private sector, civil society and individuals, who must consider the social, economic and environmental aspects of sustainability.

Lia Mussie is a senior majoring in ecosystem science and policy and political science with minors in sustainable business and public health.


Researchers develop a way to hear photos using artificial intelligence – KXLH News Helena

Researchers at Northeastern University have developed a way to extract audio from both still photos and muted videos using artificial intelligence.

The research project is called Side Eye.

"Most of the cameras today have what's called image stabilization hardware," said Kevin Fu, a professor of electrical and computer engineering at Northeastern University. "It turns out that when you speak near a camera lens that has some of these functions, the lens will move ever so slightly, what's called modulating your voice onto the image, and it changes the pixels."

Basically, these small movements can be converted into rudimentary audio, which the Side Eye artificial intelligence can then interpret into individual words with high accuracy, according to the research team.
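The published Side Eye system relies on rolling-shutter and stabilization effects that this article only summarizes. As a rough illustration of the core idea alone, recovering a one-dimensional signal from tiny frame-to-frame pixel shifts, here is a minimal phase-correlation sketch; the frame arrays and the pure-horizontal-shift model are assumptions for illustration, not the researchers' method.

```python
import numpy as np

def shift_signal(frames):
    """Recover a 1-D 'audio-like' signal from tiny horizontal
    frame-to-frame shifts via phase correlation (toy illustration)."""
    ref = np.asarray(frames[0], dtype=float).mean(axis=0)  # collapse rows
    n = ref.size
    signal = []
    for frame in frames[1:]:
        cur = np.asarray(frame, dtype=float).mean(axis=0)
        # Phase correlation: the peak of the normalized cross-power
        # spectrum marks the circular shift between the two profiles.
        cross = np.fft.rfft(ref) * np.conj(np.fft.rfft(cur))
        corr = np.fft.irfft(cross / (np.abs(cross) + 1e-12), n)
        s = (-int(np.argmax(corr))) % n
        signal.append(s - n if s > n // 2 else s)  # wrap to signed shift
    return np.array(signal, dtype=float)
```

The sequence of recovered shifts plays the role of the "very rudimentary microphone" Fu describes; real systems then denoise and classify that signal.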

"You're able to get thousands of samples per second. What does this mean? It means you basically get a very rudimentary microphone," Fu said.


Even though the recovered audio sounds muffled, some pieces of information can be extracted.

"Things like understanding what is the gender of the speaker, not on camera but in the room while the photograph or video is being taken, that's nearly 100% accurate," he said.

So what can technology like this be used for?

"For instance, in legal cases or in investigations, either proving or disproving somebody's presence, it gives you evidence that can be backed up by science of whether somebody was likely in the room speaking or not," Fu said.

"This is one more tool we can use to bring authenticity to evidence, potentially to investigations, but also trying to solve criminal applications," he said.



AI is already helping astronomers make incredible discoveries … – Space.com

World Space Week 2023 is here and Space.com is looking at the current state of artificial intelligence (AI) and its impact on astronomy and space exploration as the space age celebrates its 66th anniversary. Here, Paul Sutter discusses how AI is already helping astronomers make new, incredible discoveries.

Whether we like it or not, artificial intelligence will change the way we interact with the universe.

As a science, astronomy has a long tradition of sifting through massive amounts of data in search of patterns, of accidental discoveries, and of a deep connection between theory and observation. These are all areas where artificial intelligence systems can make the field of astronomy faster and more powerful than ever before.

That said, it's important to note that "artificial intelligence" is a very broad term encompassing a wide variety of semi-related software tools and techniques. Astronomers most commonly turn to neural networks, where the software learns about all the connections in a training data set, then applies the knowledge of those connections in a real data set.

Related: How artificial intelligence is helping us explore the solar system

Take, for instance, data processing. The pretty pictures splashed online from the Hubble Space Telescope or James Webb Space Telescope are far from the first pass that those instruments took of that particular patch of sky.

Raw astronomical images are full of errors, messy foregrounds, contaminants, artifacts, and noise. Processing and cleaning these images to make something presentable, not to mention useful for scientific research, requires an enormous amount of input, usually done partially manually and partially by automated systems.

Increasingly, astronomers are turning to artificial intelligence to process the data, pruning out the useless bits of the images to produce a clean result. For example, an image of the supermassive black hole at the heart of the galaxy Messier 87 (M87), first released in 2019, was given a machine learning "makeover" in April 2023, resulting in a much clearer image of the black hole's structure.

In another example, some astronomers will feed images of galaxies into a neural network algorithm, instructing the algorithm with the classification scheme for the discovered galaxies. The existing classifications came from manual assignments, either by the researchers themselves or by volunteer citizen science efforts. Training set in hand, the neural network can then be applied to real data and automatically classify the galaxies, a process that is far faster and much less error-prone than manual classification.
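The train-then-apply workflow described above can be sketched in a few lines. This toy uses invented two-dimensional "galaxy features" and a minimal logistic-regression classifier in place of a real neural network and survey images; every name and number here is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-ins for measured galaxy features (e.g. concentration,
# asymmetry), labeled 0/1 as if by manual or citizen-science review.
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)),
               rng.normal(3.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Minimal one-layer classifier (logistic regression) trained by
# gradient descent on the labeled training set.
Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
w = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))       # predicted probabilities
    w -= 0.1 * Xb.T @ (p - y) / len(y)      # gradient step

def classify(features):
    """Apply the trained model to a new, unlabeled object."""
    z = np.append(features, 1.0) @ w
    return int(1.0 / (1.0 + np.exp(-z)) > 0.5)
```

A production pipeline swaps the hand-rolled model for a deep network and the toy features for image pixels, but the structure, labeled training set first, automatic classification of new data second, is the one the article describes.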

Astronomers can also use AI to remove the optical interference created by Earth's atmosphere from images of space taken by ground-based telescopes.

AI has even been proposed to help us spot signatures of life on Mars, understand why the sun's corona is so hot, or reveal the ages of stars.

Astronomers are also using neural networks to dig deeper into the universe than ever before. Cosmologists are beginning to employ artificial intelligence to understand the fundamental nature of the cosmos. Two of the biggest cosmic mysteries are the identities of dark matter and dark energy, two substances beyond our current understanding of physics that, combined, make up over 95% of the energy content of the universe.

To help identify those strange substances, cosmologists are currently trying to measure their properties: How much dark matter and dark energy there is, and how they've changed over the history of the universe. Tiny changes in the properties of dark matter and dark energy have profound effects on the resulting history of the cosmos, touching everything from the arrangement of galaxies to the star formation rates in galaxies like our Milky Way.

Neural networks are aiding cosmologists in disentangling all the myriad effects of dark matter and dark energy. In this case, the training data comes from sophisticated computer simulations. In those simulations cosmologists vary the properties of dark matter and dark energy and see what changes. They then feed those results into the neural network so it can discover all the interesting ways that the universe changes. While not quite yet ready for primetime, the hope is that cosmologists could then point the neural network at real observations and allow it to tell us what the universe is made of.
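That simulate-then-infer loop can be caricatured in a few lines. The "simulation," its single parameter, and the nearest-neighbor lookup below are all invented stand-ins for full cosmological codes and trained neural networks; they illustrate only the shape of the workflow.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(param):
    """Toy 'universe simulation': a summary statistic that depends
    on one hypothetical parameter, plus a little noise."""
    return 2.0 * param + rng.normal(0.0, 0.01)

# Step 1: run simulations across a grid of parameter values,
# varying the parameter and recording how the statistic changes.
params = np.linspace(0.1, 0.5, 41)
stats = np.array([simulate(p) for p in params])

# Step 2: invert the mapping, picking the parameter whose simulated
# statistic best matches the 'observed' one (a nearest-neighbor
# stand-in for a trained neural network).
def infer(observed):
    return params[np.argmin(np.abs(stats - observed))]
```

The real promise, as the article notes, is pointing the trained model at genuine observations and letting it report what parameter values the universe favors.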

Approaches like these are becoming increasingly critical as modern astronomical observatories churn out massive amounts of data. The Vera C. Rubin Observatory, a state-of-the-art facility under construction in Chile, will be tasked with providing over 60 petabytes (with one petabyte equaling one thousand terabytes) of raw data in the form of high-resolution images of the sky. Parsing that much data is beyond the capabilities of even the most determined graduate students. Only computers, aided by artificial intelligence, will be up to the task.

Of particular interest to that upcoming observatory will be the search for the unexpected. For example, the astronomer William Herschel discovered the planet Uranus by accident during a regular survey of the night sky. Artificial intelligence can be used to flag and report potentially interesting objects by identifying anything that doesn't fit an established pattern. And in fact, astronomers have already used AI to spot a potentially dangerous asteroid using an algorithm written specifically for the Vera C. Rubin Observatory.

Who knows what future discoveries we will ultimately have to credit to a machine?
