Archive for the ‘Artificial Intelligence’ Category

Art direction vs artificial intelligence: A helpful tool or an added hassle? – It’s Nice That

What Base was developing at this stage was a series of visual imaginings relating to the opera being created. Developed in tandem with the opera house, the Base team would input parts into Dall-E as they received them; the tool was chosen because "it provided the best results in terms of its weirdness and imperfect generation," adds Arthur Dubois, Base's motion designer. Arthur then used Google Colab to inject life into still images through animation, also working with Tokkin Heads, "an AI that generates characters and the facial features to apply a templated animation, or you can just record with your webcam to create an animation of the face," he says. Runway was then also used for its frame interpolation, which generates in-between frames for a type-to-image transition. The final AI tool the team employed was Upscayl, used to upscale every image generated to principal resolution.

Although these steps appear simple when noted down, the actual process of working with AI in a design context wasn't without its challenges. Firstly, the imagery generated by Dall-E tested the creative limits of an identity system such as this. "It really allowed for an aesthetic that felt less corporate or commercial," says Bruce Vansteenwinkel, a designer at Base. But at times these images weren't appealing from an aesthetic standpoint. "Thankfully, the animation techniques used really helped us to increase the weirdness of each visual," adds Aurore. "Even more so when we weren't really fond of an image, adding motion could help bring the result we wanted."

The process of then delivering the work also proved harder than originally anticipated. "When it came to collaborating with La Monnaie's team we realised that it was a bit more hassle," explains Bruce. "We sold it as a toolbox, a way of collaborating where we could create images and they could create images, but that creates a lot of opportunities for new misunderstandings and new frustrations. Things got out of hand."

However, losing control led Base to realise the necessity of human-led design when collaborating with artificially generated artworks. "It's funny. We sold them a concept of losing control, but then we lost so much control we had to gain it back," Bruce says. "Right before launching there was an internal crisis because some of the images we'd developed were less striking than the originals we had presented. That was the concept, but because we're designers (neat freaks, control freaks) we wanted to regain control of the image. The delight we had in the beginning turned into despair, and maybe even disappointment."

A final visual that doesn't represent an art director or designer's initial vision is a possibility on any project. In the end, Base's team placed themselves back in their usual role in a brief such as this, inputting details like the aforementioned animation techniques to tie the campaign together. "We were a bit naive," says Aurore. "A tool where both we can create images and the client can create images sounds wonderful, but looking at what they'd done we realised we were a very important part of the process. We have a culture of image. They have a culture of content. We understood that we were still needed, which is the whole question around AI."

Like Seba's communication approach, since completing the La Monnaie project Base has been extremely open about, and proud of, this use case of AI as art direction and design. As pointed out by Manon: "It's our duty to use these tools and see where they can bring us." There has also been little backlash towards the agency, even though its team have used AI so visibly.

Arguably this is because AI was only implemented for its relevance to the overall theme of the opera season at hand, "rather than an excuse not to work, or to have AI as a buzzword," says Bruce. "On this project I received a lot of questions about whether AI is going to replace our jobs. In our personal experience, I think in many ways it can. But that's more of a choice than an actual fate you have to accept. There are a few agencies working towards owning AI as their unique selling point, but I'm not sure whether that's the strongest way to move forward with design at large. Choosing when and how to use it, maybe a little sparingly as well, evades the question of whether it will take our jobs." Interestingly, since completing the project Base hasn't used AI tools to this extent again because, with this experience in mind, the concept hasn't called for it.

The rest is here:
Art direction vs artificial intelligence: A helpful tool or an added hassle? - It's Nice That

Athens Democracy Forum: Are Artificial Intelligence and Democracy Compatible? – The New York Times

This article is from a special report on the Athens Democracy Forum, which gathered experts last week in the Greek capital to discuss global issues.

Moderator: Liz Alderman, chief European business correspondent, The New York Times

Speaker: Nick Clegg, president, global affairs, Meta

Excerpts from the "Rethinking A.I. and Democracy" discussion have been edited and condensed.

LIZ ALDERMAN A.I. obviously holds enormous promise and can do all kinds of new things. A.I. can even help us possibly solve some of our hardest problems. But it also comes with risks, including manipulation, disinformation and the existential threat of it being used by bad actors. So Nick, why should the public trust that A.I. will be a boon to democracy, rather than a potential threat against it?

NICK CLEGG I think the public should continue to reserve judgment until we see how things play out. And I think, like any major technological innovation, technology can be used for good and for bad purposes, can be used by good and bad people. That's been the case from the invention of the car to the internet, from the radio to the bicycle. And I think it's natural to fear the worst, to try and anticipate the worst, and to be fearful particularly of technologies which are difficult to comprehend. So I think it's not surprising that in recent months, certainly since ChatGPT produced its large language model, a lot of the focus has centered on possible risks. I think some of those risks, or at least the way some of them are being described, are running really quite far ahead of the technology, to be candid. You know, this idea of A.I.s developing a kind of autonomy and an agency of their own, a sort of demonic wish to destroy humanity and turn us all into paper clips and so on, which was quite a lot of the sort of early discussion.

ALDERMAN We haven't reached Terminator 2 status.

CLEGG Yeah, exactly. Because these are systems, remember, which don't know anything. They don't have any real meaningful agency or autonomy. They are extremely powerful and sophisticated ways of slicing and dicing vast amounts of data and applying billions of parameters to it to recognize patterns across a dizzying array of data sets and data points.

Continue reading here:
Athens Democracy Forum: Are Artificial Intelligence and Democracy Compatible? - The New York Times

Google packs more artificial intelligence into new Pixel phones, raises prices for devices by $100 – Tech Xplore

  1. Google packs more artificial intelligence into new Pixel phones, raises prices for devices by $100  Tech Xplore
  2. Google Pixel 8 Pro hands-on preview: artificial intelligence is genuinely cool  PhoneArena
  3. Google launches Pixel 8, smartwatch with new AI feature  Reuters

Follow this link:
Google packs more artificial intelligence into new Pixel phones, raises prices for devices by $100 - Tech Xplore

Autonomous artificial intelligence increases real-world specialist … – Nature.com

Theoretical foundation of unbiased estimation of healthcare productivity

To test our central hypothesis, that autonomous AI improves healthcare system productivity, in an unbiased manner, we developed a healthcare productivity model based on rational queueing theory30, as widely used in the healthcare operations management literature31. A healthcare provider system, which can be a hospital, an individual physician providing a service, an autonomous AI providing a service at a performance level at least as high as a human expert's, a combination thereof, or a national healthcare system, is modeled as an overloaded queue facing a potential demand that is greater than its capacity; that is, Λ > μ, where Λ denotes the total demand on the system (patients seeking care) and μ denotes the maximum number of patients the system can serve per unit of time. We define system productivity as

$$\lambda = \frac{n_q}{t},$$

(1)

where n_q is the number of patients who completed a care encounter with a quality of care that was non-inferior to q, and t is the length of time over which n_q was measured, allowing for systems that include autonomous AI in some fashion. While standard definitions of healthcare labor productivity, such as in Camasso et al.7, ignore quality of care, q here requires quality of care non-inferior to the case when care is provided by a human expert, such as a retina specialist, addressing potential concerns about the safety of healthcare AI8: our definition of λ, as represented by Eq. (1), guarantees that quality of care is either maintained or improved.

ρ denotes the proportion of patients who receive and complete the care encounter in a steady state, where the average number of patients who successfully complete the care encounter equals the average number of patients who gain access to care per unit of time; in other words, λ = ρμ. See Fig. 3. Remember that in the overloaded queue model, there are many patients (a proportion 1 − ρ) who do not gain access. ρ is agnostic about the specific manner in which access is determined: access may be set by a hospital administrator who establishes a maximum number of patients admitted to the system, or by barriers to care, such as an inability to pay, travel long distances, or take time off work, or other sources of health inequity, limiting a patient's access to the system. As mentioned, λ is agnostic as to whether the care encounter is performed and completed by an autonomous AI, human providers, or a combination thereof, as from the patient perspective we measure the number of patients who complete the appropriate level of care per unit of time at a performance level at least as high as a human physician's. Not every patient is eligible to start their encounter with autonomous AI, for example because they do not fit the AI's inclusion criteria; we denote by α, 0 < α < 1, the proportion of eligible patients. Nor can every patient complete their care encounter with autonomous AI alone: when the autonomous AI diagnoses disease requiring a human specialist, the patient is referred on, and we denote by β, 0 < β < 1, the proportion of patients who started their care encounter with AI and still required a human provider to complete it. The proportion (1 − β) are diagnosed as disease absent and start and complete their encounter with autonomous AI only, without needing to see a human provider. For all permutations, productivity is measured as the number of patients who complete a provided care encounter per unit of time, with λ_C the productivity of the control group, where the screening result of the AI system is not used to determine the rest of the care process, and λ_AI the productivity of the intervention group, where the screening result of the AI system is used to determine the rest of the care process, and where the AI performance is at least as high as the human provider's.
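To make this bookkeeping concrete, the following minimal Python sketch implements Eq. (1) and the α/β split of accessed patients. The function names and the example numbers are illustrative assumptions, not values or code from the study.

def productivity(n_q, t_hours):
    """Eq. (1): completed care encounters of non-inferior quality per unit of time."""
    return n_q / t_hours

def split_accessed_patients(n_accessed, alpha, beta):
    """Split patients who gain access (rho * mu in steady state) into AI-only
    completions and encounters that still require the human specialist.

    alpha: proportion eligible to start their encounter with the AI (0 < alpha < 1)
    beta:  proportion of AI-started encounters referred on to the specialist (0 < beta < 1)
    """
    ai_started = alpha * n_accessed
    ai_only = (1 - beta) * ai_started                        # "disease absent": completed by AI alone
    needs_specialist = beta * ai_started + (1 - alpha) * n_accessed
    return ai_only, needs_specialist

# Illustrative numbers: 40 patients gain access over an 8-hour clinic day.
ai_only, needs_specialist = split_accessed_patients(n_accessed=40, alpha=0.6, beta=0.3)
print(productivity(n_q=40, t_hours=8), ai_only, needs_specialist)

In the control arm every accessed patient still occupies the specialist, whereas in the intervention arm the AI-only fraction completes care without a specialist visit; this is what drives the productivity comparison defined below.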

a Mathematical model of an overloaded-queue healthcare system used to estimate productivity as λ = ρμ without observer bias. b Model of an overloaded-queue healthcare system where autonomous AI is added to the workflow.

Because an autonomous AI that completes the care process for patients without disease (typically less complex patients), as in the present study, results in relatively more complex patients being seen by the human specialist, we calculate complexity-adjusted specialist productivity as

$$\lambda_{ca} = \frac{\bar{c}\,n_q}{t},$$

(2)

with c̄ the average complexity, as determined with an appropriate method, over all n_q patients who complete the care encounter with that specialist. This definition of λ_ca, as represented by Eq. (2), corrects for a potentially underestimated productivity: because the AI changes the patient mix, the human specialist sees more clinically complex patients, who require more time.
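A correspondingly small sketch of Eq. (2), assuming per-encounter complexity scores are already available (the scoring scheme itself is defined later under the secondary outcomes); the numbers are placeholders.

def complexity_adjusted_productivity(complexities, t_hours):
    """Eq. (2): lambda_ca = (mean complexity * number of completed encounters) / time."""
    n_q = len(complexities)
    c_bar = sum(complexities) / n_q
    return c_bar * n_q / t_hours        # equivalently, sum(complexities) / t_hours

print(complexity_adjusted_productivity([0, 1, 2, 3, 1], t_hours=8.0))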

We focus on the implication Λ > μ; in other words, that system capacity is limited relative to potential demand, as that is the only way in which λ_C and λ_AI can be measured without recruitment bias, i.e., in a context where patients arrive throughout the day without appointment or other filter, as is the case in Emergency Departments in the US and in almost all clinics in low- and middle-income countries (LMICs). This is not the case in contexts where most patient visits are scheduled and thus cannot be changed dynamically, and measuring λ in such a context would lead to bias. Thus, to avoid recruitment bias, we selected a clinic with a very large demand (Λ), Deep Eye Care Foundation (DECF) in Bangladesh, as the trial setting.

The B-PRODUCTIVE (Bangladesh-PRODUCTIVity in Eyecare) study was a preregistered, prospective, double-masked, cluster-randomized clinical trial performed in retina specialist clinics at DECF, a not-for-profit, non-governmental hospital in Rangpur, Bangladesh, between March 20 and July 31, 2022. The clusters were specialist clinic days, and all clinic days were eligible during the study period. Patients are not scheduled; there are no pre-scheduled patient visit times or time slots. Instead, access to a specialist clinic visit is determined by clinic staff on the basis of observed congestion, as explained in the previous section.

The study protocol was approved by the ethics committees at the Asian Institute of Disability and Development (Dhaka, Bangladesh; # Southasia-hrec-2021-4-02), the Bangladesh Medical Research Council (Dhaka, Bangladesh; # 475 27 02 2022) and Queen's University Belfast (Belfast, UK; # MHLS 21_46). The tenets of the Declaration of Helsinki were adhered to throughout, and the trial was preregistered with ClinicalTrials.gov, #NCT05182580, before the first participant was enrolled. The present study included local researchers throughout the research process, including design, local ethics review, implementation, data ownership and authorship, to ensure it was collaborative and locally relevant.

The autonomous AI system (LumineticsCore (formerly IDx-DR), Digital Diagnostics, Coralville, Iowa, USA) was designed, developed, previously validated and implemented under an ethical framework to ensure compliance with the principles of patient benefit, justice and autonomy, and to avoid Ethics Dumping13. It diagnoses specific levels of diabetic retinopathy and diabetic macular edema (Early Treatment Diabetic Retinopathy Study (ETDRS) level 35 and higher), clinically significant macular edema, and/or center-involved macular edema32, referred to as referable Diabetic Eye Disease (DED)33, which require management or treatment by an ophthalmologist or retina specialist for care to be appropriate. If the ETDRS level is 20 or lower and no macular edema is present, appropriate care is to retest in 12 months34. The AI system is autonomous in that the medical diagnosis is made solely by the system, without human oversight. Its safety, efficacy, and lack of racial, ethnic and sex bias were validated in a pivotal trial in a representative sample of adults with diabetes at risk for DED, using a workflow and minimally trained operators comparable to the current study13. This led to US FDA De Novo authorization (FDA approval) in 2018 and national reimbursement in 2021 (refs. 13,15).

The autonomous AI system was installed by DECF hospital information technology staff on March 2, 2022, with remote assistance from the manufacturer. Autonomous AI operators completed a self-paced online training module on basic fundus image capture and camera operation (Topcon NW400, Tokyo, Japan), followed by remote hands-on training by the manufacturer's representatives. Deployment was performed locally, without the physical presence of the manufacturer, and all training and support were provided remotely.

Typically, pharmacologic pupillary dilation is provided only as needed during use of the autonomous AI system. For the current study, all patient participants received pharmacologic dilation with a single drop each of tropicamide 0.8% and phenylephrine 5%, repeated after 15 min if a pupil size of at least 4 mm was not achieved. This was done to facilitate indirect ophthalmoscopy by the specialist participants as required. The autonomous AI system guided the operator to acquire two color fundus images determined to be of adequate quality by an image quality assessment algorithm, one centered on the fovea and one on the optic nerve, and directed the operator to retake any images of insufficient quality. This process took approximately 10 min, after which the autonomous AI system reported one of the following within 60 s: "DED present, refer to specialist"; "DED not present, test again in 12 months"; or "insufficient image quality". The latter response occurred when the operator was unable to obtain images of adequate quality after three attempts.
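The acquisition-and-output flow reported above can be summarised in a short sketch; the callables and their names below are hypothetical stand-ins, not the vendor's API.

def screen_patient(acquire_image, assess_quality, diagnose_referable_ded, max_attempts=3):
    """Acquire a fovea- and an optic-nerve-centered image, retaking poor images
    up to max_attempts, then return one of the three reported outputs."""
    images = {}
    for field in ("fovea", "optic_nerve"):
        for _ in range(max_attempts):
            image = acquire_image(field)
            if assess_quality(image):
                images[field] = image
                break
        else:
            # Quality still insufficient after three attempts for this field.
            return "insufficient image quality"
    if diagnose_referable_ded(images):
        return "DED present, refer to specialist"
    return "DED not present, test again in 12 months"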

This study included both physician participants and patient participants. Physician participants were retina specialists who gave written informed consent prior to enrollment. For specialist participants, the inclusion criteria were:

Completed vitreoretinal fellowship training;

Examined at least 20 patients per week with diabetes and no known DED over the prior three months;

Performed laser retinal treatments or intravitreal injections on at least three DED patients per month over the same time period.

Exclusion criteria were:

AI-eligible patients are clinic patients meeting the following criteria:

Presenting to DECF for eye care;

Age 22 years or older. While preregistration stated participants could be aged 18 years or older, the US FDA De Novo clearance for the autonomous AI limits eligibility to patients aged 22 years and older;

Diagnosis of type 1 or type 2 diabetes prior to or on the day of recruitment;

Best corrected visual acuity of 6/18 or better in the better-seeing eye;

No prior diagnosis of DED;

No history of any laser or incisional surgery of the retina or injections into either eye;

No medical contraindication to fundus imaging with dilation of the pupil12.

Exclusion criteria were:

Inability to provide informed consent or understand the study;

Persistent vision loss, blurred vision or floaters;

Previously diagnosed with diabetic retinopathy or diabetic macular edema;

History of laser treatment of the retina or injections into either eye or any history of retinal surgery;

Contraindicated for imaging by fundus imaging systems.

Patient participants were AI-eligible patients who gave written informed consent prior to enrollment. All eligibility criteria remained unchanged over the duration of the trial.

B-PRODUCTIVE was a concealed cluster-randomized trial in which a block randomization scheme by clinic date was generated by the study statistician (JP) on a monthly basis, taking into account holidays and scheduled clinic closures. The random allocation of each cluster (clinic day) was concealed until clinic staff received an email with this information just before the start of that day's clinic, and they had no contact with the specialists during trial operations. Medical staff who determined access, specialists, and patient participants remained masked to the random assignment of clinic days as control or intervention.

After giving informed consent, patient participants provided demographic, income, educational and clinical data to study staff using an orally administered survey in Bangla, the local language. Patients who were eligible but did not consent underwent the same clinical process without completing an autonomous AI diagnosis or survey. All patient participants, both intervention and control, completed the autonomous AI diagnostic process as described in the autonomous AI implementation and workflow section above; the difference was that in the intervention group, the diagnostic AI output determined what happened to the patient next. In the control group, patient participants always went on to complete a specialist clinic visit after autonomous AI, irrespective of its output. In the intervention group, patient participants with an autonomous AI diagnostic report of "DED absent, return in 12 months" completed their care encounters without seeing a specialist and were recommended to make an appointment for a general eye exam in three months as a precautionary measure for the trial, minimizing the potential for disease progression (standard recall would be 12 months).

In the intervention group, patient participants with a diagnostic report of "DED present" or "image quality insufficient" completed their care encounters by seeing the specialist for further management. For non-consented patients, control-group participants, and intervention-group participants referred on in this way, seeing the specialist involved tonometry, anterior and posterior segment biomicroscopy, indirect ophthalmoscopy, and any further examinations and ancillary testing deemed appropriate by the specialist. After the patient participant completed the autonomous AI process, a survey with a 4-point Likert scale (very satisfied, satisfied, dissatisfied, very dissatisfied) was administered concerning the participant's satisfaction with interactions with the healthcare team, the time to receive examination results, and receiving their diagnosis from the autonomous AI system.

The primary outcome was clinic productivity for diabetes patients (λ_d), measured as the number of completed care encounters per hour per specialist for control / non-AI (λ_d,C) and intervention / AI (λ_d,AI) days. λ_d,C used the number of completed specialist encounters; λ_d,AI used the number of eligible patients in the intervention group who completed an autonomous AI care encounter with a diagnostic output of "DED absent", plus the number of encounters that involved the specialist exam. For the purposes of calculating the primary outcome, all diabetes patients who presented to the specialty clinic on study days were counted, including those who were not patient participants or did not receive the autonomous AI examination.
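A minimal sketch of how the two tallies are formed from daily counts; all counts and hours below are placeholders rather than trial data.

def lambda_d_control(completed_specialist_encounters, specialist_hours):
    """Control days: completed specialist encounters per hour per specialist."""
    return completed_specialist_encounters / specialist_hours

def lambda_d_ai(ded_absent_ai_completions, specialist_encounters, specialist_hours):
    """Intervention days: AI-only completions ("DED absent") plus encounters
    that went on to include the specialist exam, per hour per specialist."""
    return (ded_absent_ai_completions + specialist_encounters) / specialist_hours

print(lambda_d_control(11, 8.0), lambda_d_ai(6, 9, 8.0))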

One of the secondary outcomes from this study was λ for all patients (both with and without diabetes), measured as the number of completed care encounters per hour per specialist by counting all patients presenting to the DECF specialty clinic on study days, including those without diabetes, for control (λ_C) and intervention (λ_AI) days. Complexity-adjusted specialist productivity λ_ca was calculated for the intervention and control arms by multiplying λ_d,C and λ_d,AI by the average patient complexity c̄.

During each clinic day, the study personnel recorded the day of the week and the number of hours each specialist participant spent in the clinic, starting with the first consultation in the morning and ending when the examination of the last patient of the day was completed, including any time spent ordering and reviewing diagnostic tests and scheduling future treatments. Work breaks, time spent performing procedures, and other duties performed outside of the clinic were excluded. Study personnel obtained the number of completed clinic visits from the DECF patient information system after each clinic day.

At baseline, specialist participants provided information on demographic characteristics, years in specialty practice and patient volume. They also completed a questionnaire at the end of the study, indicating their agreement (5-point Likert scale, strongly agree to strongly disagree) with the following statements regarding autonomous AI: (1) saves time in clinics, (2) allows time to be focused on patients requiring specialist care, (3) increases the number of procedures and surgeries, and (4) improves DED screening.

Other secondary outcomes were (1) patient satisfaction; (2) the number of DED treatments scheduled per day; and (3) the complexity of patient participants. Patient and provider willingness to pay for AI was a preregistered outcome, but upon further review by the Bangladesh Medical Research Council, these data were removed on its recommendation. The complexity score for each patient was calculated by a masked United Kingdom National Health Service grader using the International Grading system (a level 4 reference standard24), adapted from the Wilkinson et al. International Clinical Diabetic Retinopathy and Diabetic Macular Edema Severity Scales31 (no DED = 0 points, mild non-proliferative DED = 0 points, moderate or severe non-proliferative DED = 1 point, proliferative DED = 3 points, and diabetic macular edema = 2 points). The complexity score was summed across both eyes.
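A small sketch of the per-patient complexity score, using the point values stated above; the string labels are an assumed encoding of the grader's output rather than the grading system's own terms.

POINTS = {
    "no DED": 0,
    "mild non-proliferative DED": 0,
    "moderate or severe non-proliferative DED": 1,
    "proliferative DED": 3,
    "diabetic macular edema": 2,
}

def complexity_score(findings_right_eye, findings_left_eye):
    """Sum the points for all graded findings, across both eyes."""
    return sum(POINTS[finding] for finding in findings_right_eye + findings_left_eye)

print(complexity_score(["moderate or severe non-proliferative DED", "diabetic macular edema"],
                       ["no DED"]))   # -> 3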

The null hypothesis was that the primary outcome parameter, λ_d, would not differ significantly between the study groups. The intra-cluster correlation coefficient (ICC) between patients within a particular cluster (clinic day) was estimated at 0.15, based on pilot data from the clinic. At 80% power, a two-sided alpha of 5%, a cluster size of eight patients per clinic day, and an estimated control-group mean of 1.34 specialist clinic visits per hour (based on clinic data from January to March 2021), a sample size of 924 patients with completed clinically appropriate retina care encounters (462 in each of the two study groups) was sufficient to detect a between-group difference of 0.34 completed care encounters per hour per specialist (equivalent to a 25% increase in productivity, λ_d,AI) with autonomous AI.
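The stated sample size can be sanity-checked with the standard two-sample formula inflated by a cluster design effect. The within-group standard deviation below is an assumed placeholder, since it is not quoted in the text, so the result only approximates the published figure of 462 per group.

from scipy.stats import norm

def cluster_sample_size(delta, sigma, icc, cluster_size, alpha=0.05, power=0.80):
    """Patients per group for a two-sample comparison, inflated for clustering."""
    z_alpha = norm.ppf(1 - alpha / 2)                 # 1.96 for a two-sided 5% test
    z_beta = norm.ppf(power)                          # 0.84 for 80% power
    n_unadjusted = 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2
    design_effect = 1 + (cluster_size - 1) * icc      # inflation for clinic-day clusters
    return n_unadjusted * design_effect

# delta = 0.34 encounters/hour, ICC = 0.15, 8 patients per clinic day; sigma is assumed.
print(cluster_sample_size(delta=0.34, sigma=1.3, icc=0.15, cluster_size=8))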

Study data were entered into Microsoft Excel 365 (Redmond, WA, USA) by the operators and the research coordinator in DECF. Data entry errors were corrected by the Orbis program manager in the US (NW), who remained masked to study group assignment.

Frequencies and percentages were used to describe patient participant characteristics for the two study groups. Age, as a continuous variable, was summarized with the mean and standard deviation. The number of treatments and the complexity score were compared with the Wilcoxon rank-sum test since they were not normally distributed. The primary outcome was normally distributed and was compared between study groups using a two-sided Student's t-test, and 95% confidence intervals around these estimates were calculated.
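In Python, the comparisons described above look roughly like the following; the arrays are simulated placeholders, not study data.

import numpy as np
from scipy import stats

control = np.array([1.1, 1.4, 1.2, 1.6, 1.3])        # encounters/hour on control days
intervention = np.array([1.5, 1.8, 1.7, 1.4, 1.9])   # encounters/hour on AI days

# Primary outcome (approximately normal): two-sided Student's t-test.
t_stat, p_t = stats.ttest_ind(control, intervention)

# Skewed counts (treatments, complexity scores): Wilcoxon rank-sum test.
w_stat, p_w = stats.ranksums(control, intervention)

print(t_stat, p_t, w_stat, p_w)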

The robustness of the primary outcome was tested using linear regression modeling with generalized estimating equations that included clustering effects of clinic days. The adjustment for clustering of days since the beginning of the trial used an autoregressive first-order covariance structure, since days closer together were expected to be more highly correlated. Residuals were assessed to confirm that a linear model fit the rate outcome. Associations between the outcome and the potential confounders of patient age, sex, education, income, complexity score, clinic day of the week, and autonomous AI output were assessed. A sensitivity analysis with multivariable modeling included patient age and sex, plus variables with p-values < 0.10 in the univariate analysis. All statistical analyses were performed using SAS version 9.4 (SAS Institute, Cary, North Carolina).
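A rough sketch of such a robustness check with statsmodels follows. The data are simulated, the clustering layout (calendar-month clusters with an AR(1) working covariance over trial day) is an assumption made for illustration, and this is not the study's SAS specification.

import numpy as np
import statsmodels.api as sm
from statsmodels.genmod.cov_struct import Autoregressive

rng = np.random.default_rng(0)
n_days = 80
day = np.arange(n_days, dtype=float)                   # days since the start of the trial
month = day // 20                                      # assumed cluster: roughly one month
arm = rng.integers(0, 2, size=n_days)                  # 0 = control day, 1 = AI day
rate = 1.34 + 0.3 * arm + rng.normal(0, 0.4, n_days)   # simulated encounters/hour

exog = sm.add_constant(arm.astype(float))              # intercept + study-arm indicator
model = sm.GEE(rate, exog, groups=month, time=day,
               cov_struct=Autoregressive(),            # AR(1): nearby days more correlated
               family=sm.families.Gaussian())
print(model.fit().summary())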

Read more from the original source:
Autonomous artificial intelligence increases real-world specialist ... - Nature.com

Artificial intelligence in veterinary medicine: What are the ethical and … – American Veterinary Medical Association

Artificial intelligence (AI) and machine learning, a type of AI that includes deep learning, which learns representations of data with multiple levels of abstraction, are emerging technologies with the potential to change how veterinary medicine is practiced. They have been developed to improve predictive analytics and diagnostic performance, thus supporting decision-making when practitioners analyze medical images. But unlike in human medicine, no premarket screening of AI tools is required for veterinary medicine.

This raises important ethical and legal considerations, particularly when it comes to conditions with a poor prognosis where such interpretations may lead to a decision to euthanize, and makes it even more vital for the veterinary profession to develop best practices to protect care teams, patients, and clients.

That's according to Dr. Eli Cohen, a clinical professor of diagnostic imaging at the North Carolina State College of Veterinary Medicine. He presented the webinar "Do No Harm: Ethical and Legal Implications of A.I.," which debuted in late August on AVMA Axon, AVMA's digital education platform.

During the presentation, he explored the potential of AI to increase efficiency and accuracy throughout radiology, but also acknowledged its biases and risks.

The use of AI in clinical diagnostic imaging practice will continue to grow, largely because much of the data (radiographs, ultrasound, CT, MRI, and nuclear medicine) and the corresponding reports are in digital form, according to a Currents in One Health paper published in JAVMA in May 2022.

Dr. Ryan Appleby, assistant professor at the University of Guelph Ontario Veterinary College, who authored the paper, said artificial intelligence can be a great help in expediting tasks.

For example, AI can be used to automatically rotate or position digital radiographs, produce hanging protocols (instructions for how to arrange images for optimal viewing), or call up report templates based on the body parts included in the study.

More generally, AI can triage workflows by taking a first pass at various imaging studies and moving more critical patients to the top of the queue, said Dr. Appleby, who is chair of the American College of Veterinary Radiology's (ACVR) Artificial Intelligence Committee.

That said, when it comes to interpreting radiographs, not only does AI need to identify common presentations of a disease, it must also flag borderline cases to ensure patients are diagnosed accurately and for the tool to be useful.

"As a specialist, I'm there for the subset of times when there is something unusual," Dr. Cohen said, who is co-owner of Dragonfly Imaging, a teleradiology company, where he serves as a radiologist. "While AI will get better, it's not perfect. We need to be able to troubleshoot it when it doesn't perform appropriately."

Developers of medical devices for humans must obtain Food and Drug Administration (FDA) approval and permission to sell their products in the U.S., and the FDA classifies artificial intelligence- and machine learning-enabled tools for human medicine as medical devices.

However, companies developing medical devices for animals are not required to undergo a premarket screening, unlike those developing devices for people. The ACVR has expressed concern about the lack of oversight for software used to read radiographs.

"It is logical that if the FDA provides guidelines and oversight of medical devices used on people, that similar measures should be in place for veterinary medical devices to help protect our pets," said Dr. Tod Drost, executive director of the American College of Veterinary Radiology. "The goal is not to stifle innovation, but rather have a neutral third party to provide checks and balances to the development of these new technologies."

Massive amounts of data are needed to train machine-learning algorithms, and training images must be annotated manually. Because AI developers and companies are not regulated, they are not required to disclose how their products were trained or validated. Many of these algorithms are therefore described as operating in a "black box."

"That raises pretty relevant ethical considerations if we're using these to make diagnoses and perform treatments," Dr. Cohen said.

Because AI doesn't have a conscience, he said, those who are developing and using AI need to have a conscience and can't afford to be indifferent. "AI might be smart, but that doesn't mean it's ethical," he said.

In the case of black-box medicine, "there exists no expert who can provide practitioners with useful causal or mechanistic explanations of the systems' internal decision procedures," according to a study published July 14, 2022, in Frontiers.

Dr. Cohen says, "As we adopt AI and bring it into veterinary medicine in a prudent and intentional way, the new best practice ideally would be leveraging human expertise and AI together as opposed to replacing humans with AI."

He suggested having a domain expert involved in all stages of AI, from product development, validation, and testing to clinical use, error assessment, and oversight of these products.

The consensus of multiple leading radiology societies, including the American College of Radiology and Society for Imaging Informatics in Medicine, is that ethical use of AI in radiology should promote well-being and minimize harm.

"It is important that veterinary professionals take an active role in making medicine safer as use of artificial intelligence becomes more common. Veterinarians will hopefully learn the strengths and weaknesses of this new diagnostic tool by reviewing current literature and attending continuing education presentations," Dr. Appleby said.

Dr. Cohen recommends veterinarians obtain owner consent before using AI in decision making, particularly if the case involves a consult or referral. And during the decision-making process, practitioners should be vigilant about AI providing a diagnosis that exacerbates human and cognitive biases.

"We need to be very sure that when we choose to make that decision, that it is as validated and indicated as possible," Dr. Cohen said.

According to a 2022 Veterinary Radiology & Ultrasound article written by Dr. Cohen, if not carefully overseen, AI has the potential to cause harm. For example, an AI product could produce a false-positive diagnosis, leading to unnecessary tests or interventions, or produce false-negative results, possibly delaying diagnosis and care. It could also be applied to inappropriate datasets or populations, such as applying an algorithm trained on small animal cases to an ultrasound of a horse.

He added that veterinary professionals need to consider if it is ethical to shift responsibility to general practitioners, emergency veterinarians, or non-imaging specialists who use a product whose accuracy is not published or otherwise known.

"How do we make sure there is appropriate oversight to protect our colleagues, our patients, and our clients, and make sure we're not asleep at the wheel as we usher in this new tech and adopt it responsibly?" Dr. Cohen asked.

See the article here:
Artificial intelligence in veterinary medicine: What are the ethical and ... - American Veterinary Medical Association