The Vulnerability of AI Systems May Explain Why Russia Isn’t Using Them Extensively in Ukraine – Forbes
Output of an artificial intelligence system from Google Vision performing facial recognition on a photograph of a man in San Ramon, California, November 22, 2019. (Photo by Smith Collection/Gado/Getty Images)
The news that Ukraine is using facial recognition software to uncover Russian assailants and identify Ukrainians killed in the ongoing war is noteworthy largely because it's one of the few documented uses of artificial intelligence in the conflict. A Georgetown University think tank is trying to figure out why such uses are so rare while advising U.S. policymakers of the risks of AI.
The CEO of the controversial American facial recognition company Clearview AI told Reuters that Ukraine's defense ministry began using its imaging software Saturday after Clearview offered it for free. The reportedly powerful recognition tool relies on artificial intelligence algorithms and a massive quantity of image training data scraped from social media and the internet.
But aside from Russian influence campaigns, with their much-discussed deepfakes and misinformation-spreading bots, the lack of known tactical use (at least publicly) of AI by the Russian military has surprised many observers. Andrew Lohn isn't one of them.
Lohn, a senior fellow with Georgetown University's Center for Security and Emerging Technology (CSET), works on its Cyber-AI Project, which is seeking to draw policymakers' attention to the growing body of academic research showing that AI and machine-learning (ML) algorithms can be attacked in a variety of basic, readily exploitable ways.
"We have perhaps the most aggressive cyber actor in the world in Russia, who has twice turned off the power to Ukraine and used cyber-attacks in Georgia more than a decade ago. Most of us expected the digital domain to play a much larger role. It's been small so far," Lohn says.
"We have a whole bunch of hypotheses [for limited AI use] but we don't have answers. Our program is trying to collect all the information we can from this encounter to figure out which are most likely."
The hypotheses range from the potential effectiveness of Ukrainian cyber and counter-information operations, to an unexpected shortfall in Russian preparedness for digital warfare in Ukraine, to Russia's need to preserve or simplify the digital operating environment for its own tactical reasons.
All probably play some role, Lohn believes, but just as crucial may be a dawning recognition of the limits and vulnerability of AI/ML. The willingness to deploy AI tools in combat is a confidence game.
Junk In, Junk Out
Artificial intelligence and machine learning require vast amounts of data, both for training and to interpret for alerts, insights or action. Even when AI/ML systems have access to an unimpeded base of data, they are only as good as the information and assumptions that underlie them. If for no other reason than natural variability, both can be significantly flawed. "Whether AI/ML systems work as advertised is a huge question," Lohn acknowledges.
The tech community refers to unanticipated information as "out of distribution" data. "AI/ML may perform at what is deemed to be an acceptable level in a laboratory or in otherwise controlled conditions," Lohn explains. "Then when you throw it into the real world, some of what it experiences is different in some way. You don't know how well it will perform in those circumstances."
In circumstances where life, death and military objectives are at stake, having confidence in the performance of artificial intelligence in the face of disrupted, deceptive, often random data is a tough ask.
Lohn recently wrote a paper assessing the performance of AI/ML when such systems scoop in out-of-distribution data. While their performance doesn't fall off quite as quickly as he anticipated, he says that if they operate in an environment where "there's a lot of conflicting data, they're garbage."
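The falloff Lohn describes can be illustrated with a toy model (this sketch is not from his paper; the classifier, numbers and drift pattern are invented for illustration). A simple classifier keeps its "lab" accuracy on data that matches its training distribution, then degrades steadily as the test data drifts away from it:

```python
import numpy as np

# Toy out-of-distribution demo: a classifier trained on two classes
# centered at -2 and +2 uses a fixed decision boundary at 0. We then
# test it on data whose distribution has drifted to the right while
# the boundary stays where training put it.

rng = np.random.default_rng(3)

def accuracy(shift: float) -> float:
    """Accuracy when both test classes drift `shift` units rightward."""
    a = rng.normal(-2.0 + shift, 1.0, size=2000)  # true class A samples
    b = rng.normal(+2.0 + shift, 1.0, size=2000)  # true class B samples
    correct = np.sum(a < 0) + np.sum(b >= 0)      # boundary fixed at 0
    return correct / 4000.0

for shift in (0.0, 1.0, 2.0, 3.0):
    print(f"distribution shift {shift:.1f} -> accuracy {accuracy(shift):.2%}")
```

With no shift the model scores in the high 90s; by a shift of three standard deviations it is close to guessing, which is the "garbage" regime Lohn describes.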
He also points out that the accuracy of AI/ML looks impressively high only when measured against low expectations. For example, image classifiers can work at 94%, 98% or 99.9% accuracy. The numbers are striking until one considers that safety-critical systems like cars, airplanes, healthcare devices and weapons are typically certified out to five or six more decimal places (99.999999%).
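The gap between those two standards is easy to understate. A quick calculation (illustrative arithmetic, not from the article) converts each accuracy level into expected errors per million inputs:

```python
# Expected misclassifications per one million inputs at the accuracy
# levels discussed: typical image classifiers vs. the certification
# targets common for safety-critical systems.

def errors_per_million(accuracy: float) -> float:
    """Expected number of wrong outputs in 1,000,000 inputs."""
    return (1.0 - accuracy) * 1_000_000

for acc in (0.94, 0.98, 0.999, 0.99999999):
    print(f"{acc:>10.8f} accuracy -> {errors_per_million(acc):>12.2f} errors per million")
```

A 94%-accurate classifier is wrong 60,000 times per million inputs; a system certified to 99.999999% is expected to err about once per hundred million, a difference of more than six orders of magnitude.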
Lohn says AI/ML systems may still be better than humans at some tasks, but the AI/ML community has yet to figure out what accuracy standards to put in place for system components. "Testing for AI systems is very challenging," he adds.
For a start, the artificial intelligence development community lacks a test culture like the one so familiar in military aerospace, land, maritime, space and weapons programs: a test-safety regime that holistically assesses the systems of systems those platforms comprise.
The absence of such a back end, combined with specific conditions in Ukraine, may go some distance toward explaining the limited application of AI/ML on the battlefield. Alongside it lies the very real vulnerability of AI/ML to the compromised information and active manipulation that adversaries already seek to feed it.
Bad Data, Spoofed Data & Classical Hacks
Attacking AI/ML systems isn't hard. It doesn't even require access to their software or databases. Age-old deceptions like camouflage, subtle changes to the visual environment or randomized data can be enough to throw off artificial intelligence.
As a recent article in the Armed Forces Communications and Electronics Association's (AFCEA) magazine noted, researchers from Chinese technology giant Tencent managed to get a Tesla sedan's Autopilot (self-driving) feature to switch lanes into oncoming traffic simply by placing inconspicuous stickers on the roadway. McAfee security researchers used similarly discreet stickers on speed limit signs to get a Tesla to speed up to 85 miles per hour in a 35-mile-per-hour zone.
An Israeli soldier is seen during a military exercise in the Israeli Arab village of Abu Gosh on October 20, 2013. (Photo by Lior Mizrahi/Getty Images)
Such deceptions have probably already been examined and used by militaries and other threat actors, Lohn says, but the AI/ML community is reluctant to openly discuss exploits that can warp its technology. The quirk of digital AI/ML systems is that their ability to sift quickly through vast data sets, from images to electromagnetic signals, is a feature that can be used against them.
"It's like coming up with an optical illusion that tricks a human, except with a machine you get to try it a million times within a second and then determine what's the best way to effect this optical trick," Lohn says.
The fact that AI/ML systems tend to be optimized to zero in on certain data to bolster their accuracy may also be problematic.
"We're finding that [AI/ML] systems may be performing so well because they're looking for features that are not resilient," Lohn explains. "Humans have learned not to pay attention to things that aren't reliable. Machines see something in the corner that gives them high accuracy, something humans miss or have chosen not to see. But it's easy to trick."
The ability to spoof AI/ML from the outside is joined by the ability to attack its deployment pipeline. The supply-chain databases on which AI/ML relies are often open public image collections or software libraries hosted on platforms like GitHub.
"Anyone can contribute to these big public databases in many instances," Lohn says. "So there are avenues [to mislead AI] without even having to infiltrate."
The National Security Agency has recognized the potential of such data poisoning. In January, Neal Ziring, director of NSA's Cybersecurity Directorate, explained during a Billington CyberSecurity webinar that research into detecting data poisoning or other cyber attacks is not mature. Some attacks work by simply seeding specially crafted images into AI/ML training sets that have been harvested from social media or other platforms.
According to Ziring, a doctored image can be indistinguishable to human eyes from a genuine image. Poisoned images typically contain data that can train the AI/ML to misidentify whole categories of items.
"The mathematics of these systems, depending on what type of model you're using, can be very susceptible to shifts in the way recognition or classification is done, based on even a small number of training items," he explained.
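The sensitivity Ziring describes is easy to demonstrate on a toy model (this example is invented for illustration and is not from the webinar): a nearest-centroid classifier whose decision boundary shifts badly after an attacker seeds just five mislabeled points into a 100-point training set.

```python
import numpy as np

# Toy data-poisoning demo: two well-separated 1-D classes, then the
# same training set with a handful of extreme points falsely labeled
# as class A. The classifier's boundary moves far enough to start
# misclassifying genuine class-B examples.

rng = np.random.default_rng(0)
clean_a = rng.normal(0.0, 0.5, size=50)   # class A clustered near 0
clean_b = rng.normal(4.0, 0.5, size=50)   # class B clustered near 4

def centroid_boundary(a_points, b_points):
    """Decision boundary of a 1-D nearest-centroid classifier."""
    return (a_points.mean() + b_points.mean()) / 2.0

clean = centroid_boundary(clean_a, clean_b)

# Attacker injects 5 points far out at 40.0, labeled as class A.
poisoned_a = np.concatenate([clean_a, np.full(5, 40.0)])
shifted = centroid_boundary(poisoned_a, clean_b)

print(f"boundary before poisoning: {clean:.2f}")
print(f"boundary after  poisoning: {shifted:.2f}")
```

Five poisoned items out of 105 drag the boundary from roughly 2.0 to nearly 4.0, so a large fraction of legitimate class-B inputs are now misread, the "small number of training items" effect Ziring warns about.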
Stanford cryptography professor Dan Boneh told AFCEA that one technique for crafting poisoned images is known as the fast gradient sign method (FGSM). The method identifies the data points in a training image that matter most to the model, letting an attacker make targeted pixel-level changes, called perturbations, that turn the image into an "adversarial example": an input the AI/ML misidentifies. A single corrupt image in a training set can be enough to poison an algorithm, causing misidentification of thousands of images.
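The gradient-sign step at the heart of FGSM fits in a few lines. This is a deliberately tiny sketch, not an attack on any real system: the "classifier" is a toy logistic regression with made-up weights, but the perturbation rule is the standard one, nudging every input dimension by epsilon in the direction that increases the model's loss.

```python
import numpy as np

# Minimal FGSM sketch against a toy logistic-regression classifier.
# Real attacks apply the same rule to deep networks: perturb each
# pixel by epsilon in the sign of the loss gradient w.r.t. the input.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
w = rng.normal(size=16)   # fixed "trained" weights (illustrative)
x = rng.normal(size=16)   # a clean 16-"pixel" input
y = 1.0                   # its true label

def predict(x):
    return sigmoid(w @ x)

# For logistic regression, the gradient of the cross-entropy loss
# with respect to the input x is (p - y) * w.
grad_x = (predict(x) - y) * w

# FGSM step: move every dimension by epsilon in the gradient's sign.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)

print(f"confidence on clean input:       {predict(x):.3f}")
print(f"confidence on adversarial input: {predict(x_adv):.3f}")
```

The perturbation is small and uniform per pixel, yet the model's confidence in the true class drops sharply, because the sign of the gradient concentrates the whole perturbation budget on pushing the loss upward.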
FGSM attacks are "white-box" attacks, in which the attacker has access to the source code of the AI/ML. They can be conducted on open-source AI/ML, for which there are several publicly accessible repositories.
"You typically want to try the AI a bunch of times and tweak your inputs so they yield the maximum wrong answer," Lohn says. "It's easier to do if you have the AI itself and can [query] it. That's a white-box attack."
"If you don't have that, you can design your own AI that does the same [task] and you can query that a million times. You'll still be pretty effective at [inducing] the wrong answers. That's a black-box attack. It's surprisingly effective."
Black-box attacks, in which the attacker only has access to the AI/ML's inputs, training data and outputs, make it harder to generate a specific desired wrong answer. But "they're effective at producing random misinterpretation, creating chaos," Lohn explains.
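The surrogate-model approach Lohn describes can be sketched end to end (everything here is a toy: both "models" are logistic regressions with invented weights, and the names are illustrative). The attacker never sees the target's weights; they harvest its input/output behavior, fit their own stand-in model, attack the stand-in with gradients, and check whether the adversarial input transfers.

```python
import numpy as np

# Black-box transfer attack sketch: query the hidden target model,
# train a surrogate on its answers, run FGSM against the surrogate,
# then test the crafted input on the real target.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
w_target = rng.normal(size=8)        # hidden from the attacker

def query_target(x):
    """The attacker's only access: inputs in, scores out."""
    return sigmoid(w_target @ x)

# 1. Harvest labeled behavior by querying the target many times.
X = rng.normal(size=(500, 8))
y = np.array([query_target(xi) > 0.5 for xi in X], dtype=float)

# 2. Fit a surrogate by gradient descent on cross-entropy loss.
w_sub = np.zeros(8)
for _ in range(200):
    p = sigmoid(X @ w_sub)
    w_sub -= 0.1 * (X.T @ (p - y)) / len(X)

# 3. FGSM against the surrogate, then try it on the real target.
x = rng.normal(size=8)
grad = (sigmoid(w_sub @ x) - 1.0) * w_sub   # push away from class 1
x_adv = x + 0.5 * np.sign(grad)

print(f"target score on clean input:       {query_target(x):.3f}")
print(f"target score on transferred input: {query_target(x_adv):.3f}")
```

The adversarial input was crafted entirely against the surrogate, yet it lowers the real target's score too, a small-scale version of the transferability that makes black-box attacks "surprisingly effective."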
DARPA has taken up the problem of increasingly complex attacks on AI/ML that dont require inside access/knowledge of the systems being threatened. It recently launched a program called Guaranteeing AI Robustness against Deception (GARD), aimed at the development of theoretical foundations for defensible ML and the creation and testing of defensible systems.
More classical exploits, wherein attackers seek to penetrate and manipulate the software and networks that AI/ML run on, remain a concern. The tech firms and defense contractors crafting artificial intelligence systems for the military have themselves been targets of active hacking and espionage for years. While Lohn says there has been less reporting of algorithm and software manipulation, "that would potentially be doable as well."
"It may be harder for an adversary to get in and change things without being noticed if the defender is careful, but it's still possible."
Since 2018, the Army Research Laboratory (ARL), along with research partners in the Internet of Battlefield Things Collaborative Research Alliance, has looked at methods to harden the Army's machine learning algorithms and make them less susceptible to adversarial machine-learning techniques. In 2019 the collaborative developed a tool it calls the Attribution-Based Confidence Metric for Deep Neural Networks to provide a sort of quality assurance for applied AI/ML.
Despite the work, ARL scientist Brian Jalaian told the lab's public affairs office, "While we had some success, we did not have an approach to detect the strongest state-of-the-art attacks such as [adversarial] patches that add noise to imagery, such that they lead to incorrect predictions."
If the U.S. AI/ML community is facing such problems, the Russians probably are too. Andrew Lohn acknowledges that there are few standards for AI/ML development, testing and performance, certainly nothing like the Cybersecurity Maturity Model Certification (CMMC) that the Defense Department has adopted for its contractors.
Lohn and CSET are trying to communicate these issues to U.S. policymakers not to dissuade the deployment of AI/ML systems, Lohn stresses, but to make them aware of the limitations and operational risks (including ethical considerations) of employing artificial intelligence.
Thus far, he says, policymakers are difficult to paint with a broad brush. "Some of those I've talked with are gung-ho, others are very reticent. I think they're beginning to become more aware of the risks and concerns."
He also points out that the progress made in AI/ML over the last couple of decades may be slowing. In another recent paper, he concluded that advances in the formulation of new algorithms have been overshadowed by advances in computational power, which has been the driving force in AI/ML development.
"We've figured out how to string together more computers to do a [computational] run. For a variety of reasons, it looks like we're basically at the edge of our ability to do that. We may already be experiencing a breakdown in progress."
Policymakers looking at Ukraine, and at the world before Russia's invasion, were already asking about the reliability of AI/ML for defense applications, trying to gauge the level of confidence they should place in it. Lohn says he's basically been telling them the following:
"Self-driving cars can do some things that are pretty impressive. They also have giant limitations. A battlefield is different. If you're in a permissive environment with an application similar to existing commercial applications that have proven successful, then you're probably going to have good odds. If you're in a non-permissive environment, you're accepting a lot of risk."