Machine learning security needs new perspectives and incentives – TechTalks
At this year's International Conference on Learning Representations (ICLR), a team of researchers from the University of Maryland presented an attack technique meant to slow down deep learning models that have been optimized for fast and sensitive operations. The attack, aptly named DeepSloth, targets adaptive deep neural networks, a range of deep learning architectures that cut down computations to speed up processing.
Recent years have seen growing interest in the security of machine learning and deep learning, and there are numerous papers and techniques on hacking and defending neural networks. But one thing made DeepSloth particularly interesting: The researchers at the University of Maryland were presenting a vulnerability in a technique they themselves had developed two years earlier.
In some ways, the story of DeepSloth illustrates the challenges that the machine learning community faces. On the one hand, many researchers and developers are racing to make deep learning available to different applications. On the other hand, their innovations cause new challenges of their own. And they need to actively seek out and address those challenges before they cause irreparable damage.
One of the biggest hurdles of deep learning is the computational cost of training and running deep neural networks. Many deep learning models require huge amounts of memory and processing power, and therefore they can only run on servers that have abundant resources. This makes them unusable for applications that require all computations and data to remain on edge devices, or that need real-time inference and can't afford the delay caused by sending their data to a cloud server.
In the past few years, machine learning researchers have developed several techniques to make neural networks less costly. One family of optimization techniques, called multi-exit architectures, stops computations when a neural network reaches acceptable accuracy. Experiments show that for many inputs, you don't need to go through every layer of the neural network to reach a conclusive decision. Multi-exit neural networks save computation resources by bypassing the calculations of the remaining layers once they become confident about their results.
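The idea can be sketched in a few lines. The following toy model is not the researchers' actual architecture; the weights are random stand-ins and the "blocks" are simple linear layers, but it shows the core mechanism: each exit checks its own confidence, and inference stops at the first exit that is confident enough.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy 3-block network: each block is a random linear layer, and each
# block has its own classifier head (an "exit"). Weights are random
# stand-ins, not a trained model.
blocks = [rng.normal(size=(8, 8)) for _ in range(3)]
heads = [rng.normal(size=(8, 4)) for _ in range(3)]

def multi_exit_forward(x, threshold=0.9):
    """Return (predicted class, index of the exit that was used)."""
    h = x
    for i, (W, H) in enumerate(zip(blocks, heads)):
        h = np.tanh(h @ W)            # this block's computation
        probs = softmax(h @ H)        # this exit's prediction
        if probs.max() >= threshold:  # confident enough: stop early
            return int(probs.argmax()), i
    return int(probs.argmax()), len(blocks) - 1  # fell through to final exit

pred, exit_used = multi_exit_forward(rng.normal(size=8))
print(f"class {pred}, exited at block {exit_used}")
```

Lowering the confidence threshold can only make the network exit at the same block or earlier, which is exactly the latency/accuracy trade-off these architectures expose.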
In 2019, Yigitcan Kaya, a Ph.D. student in computer science at the University of Maryland, developed a multi-exit technique called the shallow-deep network, which could reduce the average inference cost of deep neural networks by up to 50 percent. Shallow-deep networks address the problem of "overthinking," where deep neural networks start to perform unneeded computations that result in wasteful energy consumption and degrade the model's performance. The shallow-deep network paper was accepted at the 2019 International Conference on Machine Learning (ICML).
"Early-exit models are a relatively new concept, but there is a growing interest," Tudor Dumitras, Kaya's research advisor and associate professor at the University of Maryland, told TechTalks. "This is because deep learning models are getting more and more expensive computationally, and researchers look for ways to make them more efficient."
Dumitras has a background in cybersecurity and is also a member of the Maryland Cybersecurity Center. In the past few years, he has been engaged in research on security threats to machine learning systems. But while a lot of the work in the field focuses on adversarial attacks, Dumitras and his colleagues were interested in finding all possible attack vectors that an adversary might use against machine learning systems. Their work has spanned various fields including hardware faults, cache side-channel attacks, software bugs, and other types of attacks on neural networks.
While working on the shallow-deep network with Kaya, Dumitras and his colleagues started thinking about the harmful ways the technique might be exploited.
"We then wondered if an adversary could force the system to overthink; in other words, we wanted to see if the latency and energy savings provided by early exit models like SDN are robust against attacks," he said.
Dumitras started exploring slowdown attacks on shallow-deep networks with Ionut Modoranu, then a cybersecurity research intern at the University of Maryland. When the initial work showed promising results, Kaya and Sanghyun Hong, another Ph.D. student at the University of Maryland, joined the effort. Their research eventually culminated in the DeepSloth attack.
Like adversarial attacks, DeepSloth relies on carefully crafted input that manipulates the behavior of machine learning systems. However, while classic adversarial examples force the target model to make wrong predictions, DeepSloth disrupts computations. The DeepSloth attack slows down shallow-deep networks by preventing them from making early exits and forcing them to carry out the full computations of all layers.
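The objective can be illustrated with a deliberately simplified sketch. This is not the paper's actual attack (DeepSloth uses gradient-based optimization over the exits' losses); here a crude random search stands in for the optimizer, and the two-exit model uses random stand-in weights. The point it demonstrates is the goal: find a small, bounded perturbation that suppresses confidence at every exit, so no early exit ever fires.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy two-exit model: random weights stand in for a trained network.
W1, H1 = rng.normal(size=(8, 8)), rng.normal(size=(8, 4))
W2, H2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 4))

def exit_confidences(x):
    """Max softmax probability at each exit."""
    h1 = np.tanh(x @ W1)
    h2 = np.tanh(h1 @ W2)
    return [softmax(h1 @ H1).max(), softmax(h2 @ H2).max()]

def slowdown_perturbation(x, eps=0.5, steps=300):
    """Random-search (a stand-in for gradient descent) for a bounded
    perturbation that lowers the confidence of every exit, preventing
    the input from exiting early."""
    delta = np.zeros_like(x)
    best = max(exit_confidences(x + delta))
    for _ in range(steps):
        cand = np.clip(delta + rng.normal(scale=0.1, size=x.shape), -eps, eps)
        score = max(exit_confidences(x + cand))  # worst-case confidence
        if score < best:                         # lower = later exits
            best, delta = score, cand
    return x + delta

x = rng.normal(size=8)
adv = slowdown_perturbation(x)
print(max(exit_confidences(x)), "->", max(exit_confidences(adv)))
```

If every exit's confidence stays below the threshold, the model runs all of its layers for that input, which is the slowdown the attack is after.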
"Slowdown attacks have the potential of negating the benefits of multi-exit architectures," Dumitras said. "These architectures can halve the energy consumption of a deep neural network model at inference time, and we showed that for any input we can craft a perturbation that wipes out those savings completely."
The researchers' findings show that the DeepSloth attack can reduce the efficacy of multi-exit neural networks by 90-100 percent. In the simplest scenario, this can cause a deep learning system to bleed memory and compute resources and become inefficient at serving users.
But in some cases, it can cause more serious harm. For example, one use of multi-exit architectures involves splitting a deep learning model between two endpoints. The first few layers of the neural network can be installed on an edge location, such as a wearable or IoT device. The deeper layers of the network are deployed on a cloud server. The edge side of the deep learning model takes care of the simple inputs that can be confidently computed in the first few layers. In cases where the edge side of the model does not reach a conclusive result, it defers further computations to the cloud.
In such a setting, the DeepSloth attack would force the deep learning model to send all inferences to the cloud. Aside from the extra energy and server resources wasted, the attack could have a much more destructive impact.
"In a scenario typical for IoT deployments, where the model is partitioned between edge devices and the cloud, DeepSloth amplifies the latency by 1.5-5X, negating the benefits of model partitioning," Dumitras said. "This could cause the edge device to miss critical deadlines, for instance in an elderly monitoring program that uses AI to quickly detect accidents and call for help if necessary."
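The partitioned deployment described above can be sketched as follows. This is a toy illustration with random stand-in weights, not a real edge/cloud system: the "edge" holds the first block and its exit head, and any input whose edge-exit confidence falls below the threshold is deferred to the "cloud" half. A slowdown attack aims to push every input into that cloud branch.

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Random stand-in weights: the "edge" device holds the first block plus
# an exit head; the "cloud" holds the rest of the (toy) network.
W_edge, H_edge = rng.normal(size=(8, 8)), rng.normal(size=(8, 4))
W_cloud, H_cloud = rng.normal(size=(8, 8)), rng.normal(size=(8, 4))

def classify(x, threshold=0.8):
    """Return (prediction, where the final answer was computed)."""
    h = np.tanh(x @ W_edge)
    probs = softmax(h @ H_edge)
    if probs.max() >= threshold:
        return int(probs.argmax()), "edge"   # confident: answer locally
    h = np.tanh(h @ W_cloud)                 # otherwise defer to the cloud
    return int(softmax(h @ H_cloud).argmax()), "cloud"

# Count offloads over a batch of random inputs; under a slowdown attack
# this count would be driven toward 100%.
where = [classify(rng.normal(size=8))[1] for _ in range(100)]
print(where.count("cloud"), "of 100 inputs deferred to the cloud")
```

Every deferred input pays the network round-trip on top of the cloud computation, which is why forcing deferrals can make an edge device miss real-time deadlines.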
While the researchers ran most of their tests on shallow-deep networks, they later found that the same technique would be effective on other types of early-exit models.
As with most works on machine learning security, the researchers first assumed that an attacker has full knowledge of the target model and has unlimited computing resources to craft DeepSloth attacks. But the criticality of an attack also depends on whether it can be staged in practical settings, where the adversary has partial knowledge of the target and limited resources.
"In most adversarial attacks, the attacker needs to have full access to the model itself; basically, they have an exact copy of the victim model," Kaya told TechTalks. "This, of course, is not practical in many settings where the victim model is protected from outside, for example with an API like Google Vision AI."
To develop a realistic evaluation of the attacker, the researchers simulated an adversary who doesn't have full knowledge of the target deep learning model. Instead, the attacker has a "surrogate" model on which he tests and tunes the attack. The attacker then transfers the attack to the actual target. The researchers trained surrogate models that have different neural network architectures, different training sets, and even different early-exit mechanisms.
"We find that the attacker that uses a surrogate can still cause slowdowns (between 20-50%) in the victim model," Kaya said.
Such transfer attacks are much more realistic than full-knowledge attacks, Kaya said. And as long as the adversary has a reasonable surrogate model, he will be able to attack a black-box model, such as a machine learning system served through a web API.
"Attacking a surrogate is effective because neural networks that perform similar tasks (e.g., object classification) tend to learn similar features (e.g., shapes, edges, colors)," Kaya said.
Dumitras says DeepSloth is just the first attack that works in this threat model, and he believes more devastating slowdown attacks will be discovered. He also pointed out that, aside from multi-exit architectures, other speed optimization mechanisms are vulnerable to slowdown attacks. His research team tested DeepSloth on SkipNet, a special optimization technique for convolutional neural networks (CNN). Their findings showed that DeepSloth examples crafted for multi-exit architecture also caused slowdowns in SkipNet models.
"This suggests that the two different mechanisms might share a deeper vulnerability, yet to be characterized rigorously," Dumitras said. "I believe that slowdown attacks may become an important threat in the future."
The researchers also believe that security must be baked into the machine learning research process.
"I don't think any researcher today who is doing work on machine learning is ignorant of the basic security problems. Nowadays even introductory deep learning courses include recent threat models like adversarial examples," Kaya said.
The problem, Kaya believes, has to do with adjusting incentives. "Progress is measured on standardized benchmarks and whoever develops a new technique uses these benchmarks and standard metrics to evaluate their method," he said, adding that reviewers who decide on the fate of a paper also look at whether the method is evaluated according to its claims on suitable benchmarks.
"Of course, when a measure becomes a target, it ceases to be a good measure," he said.
Kaya believes there should be a shift in the incentives of publications and academia. "Right now, academics have a luxury or burden to make perhaps unrealistic claims about the nature of their work," he says. If machine learning researchers acknowledge that their solution will never see the light of day, their paper might be rejected. But their research might serve other purposes.
For example, adversarial training causes large utility drops, has poor scalability, and is difficult to get right, limitations that are unacceptable for many machine learning applications. But Kaya points out that adversarial training can have benefits that have been overlooked, such as steering models toward becoming more interpretable.
One consequence of too much focus on benchmarks is that most machine learning researchers don't examine the implications of their work when it is applied to real-world settings.
"Our biggest problem is that we treat machine learning security as an academic problem right now. So the problems we study and the solutions we design are also academic," Kaya says. "We don't know if any real-world attacker is interested in using adversarial examples or any real-world practitioner in defending against them."
Kaya believes the machine learning community should promote and encourage research in understanding the actual adversaries of machine learning systems rather than dreaming up our own adversaries.
And finally, he says that authors of machine learning papers should be encouraged to do their homework and find ways to break their own solutions, as he and his colleagues did with the shallow-deep networks. And researchers should be explicit and clear about the limits and potential threats of their machine learning models and techniques.
"If we look at the papers proposing early-exit architectures, we see there's no effort to understand security risks although they claim that these solutions are of practical value," he says. "If an industry practitioner finds these papers and implements these solutions, they are not warned about what can go wrong. Although groups like ours try to expose potential problems, we are less visible to a practitioner who wants to use an early-exit model. Even including a paragraph about the potential risks involved in a solution goes a long way."