SoK: Efficiency Robustness of Dynamic Deep Learning Systems
Ravishka Rathnasuriya
34th USENIX Security Symposium (USENIX Security '25) · Day 2 · ML and AI Security 2
This article examines the critical and emerging field of **efficiency robustness** in **dynamic deep learning (DDL) systems**, based on the USENIX Security talk by Ravishka Rathnasuriya from the University of Texas at Dallas. The presentation introduces a novel perspective on adversarial machine learning, shifting focus from traditional accuracy-based attacks to **computational cost inflation**. As deep learning models become ubiquitous, particularly in real-time, resource-constrained, and edge environments, their efficiency is paramount.

The talk highlights a fundamental vulnerability: the very adaptivity that makes DDL systems efficient can be exploited by adversaries to force models into their most computationally expensive execution paths, leading to significant performance degradation, resource exhaustion, and potential denial-of-service (DoS) attacks.
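To make the attack surface concrete, here is a minimal sketch (not from the talk; all names and numbers are hypothetical) of a confidence-based early-exit model, one common form of DDL. A benign input exits at the first internal classifier, while an efficiency attack perturbs the input so that no exit is confident, forcing the full network to run and inflating latency and energy cost:

```python
# Hypothetical toy early-exit model: a chain of layers, each followed by an
# internal classifier. Inference stops at the first exit whose confidence
# clears the threshold; otherwise every layer runs.
def early_exit_depth(exit_confidences, threshold=0.9):
    """Return the number of layers executed for the given per-exit confidences."""
    for depth, conf in enumerate(exit_confidences, start=1):
        if conf >= threshold:
            return depth  # early exit: cheap path
    return len(exit_confidences)  # no exit fired: worst-case cost

# A benign input is classified confidently at the first exit...
benign = [0.95, 0.97, 0.99]
# ...while an adversarially perturbed input keeps every exit under the
# threshold, driving the model down its most expensive path.
adversarial = [0.40, 0.55, 0.60]

print(early_exit_depth(benign))       # cheap: 1 layer
print(early_exit_depth(adversarial))  # inflated: all 3 layers
```

The same pattern generalizes to other DDL mechanisms the SoK covers (dynamic routing, token skimming, adaptive inference): the adversary's objective is not misclassification but maximizing the cost term of the execution path.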
AI review
Solid SoK that fills a genuine gap by formalizing "efficiency robustness" as a first-class security property and building a coherent taxonomy across DDL attack surfaces. The contribution is real, but the format is inherently synthetic: this is a literature-organization exercise, not novel attack research, and the talk's impact depends on how much the speaker's framing adds beyond simply reading the paper.