Yes, One-Bit-Flip Matters! Universal DNN Model Inference Depletion with Runtime Code Fault Injection
Shaofeng Li, Xinyu Wang, Minhui Xue, Haojin Zhu, Zhi Zhang, Yansong Gao, Wen Wu, Xuemin (Sherman) Shen
33rd USENIX Security Symposium · Day 1 · USENIX Security '24
In their presentation at USENIX Security '24, Shaofeng Li and his co-authors unveiled a novel and alarming attack vector against Deep Neural Network (DNN) models, demonstrating that a single bit flip in the underlying machine learning library's code, injected at runtime, can cause catastrophic inference failures. Titled "Yes, One-Bit-Flip Matters! Universal DNN Model Inference Depletion with Runtime Code Fault Injection," the talk challenges conventional wisdom about DNN robustness and highlights a critical, often overlooked vulnerability in the hardware-software stack.
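To give intuition for why one flipped bit can be so destructive, the sketch below shows the effect of a single bit flip on an IEEE-754 double. This is purely illustrative and not the paper's mechanism (the attack flips a bit in the library's machine code at runtime, e.g. via Rowhammer, rather than in model data); the function name `flip_bit` is hypothetical.

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit (0..63) in the IEEE-754 binary64 encoding of x."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    bits ^= 1 << bit
    (y,) = struct.unpack("<d", struct.pack("<Q", bits))
    return y

# Flipping the most significant exponent bit (bit 62) of 0.5
# turns it into 2**1023, roughly 9e307 -- a ~10^308-fold change
# from a single bit.
print(flip_bit(0.5, 62))
```

A flip in instruction bytes is even less forgiving than this data-flip example: it can change an opcode or operand and derail every subsequent inference, which is what makes the attack "universal."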
AI review
This research demonstrates a genuinely novel and alarming attack vector against DNNs: a single bit flip in critical library code, triggered by Rowhammer from an unprivileged process, can cause universal inference depletion. It dispels the assumption that DNNs are robust to minor hardware faults and demands serious attention from the industry. This isn't just theory; it's a blueprint for catastrophic failure.