DeepTheft: Stealing DNN Model Architectures through Power Side Channel

Yansong Gao, Huming Qiu, Zhi Zhang, Binghui Wang, Hua Ma, Alsharif Abuadbba

IEEE Symposium on Security and Privacy 2024 · Day 3 · Continental Ballroom 5

As cloud-based machine learning services expand, Deep Neural Network (DNN) models are increasingly deployed to provide inference for a wide range of applications. While this paradigm offers scalability and accessibility, it also opens a new frontier for intellectual property theft and adversarial attacks. The talk "DeepTheft: Stealing DNN Model Architectures through Power Side Channel," presented by Yansong Gao and his collaborators at IEEE S&P 2024, introduces a learning-based framework that accurately recovers DNN model architectures, including layer types and layer-wise hyperparameters, by exploiting low-resolution power and frequency side channels on general-purpose CPUs.
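To make the attack surface concrete, here is a minimal sketch of how a coarse CPU power trace could be sampled on Linux through the powercap (Intel RAPL) sysfs interface, one plausible software-accessible power channel on general-purpose CPUs. The path, sampling interval, and helper names are illustrative assumptions, not the authors' actual collection pipeline.

```python
# Hedged sketch: sampling a low-resolution power trace from the Linux
# powercap (Intel RAPL) sysfs interface. Path and rate are illustrative,
# not the DeepTheft authors' exact measurement setup.
import time

RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package-0 energy counter

def read_energy_uj(path=RAPL_ENERGY):
    """Return the cumulative energy counter in microjoules, or None if unavailable."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except OSError:
        return None  # RAPL not exposed (non-Intel CPU, container, or restricted sysfs)

def energies_to_power(samples_uj, interval_s):
    """Convert successive energy readings (uJ) into average power (W) per interval."""
    return [(b - a) / (interval_s * 1e6) for a, b in zip(samples_uj, samples_uj[1:])]

def sample_power_trace(n_samples=100, interval_s=0.001):
    """Collect a coarse power trace, e.g. while a victim model runs inference."""
    samples = []
    for _ in range(n_samples + 1):
        e = read_energy_uj()
        if e is None:
            return []
        samples.append(e)
        time.sleep(interval_s)
    return energies_to_power(samples, interval_s)
```

Because the counter is cumulative energy rather than instantaneous power, the trace is derived from deltas between readings, which is one reason such channels are inherently low-resolution.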

AI review

This research presents DeepTheft, a highly accurate method for stealing DNN model architectures through low-resolution power and frequency side channels on general-purpose CPUs in cloud environments. Where prior power side-channel attacks typically needed high-resolution measurements or physical access, its two-step strategy and hybrid metamodel achieve near-complete architectural recovery from software-accessible channels, with significant implications for cloud providers and the protection of AI intellectual property in multi-tenant systems.
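The two-step strategy mentioned above can be pictured as a pipeline: first infer the sequence of layer types from the trace, then predict each layer's hyperparameters. The sketch below illustrates only that control flow; the segmentation heuristic and the threshold-based stand-ins for the learned models are hypothetical placeholders, not the authors' metamodel.

```python
# Hedged sketch of a two-step architecture-recovery pipeline:
# step 1 maps trace segments to layer types, step 2 predicts per-layer
# hyperparameters. The heuristics below are placeholders for learned models.

def segment_trace(trace, threshold=0.5):
    """Split a power trace into per-layer segments at low-power gaps."""
    segments, current = [], []
    for p in trace:
        if p < threshold and current:
            segments.append(current)
            current = []
        elif p >= threshold:
            current.append(p)
    if current:
        segments.append(current)
    return segments

def classify_layer(segment):
    """Step 1 (placeholder): map a segment to a layer type by its energy."""
    return "conv" if sum(segment) > 5.0 else "fc"

def predict_hyperparams(segment, layer_type):
    """Step 2 (placeholder): regress hyperparameters from segment statistics."""
    if layer_type == "conv":
        return {"kernel": 3, "channels": max(1, int(sum(segment)))}
    return {"units": max(1, int(10 * len(segment)))}

def recover_architecture(trace):
    """Run both steps over every segment to reconstruct the layer list."""
    layers = []
    for seg in segment_trace(trace):
        layer_type = classify_layer(seg)
        layers.append((layer_type, predict_hyperparams(seg, layer_type)))
    return layers
```

In the real attack, the classifier and the hyperparameter predictors would be trained networks operating on the raw trace, but the same step-1/step-2 decomposition applies.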

Watch on YouTube