DPAdapter: Improving Differentially Private Deep Learning through Noise Tolerance Pre-training

Zihao Wang, Zhikun Zhang, John Mitchell, Haixu Tang, XiaoFeng Wang

33rd USENIX Security Symposium (USENIX Security '24) · Day 1

In deep learning, where models are increasingly deployed to process sensitive information, ensuring data privacy has become paramount. This talk introduces **DPAdapter**, a novel methodology for significantly improving the utility of **differentially private machine learning (DPML)**, specifically in the context of **Differentially Private Stochastic Gradient Descent (DPSGD)**. The core innovation of DPAdapter is to optimize the *pre-training* phase of deep learning models so that they are inherently more robust to the noise injection that differential privacy requires.
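To make the noise injection concrete: DPSGD clips each example's gradient to a fixed L2 norm and adds calibrated Gaussian noise before the update. The sketch below is a minimal, hypothetical NumPy illustration of that mechanism (function name and signature are ours, not the talk's); real training would use a DP library such as Opacus or TensorFlow Privacy.

```python
import numpy as np

def dpsgd_step(per_example_grads, clip_norm, noise_multiplier, lr, params, rng):
    """One DPSGD update (illustrative sketch, not the authors' code).

    per_example_grads: array of shape (batch, dim), one gradient row per example.
    """
    # Clip: rescale each row so its L2 norm is at most clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    # Sum the clipped gradients, add Gaussian noise calibrated to the
    # clipping bound, then average over the batch.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=clipped.shape[1])
    noisy_mean = (clipped.sum(axis=0) + noise) / clipped.shape[0]
    return params - lr * noisy_mean
```

The clipping bound limits any single example's influence; the noise scale relative to that bound is what determines the privacy guarantee — and what degrades utility, which is the gap DPAdapter targets.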

AI review

This talk presents DPAdapter, a pre-training methodology that enhances the utility of differentially private machine learning (DPML) by producing models inherently more robust to the noise injection that differential privacy requires. By optimizing the loss landscape upstream, DPAdapter provides a foundational improvement, boosting accuracy by up to 16% while complementing existing DPML techniques — a step toward practical, high-utility privacy-preserving AI for MLaaS.
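The summary's "optimizing the loss landscape upstream" refers to pre-training toward flatter minima, where perturbing the parameters (as DP noise effectively does) costs less accuracy. One standard way to bias training toward flat minima is a sharpness-aware step: ascend to the worst-case point in a small ball, then descend using the gradient taken there. The sketch below is our hedged illustration of that idea, not DPAdapter's actual algorithm.

```python
import numpy as np

def sharpness_aware_step(loss_grad, w, rho, lr):
    """One sharpness-aware update (illustrative sketch).

    loss_grad: function returning the gradient of the loss at w.
    rho: radius of the L2 ball in which we seek the worst-case point.
    """
    g = loss_grad(w)
    # Ascend to the (first-order) worst-case perturbation within the ball.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Descend using the gradient evaluated at the perturbed point, which
    # penalizes sharp minima and steers training toward flat ones.
    return w - lr * loss_grad(w + eps)
```

A flat minimum found this way tolerates the later DPSGD noise better, which is the intuition behind noise-tolerance pre-training.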
