Gradients Look Alike: Sensitivity is Often Overestimated in DP-SGD

Anvith Thudi, Hengrui Jia, Casey Meehan, Ilia Shumailov, Nicolas Papernot

33rd USENIX Security Symposium · Day 1 · USENIX Security '24

In this USENIX Security '24 talk, Anvith Thudi presented a groundbreaking analysis challenging the conventional understanding of privacy guarantees in **Differentially Private Stochastic Gradient Descent (DP-SGD)**. The work, titled "Gradients Look Alike: Sensitivity is Often Overestimated in DP-SGD," introduces a novel data-dependent perspective, arguing that the worst-case privacy bounds typically derived for DP-SGD can be overly pessimistic for a significant portion of training data points. This research provides the first **per-instance differential privacy (DP) analysis** for DP-SGD, revealing that many individual data points are considerably harder to attack than the worst-case scenarios suggest.
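To ground the discussion, here is a minimal sketch of the standard DP-SGD update whose sensitivity the talk analyzes: each per-example gradient is clipped to an L2 norm bound, the batch is averaged, and Gaussian noise calibrated to the clipping bound is added. The function and parameter names are illustrative, not from the paper; the worst-case sensitivity argument assumes every example's gradient saturates the clipping bound, which is exactly the assumption the per-instance analysis relaxes.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD update: clip each per-example gradient, average, add noise.

    The worst-case sensitivity of the averaged gradient is clip_norm / batch_size;
    the talk's per-instance view observes that most examples' gradients sit well
    below the clipping bound, so their effective sensitivity is much smaller.
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        scale = min(1.0, clip_norm / (norm + 1e-12))  # rescale so ||g||_2 <= clip_norm
        clipped.append(g * scale)
    batch_size = len(clipped)
    mean_grad = np.mean(clipped, axis=0)
    # Gaussian noise scaled to the worst-case per-example contribution
    noise = rng.normal(0.0, noise_multiplier * clip_norm / batch_size,
                       size=mean_grad.shape)
    return mean_grad + noise
```

With `noise_multiplier=0` the step reduces to plain clipped-gradient averaging, which makes the clipping behavior easy to verify in isolation.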

AI review

This talk presents a per-instance differential privacy analysis for DP-SGD, challenging the conventional wisdom that worst-case bounds accurately reflect privacy leakage. By introducing sensitivity distributions and a novel composition theorem, the research shows that many data points are significantly more private than previously assumed, reshaping our understanding of privacy-utility trade-offs in machine learning.
