Less is More: Revisiting the Gaussian Mechanism for Differential Privacy
Tianxi Ji, Pan Li
33rd USENIX Security Symposium · Day 1 · USENIX Security '24
Differential Privacy (DP) stands as a foundational framework for privacy-preserving data analysis, machine learning, and AI. At its core, DP aims to quantify and limit the privacy loss incurred when statistical queries are performed on sensitive datasets. A cornerstone mechanism for achieving DP, particularly for numerical computations, is the **Gaussian mechanism**, which adds random noise drawn from a Gaussian distribution to query results. However, as demonstrated by Tianxi Ji and Pan Li in their USENIX Security '24 talk, "Less is More: Revisiting the Gaussian Mechanism for Differential Privacy," existing Gaussian mechanisms suffer from a significant limitation: their accuracy loss scales linearly with the dimensionality of the query, a phenomenon they term the "curse of dimensionality."
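The linear scaling the authors call the "curse of dimensionality" is easy to see empirically. The sketch below, a minimal illustration rather than anything from the talk, applies the classical Gaussian mechanism calibration (σ = Δ₂·√(2 ln(1.25/δ))/ε) to a d-dimensional query and measures the expected squared L2 error, which grows as d·σ²:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_sigma(l2_sensitivity, eps, delta):
    """Classical (eps, delta)-DP noise calibration for the Gaussian mechanism."""
    return l2_sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps

def mean_sq_error(d, trials=20000, eps=1.0, delta=1e-5):
    """Empirical E[||noise||^2] when i.i.d. N(0, sigma^2) noise is added
    to every coordinate of a d-dimensional query result."""
    sigma = gaussian_sigma(1.0, eps, delta)
    noise = rng.normal(0.0, sigma, size=(trials, d))
    return float(np.mean(np.sum(noise ** 2, axis=1)))

# Expected squared L2 error is d * sigma^2 -- linear in the dimension d.
for d in (10, 100, 1000):
    print(d, mean_sq_error(d))
```

Because each of the d coordinates independently receives N(0, σ²) noise, the total squared error is a sum of d terms of expectation σ², so doubling the dimension doubles the accuracy loss at a fixed privacy budget.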
AI review
This talk introduces the Rank-1 Singular Multivariate Gaussian (R1SMG) mechanism, a novel variant of the Gaussian mechanism for Differential Privacy. It fundamentally overturns the "curse of dimensionality" that plagues existing methods, achieving accuracy loss that decreases with dimensionality rather than increasing. This geometric re-evaluation of noise addition offers significantly improved utility and stability for high-dimensional private data release.