General-Purpose f-DP Estimation and Auditing in a Black-Box Setting
Önder Askin
34th USENIX Security Symposium (USENIX Security '25) · Day 2 · Privacy 1: Differential Privacy and Audit
This talk, presented by Önder Askin (Ruhr University Bochum), introduces novel methods for estimating and auditing **f-differential privacy (f-DP)** in a **black-box setting**. f-DP is an increasingly popular variant of differential privacy that expresses a mechanism's privacy as a trade-off curve between the type I and type II errors of an adversary trying to distinguish neighboring datasets; this makes its guarantees directly interpretable and well suited to designing complex privacy-preserving systems, particularly to auditing private machine learning models. The core challenge addressed is how to verify the privacy claims of a mechanism when its internal code or structure is unknown, a common scenario in real-world deployments involving proprietary software or third-party libraries.
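As an illustrative sketch only (not the paper's actual estimator), the black-box view of f-DP can be made concrete by Monte Carlo estimation of a trade-off curve: run the mechanism many times on two neighboring inputs, sweep a threshold test over the outputs, and record the empirical type I/type II errors. The example below uses a hypothetical Gaussian mechanism with sensitivity-to-noise ratio `mu`, for which the true trade-off curve is known in closed form, so the empirical curve can be checked against it.

```python
import math
import numpy as np

def phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def empirical_tradeoff(out_d, out_dprime, thresholds):
    """Black-box trade-off estimate for the threshold test 'reject if output > t':
    alpha(t) = P(reject | input D), beta(t) = P(accept | input D')."""
    alphas = np.array([(out_d > t).mean() for t in thresholds])
    betas = np.array([(out_dprime <= t).mean() for t in thresholds])
    return alphas, betas

# Hypothetical Gaussian mechanism: output ~ N(q(D), 1), with q(D) = 0, q(D') = mu
mu = 1.0
rng = np.random.default_rng(0)
n = 200_000
out_d = rng.normal(0.0, 1.0, n)      # mechanism runs on dataset D
out_dprime = rng.normal(mu, 1.0, n)  # mechanism runs on neighboring dataset D'

thresholds = np.linspace(-3.0, 4.0, 50)
alphas, betas = empirical_tradeoff(out_d, out_dprime, thresholds)

# For this mechanism the exact curve is G_mu: beta(t) = Phi(t - mu) at alpha(t) = 1 - Phi(t)
max_gap = max(abs(b - phi(t - mu)) for b, t in zip(betas, thresholds))
```

With enough runs the empirical type II errors track the theoretical curve closely (here `max_gap` stays well under Monte Carlo noise bounds). The paper's contribution goes beyond this naive sketch: simple threshold tests are generally suboptimal, and the actual method must handle unknown output distributions with statistical guarantees.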
AI review
Solid, technically grounded academic work that fills a real gap: black-box auditing of f-DP with statistical guarantees, directly applicable to the growing mess of "we use differential privacy" claims from ML vendors and cloud providers. The perturbed likelihood-ratio test (LRT) construction and the uniform convergence result are non-trivial contributions, and the auditing framework with confidence regions is exactly what the space needs.