Auditing $f$-differential privacy in one run

Saeed Mahloujifar, Luca Melis, Kamalika Chaudhuri

International Conference on Machine Learning 2025 · Oral

As state-of-the-art machine learning models grow increasingly vulnerable to sophisticated privacy attacks, **differential privacy (DP)** has emerged as the standard framework for robust theoretical guarantees. Verifying those guarantees in practice, particularly for large-scale models, remains challenging. This talk, presented by Saeed Mahloujifar of Meta with collaborators Luca Melis and Kamalika Chaudhuri, introduces a method for auditing **$f$-differential privacy** in a single training run, a significant advance in privacy verification.

AI review

A competent and honest contribution to the privacy auditing literature. Mahloujifar, Melis, and Chaudhuri extend the Steinke et al. one-run auditing framework to the f-DP setting, achieve tighter empirical lower bounds on DP-SGD, and resolve a specific open question about the Gaussian mechanism. The core idea — generalizing the guessing game to k samples per bucket and lifting the analysis into the f-DP tradeoff function space — is technically natural and the convergence result for the Gaussian mechanism is the strongest claim here. The work is well-situated in the literature and the…
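To make the "guessing game" concrete, here is a minimal toy sketch of a one-run membership guessing audit. It is an illustration only, not the paper's $f$-DP estimator: each canary is independently included with probability 1/2, the mechanism is stood in for by a Gaussian-noised per-canary score, the auditor guesses inclusion by thresholding, and a crude randomized-response-style bound converts guessing accuracy into an empirical epsilon lower bound. All parameter names and the scoring model here are illustrative assumptions.

```python
import math
import random


def one_run_guessing_game(m=1000, sigma=1.0, seed=0):
    """Toy one-run auditing guessing game (illustrative, not the paper's method).

    Each of m canaries is independently included with probability 1/2.
    The 'mechanism' releases one Gaussian-noised score per canary
    (mean 1.0 if the canary was included, 0.0 otherwise), and the
    auditor guesses inclusion by thresholding the score at 0.5.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(m):
        included = rng.random() < 0.5
        score = (1.0 if included else 0.0) + rng.gauss(0.0, sigma)
        guess = score > 0.5
        correct += guess == included
    acc = correct / m
    # Crude epsilon lower bound from guessing accuracy
    # (randomized-response style; the paper instead works with
    # f-DP tradeoff functions and buckets of k samples).
    eps_lb = math.log(acc / (1.0 - acc)) if 0.0 < acc < 1.0 else float("inf")
    return acc, eps_lb
```

In this single-run setup the auditor pays one training cost regardless of `m`; the paper's contribution is a tighter way to turn the guess counts into lower bounds by analyzing the game directly in tradeoff-function space rather than through a scalar accuracy statistic.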