Revisiting Differentially Private Hyper-parameter Tuning
Zihang Xiang
Network and Distributed System Security (NDSS) Symposium 2026 · Day 1
When training machine learning models with **differential privacy (DP)**, practitioners do not train just once: they run the training process many times with different hyperparameters (learning rates, clipping thresholds, batch sizes) and keep only the best model. This selection process itself can leak private information, but how much? This talk presents both a rigorous empirical **privacy audit** and an improved **theoretical analysis**, showing that previous upper bounds on the privacy cost of hyperparameter tuning were far looser than necessary. The new analysis cuts the estimated privacy cost by **more than 50%**, letting practitioners explore substantially more hyperparameter configurations within the same privacy budget.
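To make the object of study concrete, here is a minimal sketch of the random-stopping private tuning protocol analyzed in this line of work (Liu–Talwar / Papernot–Steinke style). The `train_dp` callable is hypothetical, standing in for one DP-SGD training run with a fixed per-run guarantee; the candidate grid and stopping parameter `gamma` are illustrative values, not the paper's.

```python
import random

# Illustrative hyperparameter grid (learning rate, clipping threshold).
CANDIDATES = [
    {"lr": lr, "clip": clip}
    for lr in (0.1, 0.5, 1.0)
    for clip in (0.5, 1.0)
]

def tune_dp(train_dp, gamma=0.1, seed=0):
    """Draw the number of runs K ~ Geometric(gamma), train K randomly
    chosen candidates, and release only the best model. The privacy cost
    of this *whole* procedure is what the talk audits and re-analyzes."""
    rng = random.Random(seed)
    best_model, best_score = None, float("-inf")
    while True:
        hp = rng.choice(CANDIDATES)   # uniform hyperparameter draw
        model, score = train_dp(hp)   # one DP training run; returns (model, score)
        if score > best_score:
            best_model, best_score = model, score
        if rng.random() < gamma:      # stop w.p. gamma, so E[K] = 1/gamma
            return best_model
```

Randomizing the number of runs K is the key design choice: releasing only the argmax of a random number of DP runs admits a much tighter end-to-end bound than naively composing K independent runs, and it is this bound that the talk's new analysis sharpens.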
AI review
A theoretical privacy-accounting improvement that reduces the estimated privacy cost of hyperparameter tuning in DP-SGD by 50%+ via f-DP analysis. Solid math, but this is pure DP theory with zero offensive security content -- no exploits, no attacks, no tools. If you're not building privacy-preserving ML pipelines, there's nothing here for you.
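For readers unfamiliar with the framework the review names: f-DP (Dong, Roth & Su) measures privacy by the trade-off curve of the best hypothesis test distinguishing neighboring datasets. The definitions below are standard background, not the paper's new bound.

```latex
% Trade-off function of the optimal test between distributions P and Q,
% where \alpha_\phi, \beta_\phi are the type-I and type-II errors of test \phi:
T(P,Q)(\alpha) \;=\; \inf\{\beta_\phi : \alpha_\phi \le \alpha\}.
% A mechanism M is f-DP if, for all neighboring datasets S, S':
T\bigl(M(S), M(S')\bigr) \;\ge\; f.
% Classical (\varepsilon,\delta)-DP is the special case
f_{\varepsilon,\delta}(\alpha) \;=\; \max\bigl\{0,\; 1-\delta-e^{\varepsilon}\alpha,\; e^{-\varepsilon}(1-\delta-\alpha)\bigr\}.
```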