Efficient Privacy Auditing in Federated Learning

Hongyan Chang, Brandon Edwards, Anindya S. Paul, Reza Shokri

33rd USENIX Security Symposium · Day 1 · USENIX Security '24

Federated Learning (FL) has emerged as a prominent distributed machine learning paradigm, enabling multiple parties to collaboratively train a global model without directly sharing their raw local data. While FL offers a significant privacy advantage over centralized training, it is not immune to privacy risks. Sensitive information from individual training datasets can still be inadvertently encoded in the local model updates shared with the central server and, subsequently, exposed to other participating parties through the global model. This talk, presented by Hongyan Chang at USENIX Security '24, addresses the challenge of efficiently auditing these privacy risks in FL deployments.

AI review

This talk introduces an efficient and effective "slope signal" algorithm for continuous privacy auditing in Federated Learning. By tracking how model performance changes over the course of training rather than running attacks against a single final model, it avoids the prohibitive computational cost of prior membership inference attacks, enabling ongoing risk assessment and proactive privacy management for FL participants. This is a valuable defensive contribution that turns auditing into actionable intelligence.
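To make the intuition behind a slope-style signal concrete, here is a minimal illustrative sketch (not the authors' actual algorithm; the function names and toy data are hypothetical). The idea it demonstrates: if per-example losses are recorded across federated training rounds, training-set members' losses tend to fall more steeply than non-members', so the fitted slope of each loss trajectory can serve as a cheap membership score.

```python
import numpy as np

def loss_slope(losses):
    """Least-squares slope of one example's loss trajectory across FL rounds."""
    rounds = np.arange(len(losses))
    # np.polyfit returns [slope, intercept] for a degree-1 fit
    return np.polyfit(rounds, losses, 1)[0]

def slope_membership_scores(member_losses, nonmember_losses):
    """Score each example by how steeply its loss decreases during training.
    Members' losses typically drop faster (more negative slope)."""
    return ([loss_slope(l) for l in member_losses],
            [loss_slope(l) for l in nonmember_losses])

# Toy trajectories (hypothetical): members' losses fall, non-members' stay flat.
member = [[2.0, 1.2, 0.6, 0.3], [1.8, 1.0, 0.5, 0.2]]
nonmember = [[2.0, 1.9, 1.9, 1.8], [2.1, 2.0, 2.0, 1.9]]
m_scores, n_scores = slope_membership_scores(member, nonmember)
# Members' slopes are markedly more negative, separating the two groups.
```

Because the slopes are computed from loss values the participants already observe during training, this kind of signal adds almost no computation on top of normal FL rounds, which is the efficiency argument the talk makes against retraining-heavy membership inference attacks.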

Watch on YouTube