DP-BREM: Differentially-Private and Byzantine-Robust Federated Learning with Client Momentum

Xiaolan Gu

34th USENIX Security Symposium (USENIX Security '25) · Day 2 · ML and AI Privacy 1: Federated Learning and Protecting Data

Federated Learning (FL) has emerged as a key paradigm for collaborative machine learning, enabling multiple parties to train a shared model without exchanging raw data. While FL offers inherent privacy benefits by keeping data local, the model updates themselves can leak sensitive information or serve as vectors for malicious attacks. Xiaolan Gu's presentation, "DP-BREM: Differentially-Private and Byzantine-Robust Federated Learning with Client Momentum," introduces a framework designed to tackle both challenges at once: preserving privacy while remaining robust against adversarial clients.
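To make the combination concrete, here is a minimal toy sketch of the general idea the title points at: clients maintain local momentum, the server clips and noises what it aggregates (DP-SGD style), and a robust aggregator limits the influence of Byzantine clients. Everything here is an illustrative assumption, not the paper's actual protocol: the coordinate-wise median stand-in, the noise placement, and all constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip(v, c):
    """Scale v so its L2 norm is at most c (standard DP-SGD-style clipping)."""
    n = np.linalg.norm(v)
    return v * min(1.0, c / n) if n > 0 else v

# Hypothetical toy setup: 10 clients, 5-dim model, first 2 clients Byzantine.
num_clients, dim = 10, 5
clip_c, sigma, beta, lr = 1.0, 0.5, 0.9, 0.5
momenta = [np.zeros(dim) for _ in range(num_clients)]
model = np.zeros(dim)

def local_gradient(i, model):
    """Toy objective: honest clients pull the model toward the all-ones vector;
    Byzantine clients return arbitrary large values."""
    if i < 2:
        return rng.normal(0, 100, dim)  # Byzantine garbage
    return (model - np.ones(dim)) + rng.normal(0, 0.1, dim)

for _ in range(50):
    updates = []
    for i in range(num_clients):
        g = local_gradient(i, model)
        momenta[i] = beta * momenta[i] + (1 - beta) * g
        # Clip the *momentum* rather than the raw gradient, echoing the
        # "client momentum" idea in the title; the Gaussian noise scale and
        # placement here are illustrative assumptions.
        noisy = clip(momenta[i], clip_c) + rng.normal(0, sigma * clip_c / num_clients, dim)
        updates.append(noisy)
    # Coordinate-wise median as a generic Byzantine-robust aggregator
    # (a stand-in; the paper's actual aggregation rule may differ).
    agg = np.median(np.stack(updates), axis=0)
    model = model - lr * agg
```

In this toy run the honest majority's clipped momenta dominate the median, so the model converges near the all-ones target despite the two malicious clients and the added noise; the interplay between the clipping bound, the noise scale, and the robust aggregator is exactly where the privacy-utility tension the talk addresses shows up.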

AI review

A legitimate academic contribution to a hard problem: combining differential privacy and Byzantine robustness in federated learning without the usual privacy-utility cliff. The additive error interaction result is the real finding here, and LFH's compatibility with DP-SGD is a genuine insight. But this is a USENIX Security paper presentation, not a security practitioner talk, and it reads like exactly that.

Watch on YouTube