Defending Against Data Reconstruction Attacks in Federated Learning: An Information Theory Approach
Qi Tan, Qi Li, Yi Zhao, Zhuotao Liu, Xiaobing Guo, Ke Xu
33rd USENIX Security Symposium · Day 1 · USENIX Security '24
In an era increasingly defined by data-driven decision-making, **Federated Learning (FL)** has emerged as a critical paradigm, promising to unlock the power of distributed data while safeguarding privacy. Unlike traditional machine learning, where raw data from various parties is aggregated into a central repository for training, FL lets each participant train a model locally and share only model parameters or gradients, which a central server aggregates. This distributed approach aims to mitigate the "isolated data island" problem and address privacy concerns by keeping sensitive raw data on local devices. However, this talk from USENIX Security '24, presented by Mi Chen on behalf of a team of researchers from T University, reveals a significant Achilles' heel in this promising technology: **data reconstruction attacks**.
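The train-locally, average-centrally loop described above is the classic federated averaging (FedAvg) pattern. As a rough illustration only (the function names, linear-regression client model, and hyperparameters below are my own assumptions, not from the talk), the protocol can be sketched as:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One client's local step: plain gradient descent on a linear model.
    (Illustrative stand-in; real FL clients train full neural networks.)"""
    w = weights.copy()
    grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient on local data only
    return w - lr * grad

def fed_avg(global_w, client_data, rounds=50):
    """Federated averaging: clients send back updated parameters (never raw
    data); the server averages them into the next global model."""
    for _ in range(rounds):
        local_ws = [local_update(global_w, X, y) for X, y in client_data]
        global_w = np.mean(local_ws, axis=0)  # server-side aggregation
    return global_w
```

Note that even though raw data stays local, the shared parameter updates are exactly the signal a reconstruction attack exploits, which is the leakage the talk quantifies.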
AI review
This talk delivers a foundational re-evaluation of Federated Learning privacy, presenting a novel information theory framework to precisely quantify and control data leakage against reconstruction attacks. It offers a theoretically sound and practically efficient defense that significantly outperforms existing methods like Differential Privacy without sacrificing model accuracy, making secure FL deployments viable for complex models.
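For context on the Differential Privacy baseline the review compares against: DP defenses for FL are commonly implemented by clipping each shared update's norm and adding calibrated Gaussian noise before transmission (DP-SGD style). A minimal sketch of that baseline, with function name and parameters chosen here for illustration (not from the talk):

```python
import numpy as np

def dp_sanitize(update, clip_norm=1.0, noise_mult=1.0, rng=None):
    """DP-style sanitization of a model update before sharing:
    1) clip its L2 norm to bound any single client's influence,
    2) add Gaussian noise scaled to that bound."""
    if rng is None:
        rng = np.random.default_rng(0)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_mult * clip_norm, size=update.shape)
    return clipped + noise
```

The accuracy cost the review alludes to comes from this added noise: stronger privacy (larger `noise_mult`) degrades the aggregated model, which is the trade-off the talk's information-theoretic defense aims to avoid.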