LOKI: Large-scale Data Reconstruction Attack against Federated Learning through Model Manipulation
Joshua C. Zhao, Atul Sharma, Ahmed Roushdy Elkordy, Yahya H. Ezzeldin, Salman Avestimehr, Saurabh Bagchi
IEEE Symposium on Security and Privacy 2024 · Day 1 · Continental Ballroom 5
This article covers LOKI, a data reconstruction attack that compromises the privacy of Federated Learning (FL) systems through model manipulation. Presented at IEEE S&P by Joshua C. Zhao and collaborators from Purdue University and the University of Southern California, LOKI demonstrates that sensitive user data can be extracted at scale even when privacy-enhancing mechanisms such as **Federated Averaging (FedAvg)** and **Secure Aggregation (SecAgg)** are in place. The work challenges the prevailing assumption that these defenses are sufficient to prevent data leakage in decentralized machine learning.
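To make the two defenses concrete, here is a minimal, illustrative sketch (not the paper's code, and all names are invented for illustration) of what FedAvg and SecAgg provide: the server averages client updates, and pairwise masks cancel in the sum so the server only ever sees the aggregate, never an individual client's update.

```python
import random

random.seed(0)

# Hypothetical toy setup: three clients, each holding a small "model update"
# represented as a length-4 vector of floats.
updates = [[random.gauss(0, 1) for _ in range(4)] for _ in range(3)]
n, d = len(updates), len(updates[0])

# FedAvg: the server averages client updates (equal client weights here).
fedavg = [sum(u[k] for u in updates) / n for k in range(d)]

# Secure Aggregation (pairwise-mask sketch): each client pair (i, j), i < j,
# shares a random mask; client i adds it and client j subtracts it, so every
# mask cancels in the sum across clients.
masks = {(i, j): [random.gauss(0, 1) for _ in range(d)]
         for i in range(n) for j in range(i + 1, n)}

masked = []
for i in range(n):
    m = list(updates[i])
    for (a, b), mask in masks.items():
        if a == i:
            m = [x + y for x, y in zip(m, mask)]
        elif b == i:
            m = [x - y for x, y in zip(m, mask)]
    masked.append(m)

# The server recovers the sum (hence the average) without seeing any
# individual update in the clear.
recovered = [sum(m[k] for m in masked) / n for k in range(d)]
assert all(abs(a - b) < 1e-9 for a, b in zip(recovered, fedavg))
```

LOKI's point is that this guarantee is weaker than it looks: a server that also controls the model sent to clients can manipulate its weights so that individual inputs survive the aggregation step.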
AI review
This research presents LOKI, a data reconstruction attack that scales well beyond prior attacks on Federated Learning. By combining split scaling with a convolutional scaling factor, LOKI achieves far higher leakage rates against FedAvg and Secure Aggregation, challenging current assumptions about FL privacy. This is a critical wake-up call for anyone building or deploying FL systems.