SoK: Gradient Inversion Attacks in Federated Learning
Vincenzo Carletti
34th USENIX Security Symposium (USENIX Security '25) · Day 3 · ML and AI Security 3: Backdoors, Poisoning, Unlearning
This talk presents a Systematization of Knowledge (SoK) of **gradient inversion attacks (GIAs)** in **federated learning (FL)**. Delivered by Vincenzo Carletti of the University of Salerno, the presentation distills a review of 107 publications spanning 2016 to 2025. The core focus is how adversaries can reconstruct sensitive client data from the seemingly innocuous model updates exchanged during collaborative FL training, despite FL's promise of enhanced privacy.
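To make the threat concrete, here is a minimal sketch of one classic analytic form of gradient inversion, for a single fully connected layer with a bias term: because the weight gradient of such a layer is the outer product of the upstream gradient and the input, dividing a row of the weight gradient by the matching bias gradient recovers the client's input exactly. This toy setup (dimensions, random values) is illustrative and not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Client's private input x and a linear layer y = W x + b (toy dimensions).
d_in, d_out = 6, 3
x = rng.uniform(0.0, 1.0, size=d_in)

# The exact downstream loss does not matter for this attack; any loss yields
# some upstream gradient dL/dy, so we simply draw one at random.
dL_dy = rng.normal(size=d_out)

# Gradients the client would send to the server during FL training:
grad_W = np.outer(dL_dy, x)   # dL/dW = (dL/dy) x^T  -- each row i is dL_dy[i] * x
grad_b = dL_dy                # dL/db = dL/dy

# Server-side inversion: row i of grad_W equals grad_b[i] * x, so a single
# division reconstructs the private input exactly.
i = np.argmax(np.abs(grad_b))          # choose a row with a non-tiny bias gradient
x_reconstructed = grad_W[i] / grad_b[i]

print(np.allclose(x_reconstructed, x))  # True: exact recovery
```

Attacks on deeper networks replace this closed-form division with iterative optimization (matching observed gradients against gradients of a dummy input), but the leakage mechanism is the same: model updates are a function of the raw data.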
AI review
A competent SoK that does the literature-organization work someone needed to do — 107 papers taxonomized into threat models, attack families, defenses, and metrics. Solid contribution for the ML privacy research community, but it's a survey paper dressed up as a conference talk, and the inherent limitations of that format show: no novel attacks, no new defenses, no empirical results that weren't already in the cited literature.