SVDefense: Effective Defense against Gradient Inversion Attacks via Singular Value Decomposition
Chenxiang Luo
Network and Distributed System Security (NDSS) Symposium 2026 · Day 2 · Privacy & Measurement
Federated learning promises privacy by keeping training data local and only sharing model gradients with the central server. However, **gradient inversion attacks (GIA)** can reconstruct raw user data from these uploaded gradients. Worse, this talk demonstrates that existing defenses against GIA -- including pruning, perturbation, and compression-based methods -- are vulnerable to **adaptive attackers** who know the defense details and can circumvent them. The researchers propose **SVDefense**, a novel defense based on **truncated singular value decomposition (SVD)** that irreversibly transforms all gradients while preserving model utility.
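The core idea of the defense can be sketched as rank-k truncation of each gradient matrix: decompose the gradient with SVD, zero out all but the top-k singular values, and reconstruct. This is a minimal illustrative sketch only; SVDefense's actual rank-selection rule and per-layer handling are not given in this abstract, and `truncated_svd_defense` and the choice `k=4` are hypothetical.

```python
import numpy as np

def truncated_svd_defense(grad: np.ndarray, k: int) -> np.ndarray:
    """Return a rank-k approximation of a gradient matrix.

    Because the bottom singular components are discarded, the
    original gradient cannot be exactly recovered from the output,
    which is the intuition behind an SVD-based defense.
    """
    # Thin SVD: grad = U @ diag(s) @ Vt
    U, s, Vt = np.linalg.svd(grad, full_matrices=False)
    s = s.copy()
    s[k:] = 0.0  # keep only the k largest singular values
    return (U * s) @ Vt

# Toy example on a random "gradient" matrix
rng = np.random.default_rng(0)
g = rng.standard_normal((64, 32))
g_def = truncated_svd_defense(g, k=4)
print(g_def.shape, np.linalg.matrix_rank(g_def))
```

The truncated gradient has the same shape as the original (so the server-side update is unchanged) but carries only a low-rank summary of it; the discarded components are lost irreversibly, unlike pruning masks or additive noise that an adaptive attacker may be able to undo.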
AI review
A defense-only paper addressing gradient inversion attacks in federated learning. The observation that existing defenses fail against adaptive attackers is useful, but the core contribution -- applying truncated SVD to gradients -- is incremental ML defense engineering. There is no offensive contribution, no novel attack, and no exploitation. The class-imbalance vulnerability finding is the most interesting element, but it is treated as a secondary observation rather than a primary contribution.