Attacking Byzantine Robust Aggregation in High Dimensions
Sarthak Choudhary, Aashish Kolluri, Prateek Saxena
IEEE Symposium on Security and Privacy 2024 · Day 1 · Continental Ballroom 5
This talk, presented by Aashish Kolluri, Sarthak Choudhary, and Prateek Saxena, examines critical vulnerabilities in **Byzantine robust aggregation** mechanisms, particularly in high-dimensional settings. Byzantine robust aggregation is a fundamental problem in distributed computing and machine learning: computing an accurate average of data points when a fraction of them may be arbitrarily corrupted by an adversary. The problem is central to the robustness of **stochastic gradient descent (SGD)**, the algorithm underlying most machine-learning training, especially in distributed environments where malicious participants can poison gradients to manipulate model behavior.
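To make the aggregation problem concrete, here is a minimal illustrative sketch (not the paper's attack or any specific defense it studies): with an `eps` fraction of reported vectors controlled by an adversary, a plain mean can be dragged arbitrarily far, while a simple robust aggregator such as the coordinate-wise median bounds the shift per coordinate. Note, however, that even the median's residual bias can accumulate across dimensions, which is the high-dimensional weakness the talk targets.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 100, 50, 0.2            # points, dimension, corrupted fraction
k = int(eps * n)                    # number of Byzantine points

# Honest workers report gradients near the true mean (zero here);
# Byzantine workers all report one huge, coordinated vector.
honest = rng.normal(loc=0.0, scale=1.0, size=(n - k, d))
malicious = np.full((k, d), 100.0)
points = np.vstack([honest, malicious])

naive = points.mean(axis=0)          # non-robust average: hijacked
robust = np.median(points, axis=0)   # coordinate-wise median: resists outliers

print(np.linalg.norm(naive))         # large bias
print(np.linalg.norm(robust))        # much smaller, but grows with sqrt(d)
```

The residual bias of the median here scales roughly with the square root of the dimension `d`, hinting at why dimension-dependent bias matters for large models.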
AI review
This work uncovers a critical theoretical limitation and a practical vulnerability in Byzantine robust aggregation for high-dimensional ML. The HIDRA attack elegantly demonstrates how optimized, filtering-based defenses fail to provide dimension-independent bias, leading to catastrophic model degradation with minimal corruption. This is essential reading for anyone building secure distributed ML systems.