Automatic Adversarial Adaption for Stealthy Poisoning Attacks in Federated Learning
Torsten Krauß
Network and Distributed System Security (NDSS) Symposium 2024 · Day 1 · Poisoning Attacks
Federated Learning (FL) has emerged as a transformative paradigm for collaboratively training machine learning models across distributed datasets, offering compelling advantages in data privacy, communication efficiency, and model performance. However, this distributed architecture also opens new security vulnerabilities, most notably **poisoning attacks**, in which malicious clients submit carefully crafted model updates to compromise the integrity and behavior of the aggregated global model. The talk "Automatic Adversarial Adaption for Stealthy Poisoning Attacks in Federated Learning" by Torsten Krauß introduces **AutoAdapt**, a novel and efficient method that enables adversaries to launch **stealthy poisoning attacks** capable of evading existing FL defense mechanisms.
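To make the threat model concrete, the sketch below shows a minimal FedAvg-style aggregation in which one malicious client scales a crafted update so that it dominates the average. This is a generic illustration of model poisoning under standard unweighted averaging, not the AutoAdapt method from the talk; the names `fedavg`, `boost`, and the target vector are all illustrative assumptions.

```python
import numpy as np

def fedavg(updates):
    """Unweighted FedAvg aggregation: average the clients' model updates."""
    return np.mean(np.stack(updates), axis=0)

rng = np.random.default_rng(0)

# Nine honest clients submit small, benign updates (illustrative values).
honest = [rng.normal(0.0, 0.01, size=4) for _ in range(9)]

# Hypothetical malicious client: it pushes the global model toward a
# poisoned target and scales its update to dominate the average.
target = np.array([1.0, -1.0, 1.0, -1.0])
boost = 10.0  # scaling factor chosen for illustration, not from the talk
malicious = boost * target

# The aggregate with the attacker ends up close to the poisoned target,
# while the benign aggregate stays near zero.
global_update = fedavg(honest + [malicious])
benign_update = fedavg(honest)
```

Note that this naive scaling is exactly what distance- or magnitude-based FL defenses are built to flag; the point of a stealthy attack such as the one presented in the talk is to adapt the malicious update so it stays within the range such defenses accept.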