SHERPA: Explainable Robust Algorithms for Privacy-preserved Federated Learning in Future Networks to Defend against Data Poisoning Attacks
Chamara Sandeepa, Bartlomiej Siniarski, Shen Wang, Madhusanka Liyanage
IEEE Symposium on Security and Privacy 2024 · Day 3 · Continental Ballroom 6
Federated Learning (FL) has emerged as a powerful paradigm for collaborative machine learning, enabling multiple clients to jointly train a global model without sharing their raw data. This distributed approach preserves data privacy by keeping local data on client devices, making it particularly attractive for sensitive applications in healthcare, finance, and future networks. However, the decentralized nature of FL also introduces significant security vulnerabilities, primarily through **data poisoning attacks**. Malicious clients can inject carefully crafted, poisoned data into their local training sets, leading to corrupted local models that, when aggregated, degrade the performance of the global model or implant insidious backdoors.
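The aggregation-time damage described above can be illustrated with a minimal, hypothetical sketch (not the paper's setup): honest clients each push the global weight toward a shared target, while one poisoned client pushes away from it, skewing the FedAvg average.

```python
# Minimal sketch (hypothetical setup): how a single poisoned client update
# skews FedAvg aggregation. Each honest client sends an update pulling the
# global weight toward the true value; the attacker pushes the other way.

def local_update(global_w, true_w, lr=0.5):
    # Honest client: one gradient-style step toward the true weight.
    return global_w + lr * (true_w - global_w)

def poisoned_update(global_w, true_w, lr=0.5, boost=5.0):
    # Malicious client: scaled step *away* from the true weight.
    return global_w - boost * lr * (true_w - global_w)

def fedavg(updates):
    # Server aggregates by simple averaging of client model weights.
    return sum(updates) / len(updates)

true_w, global_w = 1.0, 0.0
honest = [local_update(global_w, true_w) for _ in range(4)]
clean = fedavg(honest)
poisoned = fedavg(honest + [poisoned_update(global_w, true_w)])

print(round(clean, 3))     # all-honest round moves toward true_w = 1.0
print(round(poisoned, 3))  # one attacker drags the aggregate backwards
```

Even with four honest clients, the single boosted malicious update pulls the aggregate below its starting point, which is why robust aggregation or client filtering is needed.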
**AI review**
SHERPA introduces a novel, explainable framework for detecting data poisoning in Federated Learning, combining SHAP feature attributions with HDBSCAN clustering. This approach offers transparent identification of malicious client behavior, moving beyond opaque heuristics, and effectively mitigates a range of poisoning attacks, including those that amplify privacy threats.
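The detection idea can be sketched as follows. This is a hedged stand-in, not SHERPA's actual pipeline: each client is represented by an attribution vector (a SHAP-like feature-importance profile of its local model), and clients whose profile falls outside the dense majority are flagged. A simple mean-distance outlier rule substitutes here for HDBSCAN density clustering, and the attribution values are invented for illustration.

```python
# Hedged sketch of the detection idea (not the paper's code): cluster
# clients by their SHAP-like attribution profiles and flag the ones that
# sit far from the dense majority. A mean-distance cutoff stands in for
# HDBSCAN density-based clustering.

def distance(a, b):
    # Euclidean distance between two attribution vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def flag_outliers(profiles, factor=2.0):
    # Mean distance of each client's profile to all other clients;
    # flag clients far above the overall average (HDBSCAN stand-in).
    means = [sum(distance(p, q) for q in profiles) / (len(profiles) - 1)
             for p in profiles]
    cutoff = factor * sum(means) / len(means)
    return [i for i, m in enumerate(means) if m > cutoff]

# Hypothetical attribution profiles: four similar honest clients and one
# attacker whose poisoned model attends to very different features.
profiles = [
    [0.60, 0.30, 0.10],
    [0.55, 0.35, 0.10],
    [0.62, 0.28, 0.10],
    [0.58, 0.32, 0.10],
    [0.05, 0.05, 0.90],  # poisoned client: divergent attribution profile
]
print(flag_outliers(profiles))  # flags the divergent client, index 4
```

The appeal of attribution-space detection is that a flagged client comes with an explanation: the specific features its model weights abnormally, rather than an opaque anomaly score.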