FLShield: A Validation Based Federated Learning Framework to Defend Against Poisoning Attacks

Ehsanul Kabir, Zeyu Song, Md Rafi Ur Rashid, Shagufta Mehnaz

IEEE Symposium on Security and Privacy 2024 · Day 2 · Continental Ballroom 5

This talk introduces **FLShield**, a framework designed to harden Federated Learning (FL) systems against a spectrum of poisoning attacks. It is presented by Ehsanul Kabir of Pennsylvania State University, alongside co-authors Zeyu Song, Md Rafi Ur Rashid, and Shagufta Mehnaz. Federated Learning is rapidly transforming data analysis in safety-critical domains, from healthcare to finance, by enabling collaborative model training without direct access to raw, sensitive user data. However, this decentralized paradigm also exposes training to malicious participants who can inject poisoned data or model updates. FLShield addresses the challenge of safeguarding the integrity of the global model while preserving data privacy and computational efficiency.

AI review

FLShield tackles poisoning attacks in Federated Learning by introducing a novel client-side validation framework. It resolves the privacy and integrity dilemmas inherent in decentralized validation using representative models and a new metric, LIPC, offering a robust and practical defense that significantly outperforms current state-of-the-art defenses.
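The review describes validation-based filtering only at a high level. The sketch below illustrates the general idea behind such defenses, not FLShield's actual algorithm: candidate updates are scored on a validator's held-out data, and only the best-scoring updates are averaged into the global model. The function names, the linear-classifier model, and the plain accuracy score are illustrative assumptions; FLShield's representative models and LIPC metric are more involved.

```python
import numpy as np

def validate_update(update, val_inputs, val_labels):
    # Score a candidate update (here: a linear classifier's weight vector)
    # by accuracy on held-out validation data. This accuracy score is a
    # simple stand-in; FLShield's actual metric (LIPC) is per-class.
    preds = (val_inputs @ update > 0).astype(int)
    return (preds == val_labels).mean()

def filtered_aggregate(updates, val_inputs, val_labels, keep_frac=0.5):
    # Keep only the top-scoring fraction of updates, then average them,
    # so low-scoring (likely poisoned) updates are excluded.
    scores = np.array([validate_update(u, val_inputs, val_labels)
                       for u in updates])
    k = max(1, int(len(updates) * keep_frac))
    keep = np.argsort(scores)[-k:]  # indices of the k best updates
    return np.mean([updates[i] for i in keep], axis=0)

# Toy scenario: four benign updates near the true weights, plus one
# sign-flipped (poisoned) update from a malicious client.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
X_val = rng.normal(size=(200, 2))
y_val = (X_val @ w_true > 0).astype(int)

benign = [w_true + rng.normal(scale=0.1, size=2) for _ in range(4)]
poisoned = [-5.0 * w_true]
agg = filtered_aggregate(benign + poisoned, X_val, y_val, keep_frac=0.8)
```

With `keep_frac=0.8`, four of the five updates survive filtering; the sign-flipped update scores near zero on the validation set and is dropped, so the aggregate stays close to the benign consensus.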
