LACMUS: Latent Concept Masking for General Robustness Enhancement of DNNs
Shuo Wang, Hongsheng Hu, Jiamin Chang, Benjamin Zi Hao Zhao, Minhui Xue
IEEE Symposium on Security and Privacy 2024 · Day 2 · Continental Ballroom 5
The talk "LACMUS: Latent Concept Masking for General Robustness Enhancement of DNNs" presented by Hongsheng Hu at the IEEE S&P conference, introduces a novel framework designed to improve the robustness of deep neural networks (DNNs) against a wide array of adversarial attacks and distribution shifts. The presentation highlights a critical and persistent challenge in machine learning: despite significant advancements in performance, DNNs often lack **robustness**, making them vulnerable to subtle perturbations or changes in input conditions. This vulnerability can lead to critical misclassifications in real-world applications, ranging from autonomous driving to medical diagnostics.
AI review
LACMUS presents a genuinely novel framework for enhancing DNN robustness by generating 'conceptual adversarial examples' through latent concept masking. This attack-agnostic, model-agnostic approach efficiently addresses critical limitations of traditional adversarial training, notably the robustness-utility trade-off and high data requirements. The work offers a significant, practical step toward building more resilient AI systems for real-world deployment.
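The talk summary does not spell out how the masking is performed, so the following is only a minimal illustrative sketch of what "masking latent concepts" to produce conceptual adversarial examples could look like in practice: zeroing out a random subset of channels in an intermediate feature map of a toy PyTorch CNN and training on both the clean and the masked forward passes. The model, the channel-as-concept assumption, the mask ratio, and the combined loss are all assumptions for illustration, not the authors' actual method.

```python
# Illustrative sketch only: NOT the LACMUS implementation.
# Assumption: latent "concepts" correspond to channels of an intermediate
# feature map, and masking them yields a harder, concept-ablated example.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallCNN(nn.Module):
    """Toy CNN split into an encoder (latent features) and a classifier head."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 4 * 4, num_classes))

    def forward(self, x, concept_mask=None):
        z = self.encoder(x)  # latent feature map: (N, C, H, W)
        if concept_mask is not None:
            # Broadcast a per-channel binary mask over the spatial dimensions,
            # suppressing the channels ("concepts") selected for masking.
            z = z * concept_mask.view(1, -1, 1, 1)
        return self.head(z)


def random_concept_mask(num_channels: int, mask_ratio: float = 0.3) -> torch.Tensor:
    """Keep each latent channel with probability (1 - mask_ratio)."""
    return (torch.rand(num_channels) > mask_ratio).float()


# Hypothetical training step: combine the loss on clean latents with the loss
# on concept-masked latents, treating the masked pass as a "conceptual
# adversarial example" the model must still classify correctly.
model = SmallCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
images, labels = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))

optimizer.zero_grad()
clean_loss = F.cross_entropy(model(images), labels)
mask = random_concept_mask(num_channels=64, mask_ratio=0.3)
masked_loss = F.cross_entropy(model(images, concept_mask=mask), labels)
(clean_loss + masked_loss).backward()
optimizer.step()
```

Because the masking operates on latent representations rather than on pixel-space perturbations crafted against a specific attack, a scheme like this would be attack-agnostic by construction, which is the property the review highlights; how LACMUS actually selects and masks concepts is described in the paper itself.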