Understanding and Benchmarking the Commonality of Adversarial Examples

Ruiwen He, Yushi Cheng, Junning Ze, Xiaoyu Ji, Wenyuan Xu

IEEE Symposium on Security and Privacy 2024 · Day 2 · Continental Ballroom 5

In an era where intelligent voice devices are increasingly integrated into critical applications, the security of speech content has emerged as a paramount concern. This talk, presented by Ruiwen He from USSLAB at Zhejiang University, delves into the pervasive threat of **adversarial examples (AEs)** against Automatic Speech Recognition (ASR) systems. The research aims to unravel the underlying mechanisms behind these attacks by systematically identifying and benchmarking the common properties of adversarial audio.

AI review

This research advances our understanding of adversarial examples (AEs) in ASR by identifying four distinctive properties shared across diverse attack methods. Moving beyond merely demonstrating attacks, the work provides insight into *why* AEs succeed and *how* they differ from natural speech, forming a foundation for building generalized and effective defense mechanisms.
