You Can Use But Cannot Recognize: Preserving Visual Privacy in Deep Neural Networks
Qiushi Li
Network and Distributed System Security (NDSS) Symposium 2024 · Day 3 · Adversarial ML
The proliferation of **Deep Neural Networks (DNNs)** has driven transformative advances across diverse domains, from autonomous vehicles to medical diagnostics. However, these models' demand for vast quantities of image data has exacerbated critical privacy concerns: personal and sensitive information, including facial features, license plate numbers, and confidential patient records, can be inadvertently exposed or reconstructed from the data used in DNN training and inference. Existing privacy-preserving techniques, while well-intentioned, often prove inadequate for the unique challenges of visual data: they degrade model performance, incur prohibitively high computational costs, or fail to truly hide visual features from human perception.