Test-Time Poisoning Attacks Against Test-Time Adaptation Models

Tianshuo Cong, Xinlei He, Yun Shen, Yang Zhang

IEEE Symposium on Security and Privacy 2024 · Day 1 · Continental Ballroom 5

The talk "Test-Time Poisoning Attacks Against Test-Time Adaptation Models," presented at IEEE S&P 2024 by Tianshuo Cong and colleagues, discloses a novel and concerning vulnerability in an emerging class of machine learning systems: **Test-Time Adaptation (TTA)**. Deep learning models, while achieving remarkable performance in controlled environments, often struggle when deployed in real-world scenarios due to **distribution shift**, a phenomenon where the data encountered during inference differs statistically from the data used for training. TTA methods have gained significant traction as a remedy: they dynamically adjust a model's parameters using the unlabeled test data itself, thereby improving generalization and robustness under shift.
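To make the adaptation mechanism concrete, here is a minimal NumPy sketch of one common flavor of TTA: entropy minimization on an unlabeled test batch, updating only a small set of parameters (here a single bias vector, standing in for the affine normalization parameters many TTA methods adapt). This is an illustrative toy, not the attack or any specific method from the talk; all names (`tta_entropy_step`, the synthetic data) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mean_entropy(p):
    """Average prediction entropy over a batch of probability rows."""
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

def tta_entropy_step(W, b, X, lr=0.1):
    """One test-time adaptation step: reduce prediction entropy on an
    unlabeled test batch X by gradient descent on the bias b only
    (the frozen 'pretrained' weights W are left untouched)."""
    P = softmax(X @ W + b)
    H = -(P * np.log(P + 1e-12)).sum(axis=1, keepdims=True)
    # d(entropy)/d(logit_j) = -p_j * (log p_j + H) for each sample
    grad_logits = -P * (np.log(P + 1e-12) + H)
    grad_b = grad_logits.mean(axis=0)
    return b - lr * grad_b

# Frozen "pretrained" weights and a distribution-shifted test batch.
W = rng.normal(size=(8, 3))
b = np.zeros(3)
X = rng.normal(loc=0.7, size=(64, 8))  # shifted relative to training

h0 = mean_entropy(softmax(X @ W + b))
b = tta_entropy_step(W, b, X)
h1 = mean_entropy(softmax(X @ W + b))
print(h1 < h0)  # the step lowered entropy on the test batch
```

The key property this sketch highlights is also the root of the vulnerability the talk exploits: the update is driven entirely by whatever test data arrives, so an adversary who can inject inputs into that stream influences the parameter update itself.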

AI review

This isn't just another ML security talk; it's a critical disclosure. The research presents the first test-time poisoning attacks against Test-Time Adaptation (TTA) models, exposing a fundamental vulnerability in an emerging AI paradigm: because TTA updates a model from the very data it is asked to classify, an adversary who controls part of that test stream can poison the model during deployment. It's a wake-up call for anyone deploying adaptive AI in sensitive applications, showing that defenses designed for training-time poisoning offer little protection here and that the trade-off between adaptivity and security demands re-evaluation.

Watch on YouTube