Position: Certified Robustness Does Not (Yet) Imply Model Security
Andrew C. Cullen, Paul Montague, Sarah Erfani, Benjamin Rubinstein
International Conference on Machine Learning 2025 · Oral
In this position paper, Dr. Andrew Cullen, with collaborators Paul Montague, Sarah Erfani, and Benjamin Rubinstein from the University of Melbourne and DST Group in Australia, challenges the prevailing perception of **certified robustness** in machine learning. The talk, "Certified Robustness Does Not (Yet) Imply Model Security," argues that while certified defenses offer theoretical guarantees against adversarial attacks, their current framing and application fall short of providing genuine security for deployed real-world systems. The core message is that the community's focus on purely technical metrics, often divorced from practical threat models and human factors, creates a dangerous misalignment between perceived and actual security.
AI review
A position paper that correctly identifies real tensions in the certified robustness literature — particularly the gap between mathematical guarantees and deployment security — but does not convert those observations into the kind of rigorous, falsifiable claims or formal framework that would make them actionable or lasting. The "certification contradiction" and the critique of L2-norm-centric metrics are not novel to the community; the paper's value lies in aggregating and foregrounding these concerns, not in resolving them. Without a formal threat model taxonomy, new theorems, or even…