Towards Lifecycle Unlearning Commitment Management: Measuring Sample-level Unlearning Completeness

Cheng-Long Wang

34th USENIX Security Symposium (USENIX Security '25) · Day 3 · ML and AI Security 3: Backdoors, Poisoning, Unlearning

In the rapidly evolving landscape of artificial intelligence, training powerful machine learning models has become commonplace. An equally critical, yet often overlooked, challenge lies in the inverse process: **machine unlearning**. In this USENIX Security talk, Cheng-Long Wang presents a framework for measuring the completeness of machine unlearning at the level of individual samples. Machine unlearning aims to update a trained model so that it effectively "forgets" specific data points, behaving as if those data had never been used in training, without the prohibitive cost of retraining the model from scratch.
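The talk's own measurement framework is not detailed on this page, but the general idea of a sample-level completeness score can be illustrated with a hypothetical membership-inference-style sketch: compare the unlearned model's loss on a forgotten sample against reference models trained with and without that sample. Everything below (the function name, the z-score comparison, the specific score formula) is an illustrative assumption, not the method from the paper.

```python
import numpy as np

def unlearning_completeness(loss_unlearned, losses_out, losses_in):
    """Hypothetical per-sample completeness score in [0, 1].

    loss_unlearned : the unlearned model's loss on the forgotten sample
    losses_out     : losses on that sample from reference models trained WITHOUT it
    losses_in      : losses on that sample from reference models trained WITH it

    A score near 1 means the unlearned model behaves like models that never
    saw the sample; a score near 0 means it still behaves like a model that
    was trained on it.
    """
    losses_out = np.asarray(losses_out, dtype=float)
    losses_in = np.asarray(losses_in, dtype=float)

    # Distance of the unlearned model's loss from each reference population,
    # in units of that population's standard deviation (a rough z-score).
    z_out = abs(loss_unlearned - losses_out.mean()) / (losses_out.std() + 1e-12)
    z_in = abs(loss_unlearned - losses_in.mean()) / (losses_in.std() + 1e-12)

    # Relative closeness to the "never trained on this sample" population.
    return z_in / (z_in + z_out + 1e-12)
```

For example, if the unlearned model's loss on a forgotten sample sits squarely in the "never trained on it" reference distribution, the score approaches 1; if it still matches the "trained on it" distribution, the score approaches 0. A real evaluation would need many reference models and a calibrated test, which is exactly the kind of machinery the paper's framework formalizes.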

AI review

Legitimate academic security research on a real problem — verifying that machine unlearning actually works — with a technically coherent contribution in the IM framework and Bounded Google map. Solid USENIX-tier paper material, but as a conference talk it reads like a paper walkthrough rather than a compelling presentation, and the threat model framing is more compliance-adjacent than security-first.
