Few-shot Unlearning

Youngsik Yoon, Jinhwan Nam, Hyojeong Yun, Jaeho Lee, Dongwoo Kim, Jungseul Ok

IEEE Symposium on Security and Privacy 2024 · Day 3 · Continental Ballroom 4

In the rapidly evolving landscape of machine learning, the ability to selectively remove specific data's influence from a trained model, a process known as **machine unlearning**, has become increasingly critical. This talk, "Few-shot Unlearning," presented by Youngsik Yoon and collaborators from POSTECH at IEEE S&P, tackles a significant challenge within this domain: performing unlearning when only a limited number of the target data samples are available. Traditional unlearning methods typically assume complete access to either the original training dataset or the full set of data intended for erasure, an assumption that is often impractical in real-world settings due to memory constraints, privacy regulations (e.g., GDPR's "right to be forgotten"), or the sheer volume of data.
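To make the problem concrete: a common baseline for unlearning (not the method proposed in this talk) is gradient *ascent* on the loss of the samples to be forgotten. The sketch below, using a toy logistic-regression model in NumPy with illustrative data and step sizes, shows how a model's fit to a few "forget" samples can be degraded when only those few samples are in hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs (illustrative assumption).
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w, X, y):
    """Mean cross-entropy loss of logistic regression and its gradient in w."""
    p = sigmoid(X @ w)
    eps = 1e-9
    loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

# 1) Train on the full dataset by gradient descent.
w = np.zeros(2)
for _ in range(500):
    _, g = loss_and_grad(w, X, y)
    w -= 0.1 * g

# 2) Few-shot setting: only a handful of the target samples are available.
forget_X, forget_y = X[:3], y[:3]
loss_before, _ = loss_and_grad(w, forget_X, forget_y)

# 3) Gradient *ascent* on the forget samples pushes the model away from them.
for _ in range(50):
    _, g = loss_and_grad(w, forget_X, forget_y)
    w += 0.05 * g

loss_after, _ = loss_and_grad(w, forget_X, forget_y)
print(loss_after > loss_before)  # loss on the forgotten samples rises
```

This naive baseline tends to damage overall model utility, which is exactly the gap more careful frameworks such as the one presented here aim to close.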

AI review

This work formalizes few-shot unlearning with explicit unlearning intentions (e.g., privacy protection or mislabel correction), addressing a major real-world challenge. The novel three-step framework, particularly its model inversion and intention-driven strategies, achieves near-oracle performance with minimal target data, a significant practical advance for regulatory compliance and model integrity.
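The review highlights model inversion: when the original training data is unavailable, one can synthesize proxy data directly from the trained model itself. The following minimal sketch inverts a frozen linear classifier by gradient ascent on the input; the weights, step count, and learning rate are illustrative assumptions, and this is a simplified stand-in for the inversion step in the paper, not the authors' implementation.

```python
import numpy as np

# A frozen "trained" linear classifier (weights are an illustrative assumption).
w = np.array([1.5, -0.8])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def invert(target_class, steps=300, lr=0.1, seed=0):
    """Synthesize a proxy input the model confidently assigns to target_class,
    by gradient ascent on the log-likelihood of that class w.r.t. the input."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0, 0.1, 2)  # start from a small random input
    for _ in range(steps):
        p = sigmoid(x @ w)
        # d/dx log p(y=1|x) = (1 - p) * w ; d/dx log p(y=0|x) = -p * w
        grad = (1 - p) * w if target_class == 1 else -p * w
        x += lr * grad
    return x

x1 = invert(1)  # proxy sample the model maps confidently to class 1
x0 = invert(0)  # proxy sample the model maps confidently to class 0
```

Proxy samples like these can then serve as a surrogate training set, so that an unlearning update can be applied without retaining the original data.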

Watch on YouTube