A Duty to Forget, a Right to be Assured? Exposing Vulnerabilities in Machine Unlearning Services

Hongsheng Hu

Network and Distributed System Security (NDSS) Symposium 2024 · Day 3 · Privacy-Preserving ML

This talk, presented by Hongsheng Hu at the NDSS Symposium, examines a critical and emerging security challenge in **Machine Learning as a Service (MLaaS)** environments: the vulnerability of **machine unlearning** services. Under data privacy regulations such as the GDPR and CCPA, service providers are legally obligated to remove a user's data from their machine learning models upon request. This "Right to be Forgotten" necessitates robust unlearning mechanisms. Traditionally, this meant retraining models from scratch on the remaining data, a process that is computationally prohibitive for modern deep neural networks with millions or billions of parameters. The MLaaS setting, while offering benefits like privacy, accessibility, and cost-effectiveness, introduces a further constraint: the service provider often lacks direct access to the original training dataset, rendering traditional retraining-based unlearning impractical.
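To make the retraining baseline concrete, here is a minimal, illustrative sketch of "exact" unlearning: retrain from scratch on the dataset minus the forgotten examples. The model here is a toy per-class mean (nearest-centroid) predictor, and the function names `train` and `unlearn` are my own assumptions, not an API from the talk; the point is only that this approach requires the full original training set, which is precisely what an MLaaS provider may not retain.

```python
def train(dataset):
    """Train from scratch: compute the mean feature value per class."""
    sums, counts = {}, {}
    for x, y in dataset:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def unlearn(dataset, forget_set):
    """Exact unlearning: retrain on the dataset minus the forgotten points.

    Note that this needs access to the *entire* original training set --
    the very thing an MLaaS provider may no longer hold, which is the
    constraint motivating approximate unlearning methods.
    """
    remaining = [ex for ex in dataset if ex not in forget_set]
    return train(remaining)

data = [(1.0, "a"), (3.0, "a"), (10.0, "b")]
model = train(data)                   # class "a" centroid: (1.0 + 3.0) / 2 = 2.0
model = unlearn(data, [(3.0, "a")])   # after forgetting, class "a" centroid: 1.0
```

For deep networks this exact approach is the gold standard for what unlearning should achieve, but its cost is what drives the approximate unlearning services whose vulnerabilities the talk exposes.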
