Cascading and Proxy Membership Inference Attacks
Yuntao Du
Network and Distributed System Security (NDSS) Symposium 2026 · Day 2 · AI Security
Membership inference attacks (MIAs) determine whether specific data was used to train a machine learning model. This talk introduces two new attack strategies that fundamentally improve MIA effectiveness by exploiting a previously overlooked property: the **statistical dependence between membership decisions** of different instances. The **Cascading MIA**, for adaptive settings, iteratively determines membership by fixing high-confidence decisions first and retraining shadow models with that prior knowledge, improving state-of-the-art attack performance **several-fold**. The **Proxy MIA**, for non-adaptive settings, replaces unavailable "in-model" behaviors with proxy points from the adversary's own data, enabling the powerful likelihood ratio test without per-query shadow model retraining.
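The cascading idea can be illustrated with a toy sketch. Everything below is hypothetical and not from the talk: real shadow-model retraining is replaced by a simple recalibration step, and the scores are abstract membership confidences in [0, 1].

```python
import statistics

def cascading_mia(scores, hi=0.9, lo=0.1, rounds=3):
    """Toy cascade: fix high-confidence membership decisions first,
    then revisit the remaining, ambiguous queries with that prior.

    In the actual attack the prior is injected by retraining shadow
    models; here we merely nudge ambiguous scores away from the
    decision boundary implied by the already-fixed decisions
    (illustrative stand-in only)."""
    decided = {}                       # index -> True (member) / False (non-member)
    pending = dict(enumerate(scores))  # still-undecided queries
    for _ in range(rounds):
        # 1) Fix the decisions we are already confident about.
        for i, s in list(pending.items()):
            if s >= hi:
                decided[i] = True
                del pending[i]
            elif s <= lo:
                decided[i] = False
                del pending[i]
        if not pending or not decided:
            break
        # 2) Stand-in for shadow-model retraining: recalibrate the
        #    ambiguous scores around the decided populations' midpoint.
        mem = [scores[i] for i, d in decided.items() if d]
        non = [scores[i] for i, d in decided.items() if not d]
        if mem and non:
            mid = (statistics.mean(mem) + statistics.mean(non)) / 2
            for i in list(pending):
                shift = 0.2 if pending[i] > mid else -0.2
                pending[i] = min(1.0, max(0.0, pending[i] + shift))
    # 3) Fall back to a plain 0.5 threshold for anything still undecided.
    for i, s in pending.items():
        decided[i] = s > 0.5
    return decided
```

On a toy score vector like `[0.95, 0.05, 0.6, 0.4]`, the first round fixes the two confident queries and later rounds pull the ambiguous ones toward a decision, which is the dependence-exploiting behavior the talk describes.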
AI review
A well-constructed theoretical contribution to membership inference that introduces two genuinely novel attack strategies. Cascading MIA's insight that membership decisions are dependent (not independent) is clean, and the several-fold improvement over LiRA is significant. Proxy MIA's trick of using similar instances to approximate unavailable in-model behaviors is simple but effective. However, this is ML privacy research rather than security research: no real systems are attacked and no real data is extracted.
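The proxy trick the review highlights can be sketched as a likelihood ratio test with a borrowed "in" distribution. This is a minimal illustration under assumed Gaussian score modeling (as in LiRA-style attacks); the function name and inputs are hypothetical, not from the talk.

```python
from statistics import NormalDist, mean, stdev

def proxy_lira_score(target_out_losses, proxy_in_losses, target_loss):
    """Toy proxy likelihood-ratio test.

    A LiRA-style test needs the query's loss distribution under models
    trained *with* it ("in") and *without* it ("out"). A non-adaptive
    attacker only has "out" shadow models for the query, so the proxy
    idea approximates the "in" distribution with losses of similar
    proxy points from the attacker's own data that the shadow models
    did train on."""
    in_dist = NormalDist(mean(proxy_in_losses), stdev(proxy_in_losses))
    out_dist = NormalDist(mean(target_out_losses), stdev(target_out_losses))
    # Ratio > 1: the observed loss looks more like a training member's.
    return in_dist.pdf(target_loss) / out_dist.pdf(target_loss)
```

A target loss close to the proxy members' low losses yields a ratio far above 1 (member), while a loss matching the out-model distribution yields a ratio below 1, all without retraining any shadow model per query.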