Convergent Privacy Framework for Multi-layer GNNs through Contractive Message Passing

Yu Zheng

Network and Distributed System Security (NDSS) Symposium 2026 · Day 1 · Privacy & Measurement

Graph Neural Networks (GNNs) are increasingly used for sensitive applications -- from predicting Alzheimer's disease to analyzing social networks and molecular structures -- but they are vulnerable to privacy attacks like **membership inference** that can reveal whether specific individuals were in the training data. Applying differential privacy (DP) to multi-layer GNNs has been problematic because existing approaches add noise that grows **linearly with the number of layers**, destroying model utility for deeper networks. This talk introduces **Curable**, a system that exploits the **over-smoothing** phenomenon (normally considered a GNN weakness) as a privacy feature. By designing **contractive message passing layers** where node representations naturally converge, the noise required for DP is bounded and convergent rather than growing linearly, achieving up to **40% accuracy improvement** over existing DP-GNN methods on some datasets.
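The key mechanism can be illustrated with a small sketch. The code below is not the Curable implementation; it is a minimal numpy illustration of the general principle the talk describes: if each message-passing layer is a contraction with factor gamma < 1 (here enforced by rescaling the weight matrix to spectral norm at most gamma, combined with a symmetrically normalized adjacency and a 1-Lipschitz activation), then the influence of any single node's features shrinks geometrically with depth, so the per-example sensitivity across L layers is bounded by the convergent series 1 + gamma + gamma^2 + ... < 1/(1 - gamma) instead of growing linearly in L. All names and parameter choices (`gamma`, the toy ring graph) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalized_adjacency(A):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}.
    Its spectral norm is at most 1, so aggregation never amplifies distances."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def contractive_layer(H, A_norm, W, gamma=0.9):
    """One message-passing layer forced to be a gamma-contraction:
    the weights are rescaled so their spectral norm is <= gamma < 1,
    and tanh is 1-Lipschitz, so distances shrink by factor gamma per layer."""
    sigma = np.linalg.norm(W, ord=2)      # largest singular value of W
    W_c = W * min(1.0, gamma / sigma)
    return np.tanh(A_norm @ H @ W_c)

# Toy graph: 4 nodes in a ring, 3-dimensional node features.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
A_norm = normalized_adjacency(A)
W = rng.standard_normal((3, 3))
gamma = 0.9

# Two "neighboring" inputs that differ only in node 0's features.
H1 = rng.standard_normal((4, 3))
H2 = H1.copy()
H2[0] += 1.0

# After each layer the gap shrinks by at least gamma, so the cumulative
# sensitivity (and hence the DP noise needed) is bounded by 1 / (1 - gamma).
for _ in range(5):
    H1 = contractive_layer(H1, A_norm, W, gamma)
    H2 = contractive_layer(H2, A_norm, W, gamma)
```

After five layers the representation gap is at most gamma^5 times the original perturbation, which is the convergent behavior that lets the required DP noise stay bounded as depth grows, in contrast to the linear-in-layers noise of prior DP-GNN approaches.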

AI review

A clever reframing of the over-smoothing phenomenon in deep GNNs as a privacy feature rather than a bug, enabling convergent noise in DP-GNN training. The 40% accuracy improvement over baselines for node-level DP is impressive. However, this is a privacy-preserving ML paper with no offensive security content -- no new attacks, no exploitation techniques, no defensive tools for security practitioners.
