Preserving Node-level Privacy in Graph Neural Networks
Zihang Xiang, Tianhao Wang, Di Wang
IEEE Symposium on Security and Privacy 2024 · Day 3 · Continental Ballroom 6
In an era of ubiquitous data, information often manifests in complex graph structures, such as social networks. The past few years have witnessed a surge in the popularity of Graph Neural Networks (GNNs) within the machine learning community, largely due to their exceptional performance on various graph-related tasks like node classification and link prediction. GNNs operate by iteratively aggregating information (messages) from neighboring nodes and updating node representation vectors, which are then fed into downstream tasks. Despite their powerful capabilities, GNNs introduce significant privacy challenges, particularly concerning the sensitive information embedded within the graph structure.
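The aggregate-then-update step described above can be sketched with a minimal mean-aggregation layer. This is an illustrative sketch of generic message passing, not the paper's method; the function name, toy graph, and weights are all hypothetical.

```python
import numpy as np

def message_passing_layer(adj, h, w):
    # One GNN layer: average messages from neighbors, then
    # apply a linear update followed by a ReLU nonlinearity.
    deg = adj.sum(axis=1, keepdims=True)          # node degrees
    agg = adj @ h / np.clip(deg, 1, None)          # mean over neighbors
    return np.maximum(agg @ w, 0.0)                # updated node representations

# Toy graph: 3 nodes on a path 0-1-2 (hypothetical example)
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
h = np.eye(3)                # one-hot node features
w = np.ones((3, 2))          # hypothetical weight matrix
out = message_passing_layer(adj, h, w)
print(out.shape)  # (3, 2): one representation vector per node
```

Stacking such layers lets each node's representation depend on multi-hop neighbors, which is exactly why a single node's data can leak through many other nodes' outputs.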
AI review
This work introduces HeterPoisson, a novel protocol for achieving node-level differential privacy in Graph Neural Networks, solving a problem where traditional DP-SGD fails: a single node's features can influence the computations of many neighboring nodes, so per-example clipping does not bound its contribution. The research also delivers a critical impossibility result for prior private node embedding approaches, steering future efforts away from dead ends. Essential reading for anyone deploying GNNs on sensitive data.
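For contrast, here is a sketch of a standard DP-SGD step (clip each per-example gradient, sum, add Gaussian noise). This is the textbook mechanism the review says breaks down at node level, not the paper's protocol; the function and parameter names are illustrative.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_mult, rng):
    # Clip each per-example gradient to L2 norm <= clip_norm,
    # sum the clipped gradients, then add calibrated Gaussian noise.
    clipped = []
    for g in per_example_grads:
        n = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(n, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

rng = np.random.default_rng(0)
grads = [rng.normal(size=4) for _ in range(8)]  # hypothetical per-example gradients
g = dp_sgd_step(grads, clip_norm=1.0, noise_mult=1.0, rng=rng)
print(g.shape)  # (4,)
```

In standard settings, clipping bounds each example's sensitivity; in a GNN, one node's data appears in the "examples" of all nodes that aggregate from it, so this per-example bound no longer translates into a per-node bound.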