
Research Article
Adversary for Social Good: Leveraging Attribute-Obfuscating Attack to Protect User Privacy on Social Networks
@INPROCEEDINGS{10.1007/978-3-031-25538-0_37,
  author={Xiaoting Li and Lingwei Chen and Dinghao Wu},
  title={Adversary for Social Good: Leveraging Attribute-Obfuscating Attack to Protect User Privacy on Social Networks},
  proceedings={Security and Privacy in Communication Networks. 18th EAI International Conference, SecureComm 2022, Virtual Event, October 2022, Proceedings},
  proceedings_a={SECURECOMM},
  year={2023},
  month={2},
  keywords={Attribute privacy; Inference attack; Social networks; Graph adversarial attack; Attribute obfuscation},
  doi={10.1007/978-3-031-25538-0_37}
}
- Xiaoting Li
- Lingwei Chen
- Dinghao Wu
Year: 2023
Adversary for Social Good: Leveraging Attribute-Obfuscating Attack to Protect User Privacy on Social Networks
SECURECOMM
Springer
DOI: 10.1007/978-3-031-25538-0_37
Abstract
As social networks become indispensable to people's daily lives, inference attacks pose a significant threat to users' privacy: attackers can harvest users' information and infer their private attributes. In particular, social networks are represented as graph-structured data that maintain rich user activities and complex relationships among users. This enables attackers to deploy state-of-the-art graph neural networks (GNNs) to automate attribute inference attacks and disclose users' private information. To address this challenge, in this paper we leverage the vulnerability of GNNs to adversarial attacks and propose a new graph adversarial method, called Attribute-Obfuscating Attack (AttrOBF), that misleads GNNs into misclassification and thus protects user attribute privacy against GNN-based inference attacks on social networks. Unlike prior attacks that perturb the graph structure or node features, AttrOBF provides a more practical formulation by obfuscating optimally selected training users' attribute values, and it further advances attribute obfuscation by addressing the unavailability of test attribute annotations, the black-box setting, the bi-level optimization, and the non-differentiable obfuscating operation. We demonstrate the effectiveness of AttrOBF on user attribute obfuscation through extensive experiments on three real-world social network datasets. We believe our work shows the great potential of applying adversarial attacks to attribute protection on social networks.
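To make the threat model and the obfuscation interface concrete, below is a minimal illustrative sketch, not the authors' implementation: it assumes binary bag-of-words user attributes, a precomputed normalized adjacency matrix `adj_norm`, and a simple white-box, gradient-guided flip heuristic as a stand-in for AttrOBF's black-box, bi-level optimization. All names (`gcn_layer`, `infer_attributes`, `pick_obfuscations`, `budget`) are hypothetical.

```python
# Illustrative sketch only: a 2-layer GCN acts as the attacker's attribute-
# inference model, and a first-order heuristic picks which training users'
# attribute values to flip (obfuscate) so that the inference loss increases.
import torch
import torch.nn.functional as F

def gcn_layer(adj_norm, x, weight):
    # One graph-convolution step: aggregate neighbor features, then project.
    return adj_norm @ x @ weight

def infer_attributes(adj_norm, x, w1, w2):
    # Two-layer GCN returning logits over private-attribute classes.
    h = F.relu(gcn_layer(adj_norm, x, w1))
    return gcn_layer(adj_norm, h, w2)

def pick_obfuscations(adj_norm, x, y, train_mask, w1, w2, budget):
    # Score each (training user, attribute) entry by the gradient of the
    # inference loss w.r.t. the inputs: flipping the entries with the largest
    # loss-increasing gradient is a first-order proxy for maximizing loss.
    # Assumes x is a plain {0,1} feature tensor with requires_grad=False.
    x = x.clone().requires_grad_(True)
    logits = infer_attributes(adj_norm, x, w1, w2)
    loss = F.cross_entropy(logits[train_mask], y[train_mask])
    loss.backward()
    # A flip 0 -> 1 raises the loss if grad > 0; 1 -> 0 raises it if grad < 0.
    gain = torch.where(x > 0.5, -x.grad, x.grad)
    gain[~train_mask] = float("-inf")   # only training users may be edited
    flat = gain.flatten().topk(budget).indices
    x_obf = x.detach().clone()
    rows, cols = flat // x.size(1), flat % x.size(1)
    x_obf[rows, cols] = 1.0 - x_obf[rows, cols]  # flip the selected bits
    return x_obf
```

In the paper's setting, this greedy white-box step would be replaced by AttrOBF's black-box, bi-level optimization over training attribute values; the sketch only illustrates what "obfuscating training user attributes to degrade GNN inference" looks like mechanically.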