Security and Privacy in Communication Networks. 18th EAI International Conference, SecureComm 2022, Virtual Event, October 2022, Proceedings

Research Article

Adversary for Social Good: Leveraging Attribute-Obfuscating Attack to Protect User Privacy on Social Networks

Cite
@INPROCEEDINGS{10.1007/978-3-031-25538-0_37,
    author={Xiaoting Li and Lingwei Chen and Dinghao Wu},
    title={Adversary for Social Good: Leveraging Attribute-Obfuscating Attack to Protect User Privacy on Social Networks},
    proceedings={Security and Privacy in Communication Networks. 18th EAI International Conference, SecureComm 2022, Virtual Event, October 2022, Proceedings},
    proceedings_a={SECURECOMM},
    year={2023},
    month={2},
    keywords={Attribute privacy, Inference attack, Social networks, Graph adversarial attack, Attribute obfuscation},
    doi={10.1007/978-3-031-25538-0_37}
}
    
Xiaoting Li1,*, Lingwei Chen2, Dinghao Wu3
  • 1: Visa Research
  • 2: Wright State University
  • 3: Pennsylvania State University
*Contact email: xiaotili@visa.com

Abstract

As social networks become indispensable to people's daily lives, inference attacks pose a significant threat to users' privacy: attackers can harvest users' information and infer their private attributes. In particular, social networks are represented as graph-structured data, maintaining rich user activities and complex relationships among users. This enables attackers to deploy state-of-the-art graph neural networks (GNNs) to automate attribute inference attacks for users' privacy disclosure. To address this challenge, in this paper, we leverage the vulnerability of GNNs to adversarial attacks and propose a new graph adversarial method, called Attribute-Obfuscating Attack (AttrOBF), that misleads GNNs into misclassification and thus protects user attribute privacy against GNN-based inference attacks on social networks. Unlike prior attacks that perturb the graph structure or node features, AttrOBF provides a more practical formulation by obfuscating the attribute values of optimally selected training users, and further advances attribute obfuscation by addressing the unavailability of test attribute annotations, the black-box setting, bi-level optimization, and the non-differentiable obfuscating operation. We demonstrate the effectiveness of AttrOBF on user attribute obfuscation through extensive experiments on three real-world social network datasets. We believe our work shows the great potential of applying adversarial attacks to attribute protection on social networks.
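The core idea, inferring a hidden attribute from graph neighbors and then flipping a few training users' attribute values to mislead that inference, can be illustrated with a minimal numpy-only sketch. This is a toy stand-in, not the paper's AttrOBF: the 5-node graph, the one-matrix GCN-style propagation, and the manually chosen flipped users are all hypothetical assumptions for illustration.

```python
import numpy as np

# Hypothetical toy graph: 5 users, undirected adjacency matrix.
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

# Add self-loops and row-normalize: a GCN-style propagation matrix.
A_hat = A + np.eye(5)
P = A_hat / A_hat.sum(axis=1, keepdims=True)

# Binary private attribute, known for "training" users 0,1,3,4.
# User 2 is the inference target; 0.5 marks it as unknown.
attrs = np.array([1.0, 1.0, 0.5, 0.0, 0.0])

def infer(target, train_attrs, steps=2):
    """Estimate the target's attribute by propagating neighbor attributes."""
    x = train_attrs.copy()
    for _ in range(steps):
        x = P @ x
    return x[target]

clean = infer(2, attrs)       # inference from true neighbor attributes

obf = attrs.copy()
obf[0] = 1.0 - obf[0]         # obfuscate two training users' attribute values
obf[1] = 1.0 - obf[1]
attacked = infer(2, obf)      # same inference, now misled

print(clean, attacked)
assert attacked < clean       # obfuscation pushes the score toward the wrong class
```

On this toy graph the clean propagation scores the target around 0.61 (leaning toward attribute 1), while flipping the two neighboring training users drops it to roughly 0.16, so the attacker's inference now favors the wrong class. The real method selects which users to obfuscate by bi-level optimization under a black-box GNN rather than by hand.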

Keywords
Attribute privacy, Inference attack, Social networks, Graph adversarial attack, Attribute obfuscation
Published
2023-02-04
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-25538-0_37
Copyright © 2022–2025 ICST