Security and Privacy in Communication Networks. 19th EAI International Conference, SecureComm 2023, Hong Kong, China, October 19-21, 2023, Proceedings, Part I

Research Article

Model Inversion Attacks on Homogeneous and Heterogeneous Graph Neural Networks

Cite (BibTeX)
    @INPROCEEDINGS{10.1007/978-3-031-64948-6_7,
        author={Renyang Liu and Wei Zhou and Jinhong Zhang and Xiaoyuan Liu and Peiyuan Si and Haoran Li},
        title={Model Inversion Attacks on Homogeneous and Heterogeneous Graph Neural Networks},
        booktitle={Security and Privacy in Communication Networks. 19th EAI International Conference, SecureComm 2023, Hong Kong, China, October 19-21, 2023, Proceedings, Part I},
        publisher={Springer},
        year={2024},
        month={10},
        keywords={Model Inversion Attack, Adversarial Attack, Graph Neural Network, Graph Representation Learning, Network Communication},
        doi={10.1007/978-3-031-64948-6_7}
    }
Renyang Liu1, Wei Zhou1, Jinhong Zhang1, Xiaoyuan Liu2, Peiyuan Si, Haoran Li1,*
  • 1: Yunnan University
  • 2: University of Electronic Science and Technology of China
*Contact email: lihaoran@mail.ynu.edu.cn

Abstract

Recently, Graph Neural Networks (GNNs), including Homogeneous Graph Neural Networks (HomoGNNs) and Heterogeneous Graph Neural Networks (HeteGNNs), have made remarkable progress in many physical scenarios, especially in communication applications. Despite this success, the privacy of such models has also received considerable attention. Previous studies have shown that, given a well-fitted target GNN, an attacker can reconstruct the sensitive training graph of the model via model inversion attacks, raising significant privacy concerns for AI service providers. We argue that this vulnerability stems from the target GNN itself together with prior knowledge about properties shared by real-world graphs. Motivated by this, we propose novel model inversion attack methods on HomoGNNs and HeteGNNs, namely HomoGMI and HeteGMI. Specifically, HomoGMI and HeteGMI are gradient-descent-based optimization methods that maximize the cross-entropy loss on the target GNN and the 1st- and 2nd-order proximities on the reconstructed graph. Notably, to the best of our knowledge, HeteGMI is the first attempt to perform model inversion attacks on HeteGNNs. Extensive experiments on multiple benchmarks demonstrate that the proposed methods outperform competing approaches.

Keywords
Model Inversion Attack, Adversarial Attack, Graph Neural Network, Graph Representation Learning, Network Communication
Published
2024-10-13
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-64948-6_7