Collaborative Computing: Networking, Applications and Worksharing. 19th EAI International Conference, CollaborateCom 2023, Corfu Island, Greece, October 4-6, 2023, Proceedings, Part II

Research Article

Structural Adversarial Attack for Code Representation Models

Cite
  • @INPROCEEDINGS{10.1007/978-3-031-54528-3_22,
        author={Yuxin Zhang and Ruoting Wu and Jie Liao and Liang Chen},
        title={Structural Adversarial Attack for Code Representation Models},
        proceedings={Collaborative Computing: Networking, Applications and Worksharing. 19th EAI International Conference, CollaborateCom 2023, Corfu Island, Greece, October 4-6, 2023, Proceedings, Part II},
        proceedings_a={COLLABORATECOM PART 2},
        year={2024},
        month={2},
        keywords={Code Intelligence; Model Robustness; Code Representation Model; Adversarial Attack},
        doi={10.1007/978-3-031-54528-3_22}
    }
    
  • Yuxin Zhang
    Ruoting Wu
    Jie Liao
    Liang Chen
    Year: 2024
    Structural Adversarial Attack for Code Representation Models
    COLLABORATECOM PART 2
    Springer
    DOI: 10.1007/978-3-031-54528-3_22
Yuxin Zhang1, Ruoting Wu1, Jie Liao1, Liang Chen1,*
  • 1: School of Computer Science
*Contact email: chenliang6@mail.sysu.edu.cn

Abstract

As code intelligence and collaborative computing advance, code representation models (CRMs) have demonstrated exceptional performance in tasks such as code prediction and collaborative code development by leveraging distributed computing resources and shared datasets. Nonetheless, CRMs are often considered unreliable because of their vulnerability to adversarial attacks: they fail to make correct predictions when faced with perturbed inputs. Several adversarial attack methods have been proposed to evaluate the robustness of CRMs and ensure their reliability in application. However, these methods rely primarily on the textual features of code, without fully exploiting its crucial structural features. To address this limitation, we propose STRUCK, a novel adversarial attack method that thoroughly exploits code's structural features. The key idea of STRUCK lies in integrating multiple global and local perturbation methods and effectively selecting among them by leveraging the structural features of the input code during the generation of adversarial examples for CRMs. We conduct comprehensive evaluations of seven basic or advanced CRMs on two prevalent code classification tasks, demonstrating STRUCK's effectiveness, efficiency, and imperceptibility. Finally, we show that STRUCK enables a more precise assessment of CRMs' robustness and increases their resistance to structural attacks through adversarial training.
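To make the notion of a structural perturbation concrete, the sketch below shows one semantics-preserving edit of the kind such an attack could select: rewriting a simple for loop over range(n) as an equivalent while loop at the AST level. This is an illustrative example under our own assumptions, not the STRUCK implementation described in the paper; the ForToWhile transformer and the sample function total are hypothetical names introduced here.

    import ast

    # Illustrative sketch only: one semantics-preserving structural perturbation
    # (for-loop -> while-loop) of the kind a structural attack could apply to
    # source code. Not the STRUCK implementation from the paper.
    class ForToWhile(ast.NodeTransformer):
        def visit_For(self, node):
            self.generic_visit(node)
            # Handle only the simple pattern `for <name> in range(<stop>): ...`
            if not (isinstance(node.target, ast.Name)
                    and isinstance(node.iter, ast.Call)
                    and isinstance(node.iter.func, ast.Name)
                    and node.iter.func.id == "range"
                    and len(node.iter.args) == 1
                    and not node.orelse):
                return node
            i, stop = node.target.id, node.iter.args[0]
            init = ast.parse(f"{i} = 0").body[0]    # loop-variable initializer
            step = ast.parse(f"{i} += 1").body[0]   # increment appended to the body
            loop = ast.While(
                test=ast.Compare(left=ast.Name(id=i, ctx=ast.Load()),
                                 ops=[ast.Lt()], comparators=[stop]),
                body=node.body + [step],
                orelse=[],
            )
            return [init, loop]                     # replace the For with [init, while]

    # Hypothetical victim snippet; the transformed code is functionally identical
    # but structurally different, which is what a CRM must remain robust to.
    src = "def total(n):\n    s = 0\n    for k in range(n):\n        s += k\n    return s\n"
    tree = ast.fix_missing_locations(ForToWhile().visit(ast.parse(src)))
    print(ast.unparse(tree))   # requires Python 3.9+ for ast.unparse

A structural attack in this spirit would apply many such local or global edits and keep those that flip the model's prediction while leaving program behavior unchanged.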

Keywords
Code Intelligence, Model Robustness, Code Representation Model, Adversarial Attack
Published
2024-02-23
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-54528-3_22