Digital Forensics and Cyber Crime. 14th EAI International Conference, ICDF2C 2023, New York City, NY, USA, November 30, 2023, Proceedings, Part I

Research Article

CCBA: Code Poisoning-Based Clean-Label Covert Backdoor Attack Against DNNs

Cite (BibTeX)
    @INPROCEEDINGS{10.1007/978-3-031-56580-9_11,
        author={Xubo Yang and Linsen Li and Cunqing Hua and Changhao Yao},
        title={CCBA: Code Poisoning-Based Clean-Label Covert Backdoor Attack Against DNNs},
        proceedings={Digital Forensics and Cyber Crime. 14th EAI International Conference, ICDF2C 2023, New York City, NY, USA, November 30, 2023, Proceedings, Part I},
        proceedings_a={ICDF2C},
        year={2024},
        month={4},
        keywords={backdoor attack, deep learning, code poisoning, natural language processing, graph neural network},
        doi={10.1007/978-3-031-56580-9_11}
    }
Xubo Yang, Linsen Li*, Cunqing Hua, Changhao Yao
    *Contact email: lsli@sjtu.edu.cn

    Abstract

    Deep neural networks have been shown to be vulnerable to backdoor attacks. Currently, almost all such attacks insert backdoors into models through data poisoning, which requires the attacker to have privileged access to the model training process and is easily exposed. However, vulnerabilities in the code management of deep learning training make the training code itself an extremely susceptible attack target. Based on this, we propose a novel form of backdoor attack called the Code Poisoning-based Clean-Label Covert Backdoor Attack (CCBA), which injects a backdoor by manipulating only a small fraction of the training code so that it dynamically modifies the training data. The attack imposes a negligible burden on the training process while still achieving strong performance and maintaining stealth. We not only validate the feasibility and effectiveness of CCBA on deep neural networks but also extend it successfully to graph neural networks and natural language processing, demonstrating promising results.
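
    As a minimal illustration of the mechanism the abstract describes (a code-level poisoning hook that modifies training data on the fly while the on-disk dataset and its labels stay untouched), the PyTorch sketch below wraps a clean dataset so that a small fraction of target-class samples are stamped with a pixel trigger at load time. All names here (PoisonedDataset, stamp_trigger, TARGET_CLASS, POISON_FRACTION) and the square-patch trigger are illustrative assumptions, not details taken from the paper.

        # Hypothetical sketch of a code-poisoning, clean-label backdoor hook.
        # Assumes the base dataset yields (image_tensor, int_label) pairs,
        # e.g. CIFAR-10 after a ToTensor() transform.
        import torch
        from torch.utils.data import Dataset

        TARGET_CLASS = 0        # attacker-chosen class the backdoor maps to
        POISON_FRACTION = 0.05  # fraction of target-class samples stamped per access

        def stamp_trigger(img: torch.Tensor) -> torch.Tensor:
            """Overlay a small bright square in the bottom-right corner (toy trigger)."""
            img = img.clone()
            img[..., -3:, -3:] = img.max()
            return img

        class PoisonedDataset(Dataset):
            """Wraps a clean dataset; the poisoning lives in code, not on disk.

            Clean-label: only samples that already belong to TARGET_CLASS are
            stamped, and their labels are never changed, so inspecting
            (image, label) pairs reveals no mislabeled data.
            """
            def __init__(self, base: Dataset):
                self.base = base

            def __len__(self):
                return len(self.base)

            def __getitem__(self, idx):
                img, label = self.base[idx]
                if label == TARGET_CLASS and torch.rand(()) < POISON_FRACTION:
                    img = stamp_trigger(img)
                return img, label

    Training on PoisonedDataset(clean_trainset) proceeds as usual; at inference, stamping the same trigger on any input would steer a successfully backdoored model toward TARGET_CLASS. Because only a few lines of data-loading code change, an audit of the stored dataset and labels finds nothing amiss, which matches the stealth argument above.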

    Keywords
    backdoor attack, deep learning, code poisoning, natural language processing, graph neural network
    Published
    2024-04-03
    Appears in
    SpringerLink
    http://dx.doi.org/10.1007/978-3-031-56580-9_11