Collaborative Computing: Networking, Applications and Worksharing. 18th EAI International Conference, CollaborateCom 2022, Hangzhou, China, October 15-16, 2022, Proceedings, Part II

Research Article

Evading Encrypted Traffic Classifiers by Transferable Adversarial Traffic

Cite
BibTeX
    @INPROCEEDINGS{10.1007/978-3-031-24386-8_9,
        author={Hanwu Sun and Chengwei Peng and Yafei Sang and Shuhao Li and Yongzheng Zhang and Yujia Zhu},
        title={Evading Encrypted Traffic Classifiers by Transferable Adversarial Traffic},
        proceedings={Collaborative Computing: Networking, Applications and Worksharing. 18th EAI International Conference, CollaborateCom 2022, Hangzhou, China, October 15-16, 2022, Proceedings, Part II},
        proceedings_a={COLLABORATECOM PART 2},
        year={2023},
        month={1},
        keywords={Transferable adversarial traffic; Encrypted traffic classifiers; Adversarial example attack; Black-box attack},
        doi={10.1007/978-3-031-24386-8_9}
    }
    
Hanwu Sun1, Chengwei Peng, Yafei Sang1,*, Shuhao Li1, Yongzheng Zhang, Yujia Zhu1
  • 1: Institute of Information Engineering
*Contact email: sangyafei@iie.ac.cn

Abstract

Machine learning (ML) algorithms have been widely leveraged in traffic classification tasks to overcome the challenges posed by the enormous volume of encrypted traffic. However, ML-based classifiers are vulnerable to adversarial example attacks, which fool a classifier into producing wrong outputs with elaborately crafted inputs. Several adversarial attacks have been proposed to evaluate and improve the robustness of ML-based traffic classifiers. Unfortunately, these attacks impractically assume that the adversary can run the target classifier locally (white-box). Even some GAN-based black-box attacks still require the target classifier to act as the discriminator. We fill this gap by proposing FAT (we use FAT rather than TAT to improve readability), a novel black-box adversarial traffic attack framework that generates transFerable Adversarial Traffic to evade ML-based encrypted traffic classifiers. The key novelty of FAT is two-fold: i) FAT does not assume that the adversary can obtain the target classifier. Instead, FAT builds proxy classifiers to mimic the target classifier and generates transferable adversarial traffic that causes the target classifier to misclassify. ii) FAT makes adversarial traffic attacks more practical by translating adversarial features into traffic. We use two datasets, CICIDS-2017 and MTA, to evaluate the effectiveness of FAT against seven common ML-based classifiers. The experimental results show that FAT achieves an average evasion detection rate (EDR) of 86.7%, exceeding the state-of-the-art black-box attack by 34.4%.
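The paper's FAT implementation is not reproduced on this page, but the transferability idea the abstract describes — train a local proxy classifier, craft adversarial examples against it, and let them carry over to an unseen target classifier — can be illustrated with a minimal sketch. The following is an assumption-laden toy example (synthetic feature data, logistic-regression proxy and target, FGSM-style perturbation), not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "flow feature" data: class 0 = benign, class 1 = malicious.
# These synthetic Gaussians stand in for real traffic features.
n, d = 400, 10
X = np.vstack([rng.normal(-1.0, 1.0, (n, d)),   # benign
               rng.normal(+1.0, 1.0, (n, d))])  # malicious
y = np.concatenate([np.zeros(n), np.ones(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=200):
    """Plain gradient-descent logistic regression (stand-in classifier)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        g = sigmoid(X @ w + b) - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

# Adversary's proxy trained on one half of the data; the unseen
# "target" classifier trained independently on the other half.
idx = rng.permutation(2 * n)
w_proxy, b_proxy = train_logreg(X[idx[:n]], y[idx[:n]])
w_target, b_target = train_logreg(X[idx[n:]], y[idx[n:]])

def predict(w, b, X):
    return (sigmoid(X @ w + b) >= 0.5).astype(float)

# FGSM-style step on the proxy only: for malicious samples (y=1) the
# input gradient of the logistic loss is (sigmoid(z) - 1) * w.
Xm = X[y == 1]
grad = (sigmoid(Xm @ w_proxy + b_proxy) - 1.0)[:, None] * w_proxy
X_adv = Xm + 1.5 * np.sign(grad)  # epsilon = 1.5, chosen for the toy scale

# Transfer check: the target never saw the proxy or its gradients.
clean_det = predict(w_target, b_target, Xm).mean()
adv_det = predict(w_target, b_target, X_adv).mean()
print(f"target detection rate: clean={clean_det:.2f}, adversarial={adv_det:.2f}")
```

In this sketch the perturbation is computed purely from the proxy's gradients, yet the target's detection rate drops as well — the transfer effect that a black-box attack of this kind relies on. FAT additionally has to map perturbed features back into valid, replayable traffic, which this toy example does not attempt.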

Keywords
Transferable adversarial traffic, Encrypted traffic classifiers, Adversarial example attack, Black-box attack
Published
2023-01-25
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-24386-8_9
Copyright © 2022–2025 ICST