Research Article
CLETer: A Character-level Evasion Technique Against Deep Learning DGA Classifiers
@ARTICLE{10.4108/eai.18-2-2021.168723,
  author={Wanping Liu and Zhoulan Zhang and Cheng Huang and Yong Fang},
  title={CLETer: A Character-level Evasion Technique Against Deep Learning DGA Classifiers},
  journal={EAI Endorsed Transactions on Security and Safety},
  volume={7},
  number={24},
  publisher={EAI},
  journal_a={SESA},
  year={2021},
  month={2},
  keywords={cybersecurity, malware, domain generation algorithms, deep learning, adversarial attack},
  doi={10.4108/eai.18-2-2021.168723}
}
Wanping Liu
Zhoulan Zhang
Cheng Huang
Yong Fang
Year: 2021
CLETer: A Character-level Evasion Technique Against Deep Learning DGA Classifiers
SESA
EAI
DOI: 10.4108/eai.18-2-2021.168723
Abstract
The detection of pseudo-random domain names generated by Domain Generation Algorithms (DGAs) is one of the effective ways to find botnets. Studying the vulnerability of deep learning models to adversarial attacks can enhance the robustness of DGA detection mechanisms. This paper proposes CLETer, an improved DGA that provides a character-level evasion technique against state-of-the-art DGA classifiers. Starting from existing DGA domain names, CLETer intelligently generates adversarial examples by quantifying the influence of every character on the classification result and then changing the most important characters. The resulting domain names can easily evade detection and show good transferability. The experimental results demonstrate that when modifying only two characters, CLETer effectively lowers the LSTM classifier's recall from 99.76% to 1.29% and drops the CNN classifier's recall from 99.36% to 3.64%. Adversarial retraining is shown to be a viable defense strategy against CLETer.
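The evasion loop sketched in the abstract (score each character's influence on the classifier's output, then rewrite the most influential characters) can be illustrated as follows. This is a minimal sketch, not the paper's method: `toy_dga_score` is a hypothetical stand-in for a trained LSTM/CNN classifier's malicious-probability output, and the occlusion-based importance measure and greedy replacement are illustrative assumptions.

```python
# Hedged sketch of a character-level evasion loop in the spirit of CLETer.
# toy_dga_score is a hypothetical stand-in classifier, NOT the paper's
# LSTM/CNN model; a real attack would query the target model's score.

def toy_dga_score(domain: str) -> float:
    """Stand-in classifier: higher score means more 'DGA-like'.
    Crudely treats vowel scarcity as DGA-like (a common heuristic)."""
    vowels = sum(c in "aeiou" for c in domain)
    return max(0.0, 1.0 - vowels / max(len(domain), 1))

def char_importance(domain: str, score=toy_dga_score) -> list:
    """Occlusion-style importance: drop in score when each character is removed."""
    base = score(domain)
    return [base - score(domain[:i] + domain[i + 1:]) for i in range(len(domain))]

def evade(domain: str, k: int = 2, score=toy_dga_score) -> str:
    """Replace the k most influential characters to lower the DGA score."""
    imp = char_importance(domain, score)
    order = sorted(range(len(domain)), key=lambda i: imp[i], reverse=True)
    chars = list(domain)
    for i in order[:k]:
        # Greedily pick the replacement character that minimizes the score.
        chars[i] = min("abcdefghijklmnopqrstuvwxyz",
                       key=lambda c: score("".join(chars[:i] + [c] + chars[i + 1:])))
    return "".join(chars)

original = "xkqzvbnp"  # vowel-free, so highly DGA-like under the toy score
adversarial = evade(original, k=2)
```

The key property, mirroring the paper's headline result, is that changing only two characters already pushes the (toy) score down; against a real classifier the budget `k` trades off evasion rate against how recognizable the domain remains.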
Copyright © 2021 Wanping Liu et al., licensed to EAI. This is an open access article distributed under the terms of the Creative Commons Attribution license, which permits unlimited use, distribution and reproduction in any medium so long as the original work is properly cited.