
Research Article
Classify Me Correctly if You Can: Evaluating Adversarial Machine Learning Threats in NIDS
@inproceedings{10.1007/978-3-031-64948-6_1,
  author    = {Rusch, Neea and Akbarfam, Asma Jodeiri and Maleki, Hoda and Agrawal, Gagan and Dorai, Gokila},
  title     = {Classify Me Correctly if You Can: Evaluating Adversarial Machine Learning Threats in {NIDS}},
  booktitle = {Security and Privacy in Communication Networks. 19th EAI International Conference, SecureComm 2023, Hong Kong, China, October 19--21, 2023, Proceedings, Part I},
  publisher = {Springer},
  year      = {2024},
  month     = {10},
  keywords  = {Adversarial Machine Learning, Network Security, Network Intrusion Detection Systems, Evasion Attacks, Network Traffic Analysis},
  doi       = {10.1007/978-3-031-64948-6_1}
}
Neea Rusch
Asma Jodeiri Akbarfam
Hoda Maleki
Gagan Agrawal
Gokila Dorai
Year: 2024
SECURECOMM
Springer
DOI: 10.1007/978-3-031-64948-6_1
Abstract
Network intrusion detection systems (NIDS) are increasingly developed using machine learning (ML) techniques. However, incorporating ML into NIDS introduces a new vulnerability: the threats and limitations arising from adversarial machine learning (AML) attacks. Specific to this application domain, AML could enable an attacker to disguise incoming malicious packets and fool a NIDS into classifying them as benign. Although AML has been researched actively in other domains, assessing its impact in networks remains an outstanding challenge, especially since network protocols pose a constrained domain for adversarial packet generation. More specifically, there is a need to experiment with the latest advances in AML attacks – usually developed for an unconstrained domain – on such domains. This paper presents a novel approach to this problem, where a variety of attacks can still be applied and correctly evaluated in a constrained domain. We show an implementation of this approach for NIDS, by developing an adversarial packet validator for different network protocols. By conducting extensive experiments using multiple datasets, ML models, and attacks, we show how our approach can bridge the gap between progress in AML and a constrained domain like NIDS. Evaluation enabled by our approach and its implementation suggests that black-box evasion attacks continue to be a threat to NIDS, despite the many constraints imposed by this domain.
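To make the idea of an adversarial packet validator concrete, the sketch below shows one possible shape such a check could take: rejecting perturbed feature vectors that violate basic protocol constraints. This is an illustrative assumption, not the authors' implementation; the feature names and rules are hypothetical.

```python
# Hypothetical sketch of a domain-constraint validator for adversarial
# NIDS inputs (not the paper's actual implementation). An AML attack may
# perturb feature values freely, but a real packet must still respect
# protocol-level constraints; invalid samples are rejected.

def validate_packet_features(features):
    """Return True if a perturbed feature vector is still a plausible packet."""
    rules = {
        "duration":   lambda v: v >= 0,                  # time cannot be negative
        "src_bytes":  lambda v: v >= 0 and v == int(v),  # byte counts are non-negative integers
        "dst_bytes":  lambda v: v >= 0 and v == int(v),
        "ttl":        lambda v: 0 <= v <= 255,           # IPv4 TTL is a single octet
        "tcp_window": lambda v: 0 <= v <= 65535,         # TCP window is a 16-bit field
    }
    # Every constrained feature present in the vector must satisfy its rule.
    return all(rule(features[name]) for name, rule in rules.items() if name in features)

# A benign-looking perturbation passes; a negative duration cannot occur
# in real traffic, so that sample would be filtered out before evaluation.
valid = validate_packet_features({"duration": 1.2, "src_bytes": 512, "ttl": 64})
invalid = validate_packet_features({"duration": -0.5, "src_bytes": 512, "ttl": 64})
```

Filtering of this kind is what lets attacks designed for unconstrained domains be evaluated fairly in a constrained one: only adversarial samples that remain protocol-valid count toward the attack's success rate.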