Security and Privacy in Communication Networks. 17th EAI International Conference, SecureComm 2021, Virtual Event, September 6–9, 2021, Proceedings, Part I

Research Article

Explanation-Guided Diagnosis of Machine Learning Evasion Attacks

  • @INPROCEEDINGS{10.1007/978-3-030-90019-9_11,
        author={Abderrahmen Amich and Birhanu Eshete},
        title={Explanation-Guided Diagnosis of Machine Learning Evasion Attacks},
        booktitle={Security and Privacy in Communication Networks. 17th EAI International Conference, SecureComm 2021, Virtual Event, September 6--9, 2021, Proceedings, Part I},
        series={SECURECOMM},
        publisher={Springer},
        year={2021},
        month={11},
        keywords={Machine learning evasion; Explainable machine learning},
        doi={10.1007/978-3-030-90019-9_11}
    }
    
Abderrahmen Amich¹, Birhanu Eshete¹
  • 1: University of Michigan

Abstract

Machine Learning (ML) models are susceptible to evasion attacks. Evasion accuracy is typically assessed using the aggregate evasion rate, and it is an open question whether the aggregate evasion rate enables feature-level diagnosis of the effect of adversarial perturbations on evasive predictions. In this paper, we introduce a novel framework that harnesses explainable ML methods to guide high-fidelity assessment of ML evasion attacks. Our framework enables explanation-guided correlation analysis between pre-evasion perturbations and post-evasion explanations. Towards systematic assessment of ML evasion attacks, we propose and evaluate a novel suite of model-agnostic metrics for sample-level and dataset-level correlation analysis. Using malware and image classifiers, we conduct comprehensive evaluations across diverse model architectures and complementary feature representations. Our explanation-guided correlation analysis reveals correlation gaps between adversarial samples and the perturbations that produced them. Through a case study on explanation-guided evasion, we show the broader utility of our methodology for assessing the robustness of ML models.
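
To make the correlation analysis concrete, the sketch below illustrates one plausible way to compare pre-evasion perturbations with post-evasion explanations at the sample level. It is a minimal illustration under our own assumptions, not the paper's actual metric suite: the function names, the top-k overlap score, the Spearman rank correlation, and the synthetic data are hypothetical, and in practice the attributions would come from an explainer such as SHAP or LIME applied to the evasive sample.

    import numpy as np
    from scipy.stats import spearmanr

    def perturbation_vector(x, x_adv):
        # Per-feature perturbation magnitudes between an original sample
        # and its adversarial counterpart.
        return np.abs(np.asarray(x_adv) - np.asarray(x))

    def topk_overlap(delta, attributions, k=10):
        # Fraction of the k most-perturbed features that also rank among
        # the k most important features in the post-evasion explanation.
        perturbed = set(np.argsort(-delta)[:k])
        important = set(np.argsort(-np.abs(attributions))[:k])
        return len(perturbed & important) / k

    def sample_level_correlation(delta, attributions):
        # Rank correlation between perturbation magnitudes and the absolute
        # feature attributions of the evasive prediction.
        rho, _ = spearmanr(delta, np.abs(attributions))
        return rho

    # Synthetic example; real attributions would come from an explainer
    # (e.g., SHAP or LIME) run on the model's evasive prediction.
    rng = np.random.default_rng(0)
    x = rng.random(30)                           # original sample
    x_adv = x + rng.normal(scale=0.05, size=30)  # hypothetical evasive sample
    attributions = rng.normal(size=30)           # hypothetical explanation
    delta = perturbation_vector(x, x_adv)
    print("top-k overlap:", topk_overlap(delta, attributions, k=5))
    print("rank correlation:", sample_level_correlation(delta, attributions))

A dataset-level analysis would aggregate such per-sample scores across an evasion test set; low agreement flags perturbed features that the post-evasion explanation does not credit for the flipped prediction, which is the kind of correlation gap the abstract describes.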