
Research Article

Massively Parallel Evasion Attacks and the Pitfalls of Adversarial Retraining

Cite (BibTeX)
@ARTICLE{10.4108/eetiot.6652,
    author={Charles Meyers and Tommy L{\"o}fstedt and Erik Elmroth},
    title={Massively Parallel Evasion Attacks and the Pitfalls of Adversarial Retraining},
    journal={EAI Endorsed Transactions on Internet of Things},
    volume={10},
    number={1},
    publisher={EAI},
    journal_a={IOT},
    year={2024},
    month={7},
    keywords={Machine Learning, Support Vector Machines, Trustworthy AI, Anomaly Detection, AI for Cybersecurity},
    doi={10.4108/eetiot.6652}
}
    
Charles Meyers (1, *), Tommy Löfstedt (1), Erik Elmroth (1)
  • 1: Umeå University
*Contact email: cmeyers@cs.umu.se

Abstract

Even with the widespread adoption of automated anomaly detection in safety-critical areas, both classical and advanced machine learning models are susceptible to first-order evasion attacks that fool models at run-time (e.g. an automated firewall or an anti-virus application). Kernelized support vector machines (KSVMs) are an especially useful model for such studies because they combine a complex decision geometry with low run-time requirements, acting as a run-time lower bound for contemporary models (e.g. deep neural networks) and thereby providing a cost-efficient way to measure model and attack run-time costs. To properly measure and combat adversaries, we propose a massively parallel projected gradient descent (PGD) evasion attack framework. Through theoretical examinations and experiments on linearly separable Gaussian normal data, we present (i) a massively parallel naive attack, showing that adversarial retraining is unlikely to be an effective means of combating an attacker even on linearly separable datasets; (ii) a cost-effective way of evaluating model defences and attacks, together with an extensible code base for doing so; (iii) an inverse relationship between adversarial robustness and benign accuracy; (iv) the lack of a general relationship between attack time and efficacy; and (v) evidence that adversarial retraining increases compute time exponentially while failing to reliably prevent highly confident false classifications.
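The abstract's attack is standard PGD against a kernelized SVM. The sketch below is illustrative only, not the authors' released code base: it trains an RBF-kernel SVC from scikit-learn on two linearly separable Gaussian blobs (mirroring the paper's experimental setup) and evades it by following the analytic gradient of the decision function. The helper names (decision_grad, pgd_attack) and all parameter values (eps, step, iters) are assumptions made for the example.

```python
# Minimal PGD evasion sketch against an RBF-kernel SVM (illustrative only;
# not the authors' implementation). Requires numpy and scikit-learn.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Two linearly separable Gaussian blobs, as in the paper's experimental setup.
n = 500
X = np.vstack([rng.normal(-2.0, 1.0, (n, 2)), rng.normal(2.0, 1.0, (n, 2))])
y = np.array([0] * n + [1] * n)

clf = SVC(kernel="rbf", gamma=0.5).fit(X, y)

def decision_grad(clf, x):
    """Analytic gradient of the RBF-KSVM decision function at point x."""
    sv = clf.support_vectors_          # (n_sv, d)
    dual = clf.dual_coef_[0]           # alpha_i * y_i, shape (n_sv,)
    diff = x - sv                      # (n_sv, d)
    k = np.exp(-clf.gamma * np.sum(diff ** 2, axis=1))
    return (-2.0 * clf.gamma) * (dual * k) @ diff

def pgd_attack(clf, x0, eps=3.0, step=0.1, iters=50):
    """Naive PGD: step along the decision-function gradient to cross the
    boundary, projecting onto an L2 ball of radius eps around x0 each step."""
    sign = -np.sign(clf.decision_function(x0[None])[0])  # push across boundary
    x = x0.copy()
    for _ in range(iters):
        g = decision_grad(clf, x)
        x = x + sign * step * g / (np.linalg.norm(g) + 1e-12)
        delta = x - x0
        norm = np.linalg.norm(delta)
        if norm > eps:                 # L2 projection back into the budget
            x = x0 + (eps / norm) * delta
    return x

# Each attacked sample is independent, so a "massively parallel" variant just
# maps pgd_attack over samples (e.g. with joblib or GPU batching).
x_adv = pgd_attack(clf, X[0])
print(clf.predict(X[:1]), clf.predict(x_adv[None]))  # original vs. flipped label
```

Because each attacked sample is independent, the attack parallelizes trivially across samples; adversarial retraining, by contrast, refits the model on the training set augmented with such adversarial points and their true labels, and it is the cost of that loop that the paper finds to grow prohibitively.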

Keywords
Machine Learning, Support Vector Machines, Trustworthy AI, Anomaly Detection, AI for Cybersecurity
Accepted
2023-10-18
Published
2024-07-17
Publisher
EAI
DOI
http://dx.doi.org/10.4108/eetiot.6652

Copyright © 2024 C. Meyers et al., licensed to EAI. This is an open access article distributed under the terms of the CC BY-NC-SA 4.0 license, which permits copying, redistributing, remixing, transformation, and building upon the material in any medium so long as the original work is properly cited.

Indexed in: EBSCO, ProQuest, DBLP, DOAJ, Portico