Edge Computing and IoT: Systems, Management and Security. Third EAI International Conference, ICECI 2022, Virtual Event, December 13-14, 2022, Proceedings

Research Article

Demons Hidden in the Light: Unrestricted Adversarial Illumination Attacks

Cite
@INPROCEEDINGS{10.1007/978-3-031-28990-3_9,
    author={Kaibo Wang and Yanjiao Chen and Wenyuan Xu},
    title={Demons Hidden in the Light: Unrestricted Adversarial Illumination Attacks},
    proceedings={Edge Computing and IoT: Systems, Management and Security. Third EAI International Conference, ICECI 2022, Virtual Event, December 13-14, 2022, Proceedings},
    proceedings_a={ICECI},
    year={2023},
    month={3},
    keywords={Unrestricted adversarial attacks, Adversarial illumination},
    doi={10.1007/978-3-031-28990-3_9}
}
    
Kaibo Wang (1), Yanjiao Chen (1,*), Wenyuan Xu (1)
  • 1: College of Electrical Engineering, Zhejiang University
*Contact email: chenyanjiao@zju.edu.cn

Abstract

As deep learning-based computer vision becomes widely used in IoT devices, ensuring its security is especially critical. Among the attacks against deep neural networks, adversarial attacks are a stealthy means of attack that can mislead model decisions at test time. Exploring adversarial attacks therefore helps reveal model vulnerabilities in advance and enables targeted defenses.

Existing unrestricted adversarial attacks beyond the \(\ell_p\) norm often rely on additional models to keep perturbations both adversarial and imperceptible, which leads to high computational cost and task-specific designs. Inspired by the observation that models exhibit unexpected vulnerability to changes in illumination, we develop the Adversarial Illumination Attack (AIA), an unrestricted adversarial attack that imposes large but imperceptible alterations on the image.

The core of the attack lies in simulating adversarial illumination through Planckian jitter; its effectiveness stems from a causal chain in which the attacker misleads the model by manipulating the confounding factor. We propose an efficient approach that generates adversarial samples without additional models by regularizing image gradients. We validate the effectiveness of adversarial illumination against black-box models, data preprocessing, and adversarially trained models through extensive experiments. The results confirm that AIA can serve both as a lightweight unrestricted attack and as a plug-in that boosts the effectiveness of other attacks.
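
A minimal PyTorch sketch of this idea (not the authors' released code) is given below. Here the illuminant is modeled as per-channel gains, a simplified differentiable stand-in for Planckian jitter, and the gains are optimized to maximize the classification loss while an image-gradient regularizer keeps the adversarial sample's edge structure close to the original's. The helper names, step count, learning rate, and regularization weight lam are illustrative assumptions.

import torch
import torch.nn.functional as F

def apply_illumination(x, gains):
    # x: (B, 3, H, W) images in [0, 1]; gains: (B, 3, 1, 1) per-channel
    # illuminant scaling -- a simplified stand-in for Planckian jitter.
    return (x * gains).clamp(0.0, 1.0)

def image_gradients(x):
    # Finite-difference spatial gradients along width and height.
    return x[..., :, 1:] - x[..., :, :-1], x[..., 1:, :] - x[..., :-1, :]

def aia_sketch(model, x, y, steps=50, lr=0.05, lam=10.0):
    # Hypothetical attack loop: optimize the illuminant gains so that `model`
    # misclassifies, while penalizing changes to the image gradients so the
    # alteration stays a smooth, illumination-like shift.
    gains = torch.ones(x.size(0), 3, 1, 1, device=x.device, requires_grad=True)
    opt = torch.optim.Adam([gains], lr=lr)
    dx0, dy0 = image_gradients(x)                   # reference edge structure
    for _ in range(steps):
        adv = apply_illumination(x, gains)
        loss_adv = -F.cross_entropy(model(adv), y)  # maximize model loss
        dxa, dya = image_gradients(adv)
        loss_reg = (dxa - dx0).abs().mean() + (dya - dy0).abs().mean()
        loss = loss_adv + lam * loss_reg
        opt.zero_grad()
        loss.backward()
        opt.step()
    return apply_illumination(x, gains.detach())

Because no auxiliary generator or perceptual network is involved, the per-image cost is a handful of forward/backward passes through the target model, which is what makes such an attack lightweight.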

Keywords
Unrestricted adversarial attacks, Adversarial illumination
Published
2023-03-31
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-28990-3_9