
Research Article
Deep Robust Neural Networks Inspired by Human Cognitive Bias Against Transfer-based Attacks
@inproceedings{10.1007/978-3-031-29126-5_6,
  author    = {Yuuki Ogasawara and Masao Kubo and Hiroshi Sato},
  title     = {Deep Robust Neural Networks Inspired by Human Cognitive Bias Against Transfer-based Attacks},
  booktitle = {Artificial Intelligence for Communications and Networks. 4th EAI International Conference, AICON 2022, Hiroshima, Japan, November 30 -- December 1, 2022, Proceedings},
  publisher = {Springer},
  year      = {2023},
  month     = {3},
  keywords  = {adversarial attacks, adversarial examples, transfer-based attacks, random noise, cognitive bias, neural networks, robustness},
  doi       = {10.1007/978-3-031-29126-5_6}
}
- Yuuki Ogasawara
- Masao Kubo
- Hiroshi Sato
Year: 2023
Deep Robust Neural Networks Inspired by Human Cognitive Bias Against Transfer-based Attacks
AICON
Springer
DOI: 10.1007/978-3-031-29126-5_6
Abstract
In recent years, with the proliferation of cloud services, the threat of transfer-based attacks, a type of adversarial attack, has increased. Adversarial training is known to be an effective defense against such attacks, but it has been pointed out that it degrades accuracy on clean data and robustness against random (Gaussian) noise. To address these problems, we focus on the human visual system, which remains robust while maintaining high accuracy. One explanation that has been proposed for this is the contribution of top-down processing, in which the feedforward signal is overwritten by some bias factor. From this perspective, we propose a new neural-network-based model that exploits human cognitive bias: an algorithm that overwrites signals according to human cognitive bias and is expected to reproduce human visual function. Evaluation experiments on two different datasets suggest that the proposed model is robust against transfer-based attacks. Furthermore, the proposed model mitigates, to a limited extent, the accuracy degradation on clean data, and the results suggest that it is also robust against random noise.
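The abstract does not give implementation details, but one way to read "the feedforward signal is overwritten by some bias factor" is as a layer that blends bottom-up activations with a learned top-down prior. The PyTorch sketch below is a hypothetical illustration of that reading, not the authors' published architecture; the class name BiasOverwrite, the blending weight alpha, and the learned per-channel prior are all assumptions introduced here for illustration.

```python
import torch
import torch.nn as nn

class BiasOverwrite(nn.Module):
    """Hypothetical sketch: blend a feedforward activation with a
    learned top-down 'cognitive bias' signal. Not the authors' model."""

    def __init__(self, num_features: int, alpha: float = 0.3):
        super().__init__()
        # Learned top-down prior, one value per feature channel.
        self.prior = nn.Parameter(torch.zeros(num_features))
        # alpha controls how strongly the prior overwrites the
        # bottom-up (feedforward) signal.
        self.alpha = alpha

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Convex combination of the feedforward signal and the prior;
        # broadcasting applies the prior across the batch dimension.
        return (1.0 - self.alpha) * x + self.alpha * self.prior

# Usage: insert between layers of an ordinary classifier.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    BiasOverwrite(256, alpha=0.3),
    nn.Linear(256, 10),
)
logits = model(torch.randn(8, 784))  # (batch, classes)
```

In a sketch like this, a larger alpha makes the activations less sensitive to input perturbations (including adversarial ones crafted on a surrogate model) at some cost in fidelity to the input, which mirrors the robustness-versus-clean-accuracy trade-off the abstract evaluates.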