
Research Article

Breaking the Loop: Adversarial Attacks on Cognitive-AI Feedback via Neural Signal Manipulation

Cite
BibTeX
@ARTICLE{10.4108/eetss.v9i1.9502,
    author={Dhaya R and Kanthavel R},
    title={Breaking the Loop: Adversarial Attacks on Cognitive-AI Feedback via Neural Signal Manipulation},
    journal={EAI Endorsed Transactions on Security and Safety},
    volume={9},
    number={1},
    publisher={EAI},
    journal_a={SESA},
    year={2025},
    month={9},
    keywords={Neuro-adversarial attacks, Brain-Computer Interfaces (BCI) Security, EEG Perturbation, Adversarial Machine Learning, HITL-AI, Cognitive Feedback Loop, Neural Signal Manipulation},
    doi={10.4108/eetss.v9i1.9502}
}
    
Dhaya R1, Kanthavel R1,*
  • 1: Papua New Guinea University of Technology
*Contact email: radakrishnan.kanthavel@pnguot.ac.pg

Abstract

INTRODUCTION: Brain-Computer Interfaces (BCIs) embedded with Artificial Intelligence (AI) have created powerful closed-loop cognitive systems in neurorehabilitation, robotics, and assistive technologies. However, this tight human-AI integration exposes such systems to new security vulnerabilities, including adversarial distortion of neural signals.

OBJECTIVES: This paper formally develops and assesses neuro-adversarial attacks, a new class of attack vector that targets AI cognitive feedback systems by manipulating electroencephalographic (EEG) signals. The goal of the research was to simulate such attacks, measure their effects, and propose countermeasures.

METHODS: Adversarial machine learning (AML) techniques, including the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), were applied to open EEG datasets using Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and Transformer-based models. Closed-loop simulations of BCI-AI systems with real-time feedback were conducted, and both the attack vectors and the countermeasure approaches (e.g., variational autoencoders (VAEs), wavelet denoising, adversarial detectors) were tested.

RESULTS: Neuro-adversarial perturbations yielded up to a 30% reduction in classification accuracy and over 35% user-intent misalignment. Transformer-based models performed relatively better, but overall performance degradation was significant. Defense strategies such as variational autoencoders and real-time adversarial detectors restored classification accuracy to over 80% and reduced successful attacks to below 10%.

CONCLUSION: The threat model presented in this paper is a significant addition to neuroscience and AI security. Neuro-adversarial attacks pose a real risk to cognitive-AI systems by misaligning human intent with machine response. Multi-layer signal sanitization and real-time adversarial detection are recommended as practical defenses.
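The METHODS section above names FGSM among the attack techniques applied to EEG classifiers. The following is a minimal sketch of an FGSM-style perturbation on EEG-shaped input, assuming PyTorch; the toy CNN, the random stand-in data, and the epsilon value are illustrative placeholders, not the paper's actual models, datasets, or settings.

# Hypothetical sketch: FGSM perturbation of a windowed EEG signal (assumes PyTorch).
import torch
import torch.nn as nn

class EEGCNN(nn.Module):
    """Toy 1-D CNN classifier over (channels, samples) EEG windows."""
    def __init__(self, n_channels=8, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(16, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def fgsm_perturb(model, x, y, epsilon=0.01):
    """Return x + epsilon * sign(grad_x loss), the classic one-step FGSM attack."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Usage with random stand-in data (a real evaluation would use recorded EEG trials).
model = EEGCNN()
x = torch.randn(32, 8, 256)      # 32 windows, 8 channels, 256 samples each
y = torch.randint(0, 4, (32,))   # stand-in intent labels
x_adv = fgsm_perturb(model, x, y, epsilon=0.01)
print((x_adv - x).abs().max())   # perturbation magnitude is bounded by epsilon

PGD, also named in METHODS, can be viewed as iterating this step with the perturbation projected back into an epsilon-ball around the original signal after each iteration.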

Keywords
Neuro-adversarial attacks, Brain-Computer Interfaces (BCI) Security, EEG Perturbation, Adversarial Machine Learning, HITL-AI, Cognitive Feedback Loop, Neural Signal Manipulation
Received: 2025-06-07
Accepted: 2025-09-26
Published: 2025-09-29
Publisher: EAI
DOI: http://dx.doi.org/10.4108/eetss.v9i1.9502

Copyright © 2025 Dhaya et al., licensed to EAI. This is an open-access article distributed under the terms of the CC BY-NC-SA 4.0 license, which permits copying, redistributing, remixing, transforming, and building upon the material in any medium so long as the original work is properly cited.
