Proceedings of the 4th International Conference on Information Technology, Civil Innovation, Science, and Management, ICITSM 2025, 28-29 April 2025, Tiruchengode, Tamil Nadu, India, Part I

Research Article

Comparative Analysis of Deep Learning Models for Detecting Deepfake Audio Using MobileNet and Explainable AI

@INPROCEEDINGS{10.4108/eai.28-4-2025.2357808,
    author={S Harish Kumar and Ashok Dasari and S Divya Sree and K Chandu and R Arun Kumar},
    title={Comparative Analysis of Deep Learning Models for Detecting Deepfake Audio Using MobileNet and Explainable AI},
    proceedings={Proceedings of the 4th International Conference on Information Technology, Civil Innovation, Science, and Management, ICITSM 2025, 28-29 April 2025, Tiruchengode, Tamil Nadu, India, Part I},
    publisher={EAI},
    proceedings_a={ICITSM PART I},
    year={2025},
    month={10},
    keywords={deepfake audio; cybersecurity; deep learning; explainable ai; hybrid model; adversarial robustness; transparency},
    doi={10.4108/eai.28-4-2025.2357808}
}
S Harish Kumar1,*, Ashok Dasari1, S Divya Sree1, K Chandu1, R Arun Kumar1
  • 1: Madanapalle Institute of Technology & Science, India
*Contact email: 21691A2842@mits.ac.in

Abstract

Deepfake audio, the creation of highly realistic synthetic voices, has become a major cybersecurity concern. It can be used for misinformation, fraud, and unauthorized access, making accurate detection crucial. This paper presents a hybrid deep learning approach that improves both the accuracy and interpretability of deepfake audio detection. The proposed model integrates CNNs, RNNs, and transformers to extract and analyze features from audio files effectively. To ensure transparency in decision-making, we use XAI techniques such as SHAP, LIME, and Grad-CAM to highlight the key factors influencing predictions. Our experimental results demonstrate high detection accuracy, resilience against adversarial attacks, and improved trustworthiness of model decisions. This research contributes to strengthening cybersecurity defenses by making deepfake detection both reliable and interpretable.
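The hybrid architecture described in the abstract (a CNN front end feeding a recurrent back end over spectrogram input) can be sketched roughly as below. This is a minimal illustrative PyTorch sketch, not the authors' actual model: all layer sizes, the use of a GRU, and the input dimensions are assumptions for demonstration.

```python
import torch
import torch.nn as nn

class HybridAudioDetector(nn.Module):
    """Hypothetical CNN + RNN hybrid for deepfake-audio classification.

    Layer sizes are illustrative only; the paper's model is not public here.
    Input is assumed to be a batch of mel spectrograms: (batch, 1, n_mels, time).
    """

    def __init__(self, n_mels: int = 64, hidden: int = 32):
        super().__init__()
        # CNN front end: extracts local spectro-temporal features
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # halves both the mel and time axes
        )
        # RNN back end: models the temporal sequence of CNN feature frames
        self.rnn = nn.GRU(
            input_size=8 * (n_mels // 2), hidden_size=hidden, batch_first=True
        )
        self.head = nn.Linear(hidden, 2)  # real-vs-fake logits

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        x = self.cnn(spec)                               # (B, 8, n_mels/2, T/2)
        b, c, m, t = x.shape
        x = x.permute(0, 3, 1, 2).reshape(b, t, c * m)   # (B, T/2, features)
        _, h = self.rnn(x)                               # final hidden state
        return self.head(h[-1])                          # (B, 2)

model = HybridAudioDetector()
logits = model(torch.randn(4, 1, 64, 100))  # four random spectrogram clips
print(logits.shape)  # torch.Size([4, 2])
```

In a pipeline like the one described, these logits would feed a softmax for the real/fake decision, and the trained model could then be passed to SHAP, LIME, or Grad-CAM to attribute the prediction back to spectrogram regions.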

Keywords
deepfake audio, cybersecurity, deep learning, explainable ai, hybrid model, adversarial robustness, transparency
Published
2025-10-13
Publisher
EAI
http://dx.doi.org/10.4108/eai.28-4-2025.2357808
Copyright © 2025 EAI