
Research Article
Comparative Analysis of Deep Learning Models for Detecting Deepfake Audio Using MobileNet and Explainable AI
@INPROCEEDINGS{10.4108/eai.28-4-2025.2357808,
  author={S Harish Kumar and Ashok Dasari and S Divya Sree and K Chandu and R Arun Kumar},
  title={Comparative Analysis of Deep Learning Models for Detecting Deepfake Audio Using MobileNet and Explainable AI},
  proceedings={Proceedings of the 4th International Conference on Information Technology, Civil Innovation, Science, and Management, ICITSM 2025, 28-29 April 2025, Tiruchengode, Tamil Nadu, India, Part I},
  publisher={EAI},
  proceedings_a={ICITSM PART I},
  year={2025},
  month={10},
  keywords={deepfake audio; cybersecurity; deep learning; explainable ai; hybrid model; adversarial robustness; transparency},
  doi={10.4108/eai.28-4-2025.2357808}
}
S Harish Kumar
Ashok Dasari
S Divya Sree
K Chandu
R Arun Kumar
Year: 2025
Comparative Analysis of Deep Learning Models for Detecting Deepfake Audio Using MobileNet and Explainable AI
ICITSM PART I
EAI
DOI: 10.4108/eai.28-4-2025.2357808
Abstract
Deepfake audio, the creation of highly realistic synthetic voices, has become a major cybersecurity concern. It can be used for misinformation, fraud, and unauthorized access, making accurate detection crucial. This paper presents a hybrid deep learning approach that improves both the accuracy and interpretability of deepfake audio detection. The model integrates CNNs, RNNs, and transformers to extract and analyze features from audio files effectively. To ensure transparency in decision-making, we use explainable AI (XAI) techniques such as SHAP, LIME, and Grad-CAM to highlight the key factors influencing predictions. Our experimental results demonstrate high detection accuracy, resilience against adversarial attacks, and improved trustworthiness of model decisions. This research contributes to strengthening cybersecurity defenses by making deepfake detection both reliable and interpretable.
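The abstract describes a CNN + RNN + transformer pipeline for classifying audio as real or fake. The sketch below is only a minimal illustration of that kind of hybrid architecture, not the paper's actual implementation: the class name `HybridDeepfakeAudioDetector`, the layer sizes, and the assumption of log-mel spectrogram inputs are all placeholders chosen for the example.

```python
# Minimal hybrid CNN -> RNN -> transformer sketch for binary audio classification.
# All hyperparameters are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn


class HybridDeepfakeAudioDetector(nn.Module):
    """Expects log-mel spectrograms of shape (batch, 1, n_mels, time)."""

    def __init__(self, n_mels: int = 64, d_model: int = 128, num_classes: int = 2):
        super().__init__()
        # CNN front-end: local time-frequency feature extraction
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # halves mel bins and time frames
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        cnn_feat = 64 * (n_mels // 4)              # channels * reduced mel bins
        # RNN: sequential modelling across time frames
        self.rnn = nn.GRU(cnn_feat, d_model, batch_first=True, bidirectional=True)
        # Transformer encoder: long-range dependencies via self-attention
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=2 * d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.classifier = nn.Linear(2 * d_model, num_classes)  # real vs. fake logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.cnn(x)                        # (B, 64, n_mels/4, T/4)
        b, c, f, t = feats.shape
        feats = feats.permute(0, 3, 1, 2).reshape(b, t, c * f)  # (B, T/4, C*F)
        seq, _ = self.rnn(feats)                   # (B, T/4, 2*d_model)
        seq = self.transformer(seq)                # (B, T/4, 2*d_model)
        return self.classifier(seq.mean(dim=1))    # average-pool over time


if __name__ == "__main__":
    model = HybridDeepfakeAudioDetector()
    dummy = torch.randn(2, 1, 64, 200)             # two dummy spectrogram clips
    print(model(dummy).shape)                      # torch.Size([2, 2])
```

In a setup like this, Grad-CAM would typically be applied to the CNN feature maps and SHAP or LIME to the input or pooled features, so that a prediction can be attributed back to specific time-frequency regions of the spectrogram, which is the kind of transparency the abstract refers to.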