Research Article
Multi-Modal Solution: Deepfake Detection and the Source Identification
@INPROCEEDINGS{10.4108/eai.15-12-2023.2345299,
  author={Yahan Zheng and Xu Zhou and Cheng Chen and Jingwen Hu},
  title={Multi-Modal Solution: Deepfake Detection and the Source Identification},
  proceedings={Proceedings of the 3rd International Conference on Public Management and Big Data Analysis, PMBDA 2023, December 15--17, 2023, Nanjing, China},
  publisher={EAI},
  proceedings_a={PMBDA},
  year={2024},
  month={5},
  keywords={deepfake detection; multi-modal; mmmu-ba},
  doi={10.4108/eai.15-12-2023.2345299}
}
Yahan Zheng
Xu Zhou
Cheng Chen
Jingwen Hu
Year: 2024
Multi-Modal Solution: Deepfake Detection and the Source Identification
PMBDA
EAI
DOI: 10.4108/eai.15-12-2023.2345299
Abstract
Deepfake technology has recently raised significant concerns due to its potential for manipulating and misusing multimedia content. In response, researchers have been exploring novel approaches to deepfake detection. In this study, we propose a multimodal analysis framework that combines visual, audio, and textual modalities to determine whether an unknown video is fake and to identify the source identity behind the manipulated media. By leveraging complementary information from multiple modalities, our approach aims to enhance the accuracy and robustness of deepfake detection.
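The abstract does not describe the fusion architecture itself; purely as an illustration, the sketch below shows one common way to combine per-modality features: a simple late-fusion classifier in PyTorch with separate detection and source-identification heads. The class name, feature dimensions, number of candidate sources, and concatenation-based fusion are assumptions for the example, not the authors' method (the paper's keywords suggest an MMMU-BA-style attention fusion instead).

# Illustrative late-fusion sketch (NOT the paper's architecture): each modality
# is encoded separately, features are concatenated, and two heads predict
# real-vs-fake and the source identity. Dimensions are placeholders.
import torch
import torch.nn as nn

class LateFusionDeepfakeDetector(nn.Module):
    def __init__(self, visual_dim=512, audio_dim=128, text_dim=300,
                 hidden=256, num_sources=10):
        super().__init__()
        # Per-modality encoders operating on pre-extracted features.
        self.visual_enc = nn.Sequential(nn.Linear(visual_dim, hidden), nn.ReLU())
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.text_enc = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        # Task heads: binary deepfake detection and source-identity classification.
        self.detect_head = nn.Linear(3 * hidden, 2)
        self.source_head = nn.Linear(3 * hidden, num_sources)

    def forward(self, visual_feat, audio_feat, text_feat):
        fused = torch.cat([self.visual_enc(visual_feat),
                           self.audio_enc(audio_feat),
                           self.text_enc(text_feat)], dim=-1)
        return self.detect_head(fused), self.source_head(fused)

# Usage with random tensors standing in for per-modality embeddings of a batch of 4 videos.
model = LateFusionDeepfakeDetector()
detect_logits, source_logits = model(torch.randn(4, 512),
                                     torch.randn(4, 128),
                                     torch.randn(4, 300))
print(detect_logits.shape, source_logits.shape)  # torch.Size([4, 2]) torch.Size([4, 10])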
Copyright © 2023–2024 EAI