
Research Article

MFUIE: A Fake News Detection Model Based on Multimodal Features and User Information Enhancement

Cite
@ARTICLE{10.4108/eetsis.7517,
    author={Xiulan Hao and Wenjing Xu and Xu Huang and Zhenzhen Sheng and Huayun Yan},
    title={MFUIE: A Fake News Detection Model Based on Multimodal Features and User Information Enhancement},
    journal={EAI Endorsed Transactions on Scalable Information Systems},
    volume={12},
    number={1},
    publisher={EAI},
    journal_a={SIS},
    year={2025},
    month={4},
    keywords={Multimodal Learning, fake news, Deep learning},
    doi={10.4108/eetsis.7517}
}
Xiulan Hao, Wenjing Xu, Xu Huang, Zhenzhen Sheng, Huayun Yan (2025). MFUIE: A Fake News Detection Model Based on Multimodal Features and User Information Enhancement. EAI Endorsed Transactions on Scalable Information Systems (SIS). EAI. DOI: 10.4108/eetsis.7517
Xiulan Hao1, Wenjing Xu1, Xu Huang2,*, Zhenzhen Sheng1, Huayun Yan1
  • 1: Huzhou University
  • 2: Huzhou College
*Contact email: hx@zjhzu.edu.cn

Abstract

INTRODUCTION: Deep learning algorithms have advantages in extracting the key features needed to detect fake news. However, existing multimodal fake news detection models fuse visual and textual features only after the encoders, failing to exploit multimodal contextual relationships and resulting in insufficient feature fusion. Moreover, most fake news detection algorithms focus on mining news content and overlook users' propensity to spread fake news.

OBJECTIVES: To exploit multimodal contextual relationships during feature extraction and to combine them with user features that assist in mining multimodal information, thereby improving fake news detection performance.

METHODS: A fake news detection model called MFUIE (Multimodal Feature and User Information Enhancement) is proposed. First, for news content, the pre-trained language model BERT encodes sentences, while the Swin Transformer serves as the visual backbone, with textual features introduced during early visual feature encoding to enhance semantic interaction; InceptionNetV3 is additionally employed as the image pattern analyser. Second, users' historical posts are encoded with the same model as the news text, and a GAT (Graph Attention Network) is introduced to enhance information interaction between post nodes and capture user-specific features. Finally, the resulting user features are fused with the multimodal features and the performance of the model is validated.

RESULTS: The proposed model is compared with existing methods. MFUIE achieves an accuracy of 0.926 on the Weibo dataset and 0.935 on the Weibo-21 dataset. Its F1 score on Weibo is 0.926, 0.017 higher than that of the state-of-the-art model BRM, while its F1 score on Weibo-21 is 0.935, 0.009 higher than that of BRM.

CONCLUSION: Experimental results demonstrate that MFUIE improves fake news recognition to some degree.
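
To make the fusion flow in METHODS concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: the BERT, Swin Transformer, and InceptionNetV3 encoders are stood in by random feature tensors, the text-conditioned visual encoding and the GAT are reduced to simplified modules, and all dimensions (768, 2048, 256), module names, and the mean pooling over post nodes are illustrative assumptions.

```python
# Minimal sketch of the MFUIE-style fusion described in the abstract.
# Encoder outputs are placeholders; dimensions and layer choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGAT(nn.Module):
    """Single-head graph attention over a user's historical-post embeddings
    (a simplified stand-in for the GAT mentioned in the abstract)."""
    def __init__(self, dim):
        super().__init__()
        self.w = nn.Linear(dim, dim, bias=False)
        self.a = nn.Linear(2 * dim, 1, bias=False)

    def forward(self, x, adj):
        # x: (num_posts, dim); adj: (num_posts, num_posts) 0/1 adjacency
        h = self.w(x)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs).squeeze(-1))      # attention logits
        e = e.masked_fill(adj == 0, float('-inf'))       # keep only edges
        alpha = torch.softmax(e, dim=-1)                 # neighbour weights
        return alpha @ h                                 # updated post nodes


class MFUIESketch(nn.Module):
    """Fuses text, visual, image-pattern, and user features before classifying."""
    def __init__(self, txt_dim=768, img_dim=768, pat_dim=2048, fusion_dim=256):
        super().__init__()
        self.txt_proj = nn.Linear(txt_dim, fusion_dim)   # BERT sentence features
        self.img_proj = nn.Linear(img_dim, fusion_dim)   # Swin features (text-conditioned)
        self.pat_proj = nn.Linear(pat_dim, fusion_dim)   # InceptionNetV3 pattern features
        self.user_gat = SimpleGAT(txt_dim)
        self.user_proj = nn.Linear(txt_dim, fusion_dim)  # pooled user representation
        self.classifier = nn.Sequential(
            nn.Linear(4 * fusion_dim, fusion_dim), nn.ReLU(),
            nn.Linear(fusion_dim, 2))                    # real vs. fake

    def forward(self, txt_feat, img_feat, pat_feat, post_feats, post_adj):
        # Aggregate GAT-enhanced post nodes into a single user feature vector.
        user = self.user_gat(post_feats, post_adj).mean(dim=0, keepdim=True)
        fused = torch.cat([self.txt_proj(txt_feat),
                           self.img_proj(img_feat),
                           self.pat_proj(pat_feat),
                           self.user_proj(user)], dim=-1)
        return self.classifier(fused)


# Toy forward pass with random stand-in encoder outputs for one news item
# and five historical posts by its author.
model = MFUIESketch()
logits = model(torch.randn(1, 768), torch.randn(1, 768), torch.randn(1, 2048),
               torch.randn(5, 768), torch.ones(5, 5))
print(logits.shape)  # torch.Size([1, 2])
```

In this reading, the user branch acts as auxiliary evidence: its pooled representation is simply concatenated with the content features before classification, which matches the abstract's description of fusing user features with the multimodal features.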

Keywords: Multimodal Learning, fake news, Deep learning
Received: 2025-04-11
Accepted: 2025-04-11
Published: 2025-04-11
Publisher: EAI
DOI: http://dx.doi.org/10.4108/eetsis.7517

Copyright © 2024 Xiulan Hao et al., licensed to EAI. This is an open access article distributed under the terms of the CC BY-NC-SA 4.0 license, which permits copying, redistributing, remixing, transforming, and building upon the material in any medium so long as the original work is properly cited.

Indexed in: EBSCO, ProQuest, DBLP, DOAJ, Portico