
Editorial
Knowledge Graph Fusion for Cross-Modal Semantic Communication
Authors: Yanrong Yang, Tianxiang Zhong, Mengting Chen
Year: 2025
Journal: EAI Endorsed Transactions on Scalable Information Systems (SIS), EAI
DOI: 10.4108/eetsis.9216

BibTeX:
@ARTICLE{10.4108/eetsis.9216,
  author={Yanrong Yang and Tianxiang Zhong and Mengting Chen},
  title={Knowledge Graph Fusion for Cross-Modal Semantic Communication},
  journal={EAI Endorsed Transactions on Scalable Information Systems},
  volume={12},
  number={6},
  publisher={EAI},
  journal_a={SIS},
  year={2025},
  month={12},
  keywords={Knowledge graph, cross-modal, semantic communication, performance evaluation},
  doi={10.4108/eetsis.9216}
}
Abstract
This paper proposes a knowledge graph-enhanced multi-source fusion (KG-MSF) scheme, a novel cross-modal semantic communication system that robustly fuses visual and textual data for tasks such as visual question answering (VQA) over wireless channels. The proposed KG-MSF scheme integrates knowledge graph reasoning into a multi-stage fusion and encoding pipeline, using bidirectional cross attention between modalities and structured semantic triplets to enhance semantic preservation and resilience to channel impairments. Specifically, image objects and question tokens are first aligned via cross-modal attention, then enriched with shallow and deep semantic triplets extracted through knowledge graphs, and finally fused and transmitted using joint source-channel coding. Extensive simulation results demonstrate that the proposed KG-MSF scheme significantly outperforms competing schemes under both AWGN and Rayleigh fading channels, confirming its superior semantic robustness and efficient cross-modal reasoning in wireless environments.
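To make the pipeline described in the abstract concrete, the following is a minimal PyTorch sketch of how bidirectional cross attention, triplet fusion, joint source-channel coding, and an AWGN/Rayleigh channel could be wired together. All names (BidirectionalCrossAttention, KGMSFEncoder, channel), dimensions, and the power-normalization step are illustrative assumptions for exposition, not the authors' implementation.

import torch
import torch.nn as nn

class BidirectionalCrossAttention(nn.Module):
    """Aligns image-object features with question-token features in both
    directions (image->text and text->image), as the abstract describes."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.img_to_txt = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.txt_to_img = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, img: torch.Tensor, txt: torch.Tensor):
        # img: (B, N_obj, D) object features; txt: (B, N_tok, D) token features
        img_ctx, _ = self.txt_to_img(img, txt, txt)  # image queries attend to text
        txt_ctx, _ = self.img_to_txt(txt, img, img)  # text queries attend to image
        return img + img_ctx, txt + txt_ctx          # residual alignment

class KGMSFEncoder(nn.Module):
    """Fuses the aligned modalities with knowledge-graph triplet embeddings
    and maps the result to channel symbols (a stand-in for JSCC)."""
    def __init__(self, dim: int, n_symbols: int):
        super().__init__()
        self.cross = BidirectionalCrossAttention(dim)
        self.fuse = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU())
        self.jscc = nn.Linear(dim, 2 * n_symbols)  # real/imag parts of symbols

    def forward(self, img, txt, triplets):
        img, txt = self.cross(img, txt)
        pooled = torch.cat([img.mean(1), txt.mean(1), triplets.mean(1)], dim=-1)
        z = self.jscc(self.fuse(pooled))
        # power-normalize so the average transmit power per symbol is 1
        return z / z.norm(dim=-1, keepdim=True) * z.shape[-1] ** 0.5

def channel(x: torch.Tensor, snr_db: float, fading: bool = False):
    """AWGN channel, optionally with flat Rayleigh fading, at the given SNR."""
    snr = 10 ** (snr_db / 10)
    sigma = (1.0 / snr) ** 0.5
    if fading:
        # Rayleigh-distributed gain with unit average power, E[h^2] = 1
        h = ((torch.randn_like(x) ** 2 + torch.randn_like(x) ** 2) / 2).sqrt()
        x = h * x
    return x + sigma * torch.randn_like(x)

A hypothetical forward pass through this sketch, with placeholder feature counts (36 detected objects, 20 question tokens, 8 triplets):

enc = KGMSFEncoder(dim=256, n_symbols=64)
img = torch.randn(2, 36, 256)   # image-object features
txt = torch.randn(2, 20, 256)   # question-token features
kg  = torch.randn(2, 8, 256)    # semantic-triplet embeddings
rx  = channel(enc(img, txt, kg), snr_db=10, fading=True)

A receiver-side decoder (omitted here) would map rx back to the fused semantic representation for the downstream VQA task; training the encoder and decoder end to end through the noisy channel is what makes the coding "joint" source-channel coding.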
Copyright © 2025 Yanrong Yang et al., licensed to EAI. This is an open access article distributed under the terms of the Creative Commons Attribution license, which permits unlimited use, distribution and reproduction in any medium so long as the original work is properly cited.


