sc 23(2): e2

Research Article

Propaganda Detection And Challenges Managing Smart Cities Information On Social Media

  • @ARTICLE{10.4108/eetsc.v7i2.2925,
        author={Pir Noman Ahmad and Khalid Khan},
        title={Propaganda Detection And Challenges Managing Smart Cities Information On Social Media},
        journal={EAI Endorsed Transactions on Smart Cities},
        volume={7},
        number={2},
        publisher={EAI},
        journal_a={SC},
        year={2023},
        month={3},
        keywords={Machine translation, Span, linguistic, neural architectures, BiLSTM},
        doi={10.4108/eetsc.v7i2.2925}
    }
    
Pir Noman Ahmad1,*, Khalid Khan2
  • 1: School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
  • 2: Computer Science and Software Engineering, University of Stirling, UK
*Contact email: ahmadpir40@gmail.com

Abstract

Misinformation, false news, and various forms of propaganda have increased as a consequence of the rapid spread of information on social media. The spread of Covid-19 deeply transformed citizens' day-to-day lives through the introduction of new ways of working and of accessing services based on smart technologies. Distinguishing propagandistic social media content from high-quality information about smart cities is the central challenge addressed in this study. From a natural language processing perspective, we have developed a system that automatically extracts information from bilingual sources. This information is either in Urdu or English (Ur or Eng), and we apply machine translation to obtain the target language. We explore different neural architectures and extract linguistic layout and relevant features from the bilingual corpus. Moreover, we fine-tune RoBERTa and ensemble BiLSTM, CRF, and BiRNN models. Our solution uses fine-tuned RoBERTa, a pretrained language model, to perform word-level classification. This paper provides insight into the model's learning abilities by analyzing its attention heads and the model's evaluation results.
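
For illustration, the word-level classification step described in the abstract could look like the minimal sketch below, assuming a standard Hugging Face token-classification setup; the "roberta-base" checkpoint name, the two-tag label set, and the example sentence are placeholders, not the authors' released code or data.

    # Minimal sketch (not the authors' implementation): word-level propaganda
    # tagging with a RoBERTa token-classification head via Hugging Face
    # transformers. In practice the fine-tuned checkpoint would be loaded
    # instead of the untrained placeholder "roberta-base".
    import torch
    from transformers import RobertaTokenizerFast, RobertaForTokenClassification

    LABELS = ["O", "PROP"]  # assumed tag set: outside vs. inside a propaganda span

    tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
    model = RobertaForTokenClassification.from_pretrained(
        "roberta-base", num_labels=len(LABELS)
    )
    model.eval()

    def tag_sentence(text: str):
        """Return (token, predicted label) pairs for one English sentence."""
        enc = tokenizer(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**enc).logits              # shape: (1, seq_len, num_labels)
        preds = logits.argmax(dim=-1).squeeze(0)      # one label id per token
        tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
        return [(tok, LABELS[p]) for tok, p in zip(tokens, preds.tolist())]

    print(tag_sentence("Smart city sensors will solve every urban problem overnight."))

For the Urdu side of the bilingual corpus, the same tagger would be applied after machine translation into English, as described above.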