Cognitive Computing and Cyber Physical Systems. 5th EAI International Conference, IC4S 2024, Bhimavaram, India, April 5–7, 2024, Proceedings, Part-I

Research Article

An Efficient Sentiment Classification Model Using Fusion of BERT and Deep Learning RNN Variants

Cite
BibTeX
  • @INPROCEEDINGS{10.1007/978-3-031-77075-3_22,
        author={Raja Rao PBV and M. Prasad and Kiran Sree Pokkuluri and P. Srikanth and Srinivasa Rao Dangeti and B. Venkateswara Rao},
        title={An Efficient Sentiment Classification Model Using Fusion of BERT and Deep Learning RNN Variants},
        proceedings={Cognitive Computing and Cyber Physical Systems. 5th EAI International Conference, IC4S 2024, Bhimavaram, India, April 5--7, 2024, Proceedings, Part-I},
        proceedings_a={IC4S},
        year={2025},
        month={2},
        keywords={Twitter Sentiment, Kaggle, BERT, RNN, LSTM, GRU},
        doi={10.1007/978-3-031-77075-3_22}
    }
    
Raja Rao PBV1,*, M. Prasad1, Kiran Sree Pokkuluri1, P. Srikanth1, Srinivasa Rao Dangeti1, B. Venkateswara Rao2
  • 1: Shri Vishnu Engineering College for Women (A)
  • 2: B V Raju Institute of Technology, Narsapur
*Contact email: rajaraopbv@gmail.com

Abstract

This paper explores the fusion of BERT with recurrent neural network (RNN) architectures, namely the simple RNN, LSTM, and GRU, for Twitter sentiment classification. For experimentation, we used the Kaggle Sentiment140 dataset, which contains tweets labelled as positive or negative. Initially, three deep learning models (RNN, LSTM, and GRU) were applied on their own and achieved accuracies of 86%, 91%, and 90%, respectively. By integrating BERT with these RNN variants, the proposed method enhances sentiment analysis on Twitter data. The methodology first tokenizes and encodes the tweets with the BERT tokenizer to obtain contextual embeddings; these embeddings are then fed into the RNN-based classifiers (simple RNN, LSTM, and GRU) to capture sequential dependencies within the text. Among the resulting hybrid models, BERT + LSTM achieves the highest accuracy at 93.5%, followed by BERT + GRU at 91% and BERT + RNN at 90%. These findings underscore the effectiveness of combining BERT with LSTM and GRU architectures for Twitter sentiment classification and suggest promising avenues for further research and real-world applications.
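
The pipeline described in the abstract can be sketched as follows, assuming PyTorch and the Hugging Face transformers library with a bert-base-uncased backbone; the hidden sizes, classification head, and label mapping are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn as nn
from transformers import BertTokenizer, BertModel

class BertLSTMClassifier(nn.Module):
    """BERT contextual embeddings fed into an LSTM classification head (sketch)."""
    def __init__(self, bert_name="bert-base-uncased", hidden_size=128, num_classes=2):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden_size,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, input_ids, attention_mask):
        # Contextual token embeddings from BERT (last hidden states).
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        embeddings = outputs.last_hidden_state            # (batch, seq_len, 768)
        # LSTM captures sequential dependencies over the token embeddings.
        lstm_out, _ = self.lstm(embeddings)
        # Use the representation at the final position for classification.
        return self.fc(lstm_out[:, -1, :])                # (batch, num_classes)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertLSTMClassifier()
batch = tokenizer(["I love this!", "Worst day ever."],
                  padding=True, truncation=True, max_length=64, return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
preds = logits.argmax(dim=-1)   # 0 = negative, 1 = positive (label mapping assumed)

Swapping nn.LSTM for nn.GRU or nn.RNN in the same head gives the other two hybrid variants (BERT + GRU and BERT + RNN) whose accuracies are reported above.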

Keywords
Twitter Sentiment, Kaggle, BERT, RNN, LSTM, GRU
Published
2025-02-09
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-77075-3_22
Copyright © 2024–2025 ICST