Cloud Computing. 10th EAI International Conference, CloudComp 2020, Qufu, China, December 11-12, 2020, Proceedings

Research Article

IAS-BERT: An Information Gain Association Vector Semi-supervised BERT Model for Sentiment Analysis

Cite
BibTeX
@INPROCEEDINGS{10.1007/978-3-030-69992-5_3,
    author={Linkun Zhang and Yuxia Lei and Zhengyan Wang},
    title={IAS-BERT: An Information Gain Association Vector Semi-supervised BERT Model for Sentiment Analysis},
    proceedings={Cloud Computing. 10th EAI International Conference, CloudComp 2020, Qufu, China, December 11-12, 2020, Proceedings},
    proceedings_a={CLOUDCOMP},
    year={2021},
    month={2},
    keywords={Information gain; Semi-supervised; Local feature},
    doi={10.1007/978-3-030-69992-5_3}
}
Linkun Zhang1,*, Yuxia Lei1, Zhengyan Wang1
  • 1: Qufu Normal University, Rizhao
*Contact email: ilinkzhang@gmail.com

Abstract

With the availability of large-scale corpora, statistics-based models have become the mainstream approach in Natural Language Processing (NLP). The Bidirectional Encoder Representations from Transformers (BERT) model, one of these, has achieved excellent results on a variety of NLP tasks since its introduction. It nevertheless has shortcomings, such as a limited ability to extract local features and exploding gradients during training. After analyzing these shortcomings, this paper proposes an Information-gain Association Vector Semi-supervised Bidirectional Encoder Representations from Transformers (IAS-BERT) model, which improves the ability to capture local features. Considering the influence of a feature's polarity on overall sentiment and the association between two word embeddings, we apply information gain to the training corpus. The information gain results are then used as annotations on the training corpus to generate new word embeddings. In addition, we use forward matching to reduce the computational overhead of IAS-BERT. We evaluate the model on a sentiment analysis dataset, where it achieves good results.
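The information-gain step described above can be illustrated with a small sketch. This is a generic illustration of scoring a term's sentiment discriminativeness by information gain (H(C) − H(C | term)), not the authors' implementation; the toy corpus and function names are assumptions.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label distribution."""
    if not labels:
        return 0.0
    total = len(labels)
    counts = Counter(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(docs, labels, term):
    """IG of `term` for the class labels: H(C) - H(C | term present/absent)."""
    with_term = [y for d, y in zip(docs, labels) if term in d]
    without_term = [y for d, y in zip(docs, labels) if term not in d]
    n = len(labels)
    conditional = (len(with_term) / n) * entropy(with_term) \
                + (len(without_term) / n) * entropy(without_term)
    return entropy(labels) - conditional

# Toy sentiment corpus (token sets); labels: 1 = positive, 0 = negative
docs = [{"great", "movie"}, {"terrible", "movie"},
        {"great", "acting"}, {"boring", "plot"}]
labels = [1, 0, 1, 0]

print(information_gain(docs, labels, "great"))   # perfectly discriminative -> 1.0
print(information_gain(docs, labels, "movie"))   # uninformative -> 0.0
```

In the paper's setting, such per-term scores would serve as weak (semi-supervised) annotations that bias the word embeddings toward sentiment-bearing local features.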

Keywords
Information gain; Semi-supervised; Local feature
Published
2021-02-13
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-030-69992-5_3
Copyright © 2020–2025 ICST
Indexed in: EBSCO, ProQuest, DBLP, DOAJ, Portico