Mobile Multimedia Communications. 15th EAI International Conference, MobiMedia 2022, Virtual Event, July 22-24, 2022, Proceedings

Research Article

Event Annotation Enhanced Pre-trained Language Model in Event Extraction

Cite
@INPROCEEDINGS{10.1007/978-3-031-23902-1_22,
    author={Qisen Xi and Yizhi Ren and Guohua Wu and Qiuhua Wang and Lifeng Yuan and Zhen Zhang},
    title={Event Annotation Enhanced Pre-trained Language Model in Event Extraction},
    proceedings={Mobile Multimedia Communications. 15th EAI International Conference, MobiMedia 2022, Virtual Event, July 22-24, 2022, Proceedings},
    proceedings_a={MOBIMEDIA},
    year={2023},
    month={2},
    keywords={Event extraction; Event annotation; Pre-trained language model},
    doi={10.1007/978-3-031-23902-1_22}
}
Qisen Xi1, Yizhi Ren1, Guohua Wu1, Qiuhua Wang1, Lifeng Yuan1,*, Zhen Zhang1
  • 1: School of Cyberspace, Hangzhou Dianzi University
*Contact email: yuanlifeng@hdu.edu.cn

Abstract

Event extraction is a crucial task that aims to extract event information from text. Existing methods usually use pre-trained language models to extract events and have achieved state-of-the-art performance. However, these models neither account for the complexity of event structure nor make use of event knowledge. To address these problems, we propose a new framework, termed EABERT, that explicitly integrates event annotations into the pre-trained model. Specifically, event annotations are incorporated into the model input in the form "[CLS]sentence[SEP]event annotation[SEP]", which allows the model to encode the semantic relationship between text and event knowledge. To supply the model with appropriate event annotations, we further train an event type classifier with a bilateral-branch BERT network, improving the accuracy of the annotations. Experiments on the event extraction benchmark dataset ACE 2005 show that our framework achieves significant improvements over previous methods.
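
The paired-input mechanism described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: the backbone model (bert-base-uncased), the example sentence, and the annotation text for the ACE 2005 "End-Position" event type are all assumptions made for illustration. The HuggingFace tokenizer is used here because passing it two text segments yields exactly the "[CLS] sentence [SEP] event annotation [SEP]" layout the paper describes.

```python
# Minimal sketch (not the authors' implementation) of the paired input
# "[CLS] sentence [SEP] event annotation [SEP]" from the abstract.
# Assumptions: bert-base-uncased as the backbone; the sentence and the
# annotation text for the "End-Position" event type are hypothetical.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

sentence = "He was fired by the company last Monday."
event_annotation = "End-Position: a person stops working for an organization."

# Passing two segments makes the tokenizer emit [CLS] A [SEP] B [SEP]
# with distinct token_type_ids, so the encoder can relate the sentence
# (segment 0) to the event knowledge in the annotation (segment 1).
encoded = tokenizer(sentence, event_annotation, return_tensors="pt")

print(tokenizer.decode(encoded["input_ids"][0]))
# [CLS] he was fired by the company last monday. [SEP] end - position : ... [SEP]
print(encoded["token_type_ids"])  # 0s for the sentence, 1s for the annotation
```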

Keywords
Event extraction, Event annotation, Pre-trained language model
Published
2023-02-01
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-23902-1_22