Digital Forensics and Cyber Crime. 14th EAI International Conference, ICDF2C 2023, New York City, NY, USA, November 30, 2023, Proceedings, Part II

Research Article

APTBert: Abstract Generation and Event Extraction from APT Reports

Cite (BibTeX)
  • @INPROCEEDINGS{10.1007/978-3-031-56583-0_14,
        author={Chenxin Zhou and Cheng Huang and Yanghao Wang and Zheng Zuo},
        title={APTBert: Abstract Generation and Event Extraction from APT Reports},
        proceedings={Digital Forensics and Cyber Crime. 14th EAI International Conference, ICDF2C 2023, New York City, NY, USA, November 30, 2023, Proceedings, Part II},
        proceedings_a={ICDF2C PART 2},
        year={2024},
        month={4},
      keywords={Advanced Persistent Threat, Event Extraction, Abstract Generation, Pre-training},
        doi={10.1007/978-3-031-56583-0_14}
    }
    
Chenxin Zhou1, Cheng Huang1,*, Yanghao Wang1, Zheng Zuo
  • 1: School of Cyber Science and Engineering
*Contact email: opcodesec@gmail.com

Abstract

Due to the rapid development of information technology, Advanced Persistent Threat (APT) attacks occur ever more frequently. An effective way to combat them is to quickly extract and integrate the roles of the attack events described in published APT reports, so that security professionals can further perceive, analyze, and prevent APT activity. With these issues in mind, an event extraction model for APT attacks, called APTBert, is proposed. The model feeds targeted text representations of security-field text, generated by the APTBert pre-training model, into a multi-head self-attention neural network for training, improving the accuracy of sequence labelling. In the experiments, we first pre-trained the APTBert model on 1,300 open-source APT attack reports collected from security vendors and forums. We then annotated 600 APT reports with event roles, which were used to train the extraction model and evaluate event extraction performance. Experimental results show that the proposed method outperforms traditional extraction methods such as BiLSTM in both training time and F1 score (77.4%).
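The pipeline the abstract describes, contextual token embeddings passed through multi-head self-attention and then a per-token classifier for event-role labelling, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the dimensions, token sequence, and randomly initialized weights are all illustrative stand-ins, and the random embeddings take the place of the real APTBert encoder output.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, n_heads, Wq, Wk, Wv, Wo):
    """X: (seq_len, d_model). Split d_model across heads, attend, concat, project."""
    seq_len, d_model = X.shape
    d_head = d_model // n_heads
    heads = []
    for h in range(n_heads):
        cols = slice(h * d_head, (h + 1) * d_head)
        Q, K, V = X @ Wq[:, cols], X @ Wk[:, cols], X @ Wv[:, cols]
        A = softmax(Q @ K.T / np.sqrt(d_head))  # (seq_len, seq_len) attention weights
        heads.append(A @ V)                     # per-head contextualized features
    return np.concatenate(heads, axis=-1) @ Wo

# Toy dimensions; a real model would use APTBert contextual embeddings here.
d_model, n_heads, n_labels = 8, 2, 5            # e.g. BIO-style event-role tags
tokens = ["APT28", "targeted", "government", "networks"]
X = rng.standard_normal((len(tokens), d_model))  # stand-in for encoder output

Wq, Wk, Wv, Wo = (rng.standard_normal((d_model, d_model)) for _ in range(4))
W_cls = rng.standard_normal((d_model, n_labels))

H = multi_head_self_attention(X, n_heads, Wq, Wk, Wv, Wo)
logits = H @ W_cls                  # per-token scores over role labels
labels = logits.argmax(axis=-1)     # sequence labelling: one role tag per token
```

In a trained system these weights would be learned, and `labels` would be decoded into event roles (attacker, target, tool, and so on); the sketch only shows the data flow from embeddings to per-token tags.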

Keywords
Advanced Persistent Threat, Event Extraction, Abstract Generation, Pre-training
Published
2024-04-03
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-56583-0_14
Copyright © 2023–2025 ICST