
Research Article

Suicidal Ideation Detection and Influential Keyword Extraction from Twitter using Deep Learning (SID)

Cite
BibTeX:
  • @ARTICLE{10.4108/eetpht.10.6042,
        author={Xie-Yi. G.},
        title={Suicidal Ideation Detection and Influential Keyword Extraction from Twitter using Deep Learning (SID)},
        journal={EAI Endorsed Transactions on Pervasive Health and Technology},
        volume={10},
        number={1},
        publisher={EAI},
        journal_a={PHAT},
        year={2024},
        month={12},
        keywords={attention mechanism, Bi-LSTM, deep learning, NLP, text classification},
        doi={10.4108/eetpht.10.6042}
    }
    
Plain Text:
  • Xie-Yi. G.
    Year: 2024
    Suicidal Ideation Detection and Influential Keyword Extraction from Twitter using Deep Learning (SID)
    PHAT
    EAI
    DOI: 10.4108/eetpht.10.6042
Xie-Yi. G.1,*
  • 1: Asia Pacific University of Technology & Innovation
*Contact email: tp056669@apu.edu.my

Abstract

INTRODUCTION: This paper focuses on building a text-analytics solution that helps suicide prevention communities detect suicidal signals in text data collected from online platforms and act to prevent tragedy. OBJECTIVES: The objective of the paper is to build a suicidal ideation detection (SID) model that classifies text as suicidal or non-suicidal, and a keyword extractor that extracts influential keywords, which are possible suicide risk factors, from suicidal text. METHODS: This paper proposes an attention-based Bi-LSTM model. The attention layer helps the deep learning model capture the keywords that most influence its classification decision, and these keywords, once extracted from the text, often reflect suicide risk factors or the reasons behind the suicidal ideation. RESULTS: Bi-LSTM with Word2Vec embedding has the highest F1-score, 0.95. However, the attention-based Bi-LSTM with Word2Vec embedding, with an F1-score of 0.94, is expected to perform better on new and unseen data because it shows a good-fit learning curve. CONCLUSION: The absence of a systematic approach to validating and examining the keywords extracted by the attention mechanism and the RAKE algorithm is a gap that needs to be resolved. Future work can focus on a systematic and standardised approach for validating the accuracy of the extracted keywords.
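The abstract only names the architecture; this page carries no implementation detail. The sketch below is an illustrative example, not the authors' code, of how an attention-based Bi-LSTM classifier in Keras can expose per-token attention weights that double as influential-keyword scores. The vocabulary size, sequence length, embedding dimension, layer sizes, and the TokenAttention layer are all assumptions made for the sketch.

# Illustrative sketch only: attention-based Bi-LSTM whose attention weights
# can be read out per token as "influential keyword" scores.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE = 20000   # assumed vocabulary size
MAX_LEN    = 50      # assumed maximum tweet length in tokens
EMBED_DIM  = 300     # typical Word2Vec dimensionality

class TokenAttention(layers.Layer):
    """Scores each timestep, returns the weighted sum plus the weights themselves."""
    def build(self, input_shape):
        self.w = self.add_weight(shape=(input_shape[-1], 1),
                                 initializer="glorot_uniform", name="attn_w")
        super().build(input_shape)

    def call(self, hidden_states):
        # hidden_states: (batch, timesteps, units)
        scores  = tf.tanh(tf.tensordot(hidden_states, self.w, axes=1))   # (batch, t, 1)
        weights = tf.nn.softmax(scores, axis=1)                          # attention over timesteps
        context = tf.reduce_sum(weights * hidden_states, axis=1)         # (batch, units)
        return context, tf.squeeze(weights, -1)                          # expose weights for keyword extraction

tokens   = layers.Input(shape=(MAX_LEN,), dtype="int32")
# In the paper the embedding matrix would be initialised from pretrained Word2Vec vectors.
embedded = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(tokens)
states   = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(embedded)
context, attn_weights = TokenAttention()(states)
output   = layers.Dense(1, activation="sigmoid")(context)   # suicidal vs non-suicidal

classifier = Model(tokens, output)
classifier.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Companion model exposing the attention weights so that, after classification,
# the highest-weighted tokens in a suicidal-labelled tweet can be reported as
# candidate risk-factor keywords.
keyword_scorer = Model(tokens, attn_weights)

dummy = np.random.randint(1, VOCAB_SIZE, size=(2, MAX_LEN))
print(classifier.predict(dummy).shape)      # (2, 1)       -> class probabilities
print(keyword_scorer.predict(dummy).shape)  # (2, MAX_LEN) -> per-token attention

In such a setup, the tokens with the highest attention weights in tweets classified as suicidal would be reported as influential keywords, optionally cross-checked against phrases produced by the RAKE algorithm mentioned in the conclusion.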

Keywords
attention mechanism, Bi-LSTM, deep learning, NLP, text classification
Received
2024-12-04
Accepted
2024-12-04
Published
2024-12-04
Publisher
EAI
http://dx.doi.org/10.4108/eetpht.10.6042

Copyright © 2024 Xie-Yi G., licensed to EAI. This is an open access article distributed under the terms of the CC BY-NC-SA 4.0 licence, which permits copying, redistributing, remixing, transforming, and building upon the material in any medium, so long as the original work is properly cited.

EBSCO, ProQuest, DBLP, DOAJ, Portico