Research Article
Sentiment Summarization Learning Evaluation Using LSTM (Long Short Term Memory) Algorithm
@INPROCEEDINGS{10.4108/eai.27-11-2021.2315533,
  author={Achmad Yogie Setiawan and I Gede Mahendra Darmawiguna and Gede Aditra Pradnyana},
  title={Sentiment Summarization Learning Evaluation Using LSTM (Long Short Term Memory) Algorithm},
  proceedings={Proceedings of the 4th International Conference on Vocational Education and Technology, IConVET 2021, 27 November 2021, Singaraja, Bali, Indonesia},
  publisher={EAI},
  proceedings_a={ICONVET},
  year={2022},
  month={2},
  keywords={sentiment analysis summarization lstm rogue},
  doi={10.4108/eai.27-11-2021.2315533}
}
- Achmad Yogie Setiawan
- I Gede Mahendra Darmawiguna
- Gede Aditra Pradnyana
Year: 2022
ICONVET
EAI
DOI: 10.4108/eai.27-11-2021.2315533
Abstract
Lecturer learning evaluations are texts containing student reviews of a lecturer's teaching performance. Because the evaluations are numerous, they are difficult for lecturers to analyze manually, so sentiment analysis techniques are needed to classify them. Even after classification, the evaluations remain long and convoluted texts. Text summarization is one solution for condensing a long text into a shorter, informative one. There are two approaches to text summarization, extractive and abstractive. This study applied the abstractive approach because the data were lecturer learning evaluations written freely by students. The Long Short Term Memory (LSTM) algorithm was used for both sentiment classification and text summarization. The sentiment classification results were evaluated with a confusion matrix by testing the model on evaluation data, while the summaries were evaluated with ROUGE (Recall-Oriented Understudy for Gisting Evaluation), which compares the system summaries with manual summaries written by experts. In the confusion matrix test, the accuracy was 0.902 and the f-measure was 0.921. In the ROUGE test, the positive evaluations scored 0.16 and the negative evaluations scored 0.2. The developed tokenizer did not store the tokens produced during training; as a result, predictions made after reloading the model were not as good as those obtained immediately after training.
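
Below is a minimal sketch, not the authors' implementation, of the two components the abstract describes: an LSTM-based sentiment classifier (here built with Keras) and a simple ROUGE-1 recall score comparing a system summary against an expert reference. The vocabulary size, layer widths, and other hyperparameters are illustrative assumptions; the abstract does not report them.

```python
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 10_000  # assumed vocabulary size (not reported in the abstract)


def build_sentiment_lstm() -> keras.Model:
    """Binary (positive/negative) sentiment classifier over tokenized reviews."""
    model = keras.Sequential([
        layers.Embedding(VOCAB_SIZE, 128),   # map token ids to dense vectors
        layers.LSTM(64),                     # encode the review sequence
        layers.Dense(1, activation="sigmoid")  # probability of positive sentiment
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model


def rouge_1_recall(system_summary: str, reference_summary: str) -> float:
    """ROUGE-1 recall: fraction of reference unigrams that appear in the system summary."""
    sys_tokens = set(system_summary.lower().split())
    ref_tokens = reference_summary.lower().split()
    if not ref_tokens:
        return 0.0
    overlap = sum(1 for tok in ref_tokens if tok in sys_tokens)
    return overlap / len(ref_tokens)


if __name__ == "__main__":
    model = build_sentiment_lstm()
    model.summary()
    # Hypothetical summary pair, for illustration only.
    print(rouge_1_recall("the lecturer explains clearly",
                         "the lecturer explains the material clearly and well"))
```

A real run would fit the classifier on tokenized, padded review sequences and persist the fitted tokenizer alongside the saved model; that persistence step is exactly what the abstract notes was missing, which degraded predictions after the model was reloaded.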