Research Article
Classification of Twitter Comments About the Image of the People's Representative Council (DPR) Using the K-Nearest Neighbor (K-NN) Method and Naïve Bayes
@INPROCEEDINGS{10.4108/eai.14-3-2019.2292042,
  author={Tri Ginanjar Laksana and Putri Rizqiyah and Rima Dias Ramadhani and Novanda Alim S.N},
  title={Classification of Twitter Comments About the Image of the People's Representative Council (DPR) Using the K-Nearest Neighbor (K-NN) Method and Na\"{i}ve Bayes},
  proceedings={Proceedings of the 1st International Conference of Global Education and Society Science, ICOGESS 2019, 14 March, Medan, North Sumatera, Indonesia},
  publisher={EAI},
  proceedings_a={ICOGESS},
  year={2020},
  month={2},
  keywords={accuracy; dpr; k-nn; classification; naive bayes; twitter},
  doi={10.4108/eai.14-3-2019.2292042}
}
Tri Ginanjar Laksana
Putri Rizqiyah
Rima Dias Ramadhani
Novanda Alim S.N
Year: 2020
ICOGESS
EAI
DOI: 10.4108/eai.14-3-2019.2292042
Abstract
Twitter is no longer only a platform for microblogging messages; it has become a place where people voice their aspirations. In 2018 the DPR (People's Representative Council of Indonesia) received a great deal of public criticism, especially on Twitter, so the data used in this study consist of tweets reflecting the public image of the DPR, labeled as positive or negative. The dataset contains 600 tweets: 500 for training and 100 for testing. Two classification algorithms are compared: K-NN, which assigns a sample to the majority class among its nearest neighbors, and Naive Bayes, which is based on probability and statistics. After data normalization, K-NN achieved an accuracy of 80% at k = 19 and k = 20, while Naive Bayes achieved 77%. In this case K-NN performed better than Naive Bayes, because its accuracy can be evaluated repeatedly with different values of k until the best result is found, whereas the accuracy of Naive Bayes is computed only once. Although K-NN obtained a higher accuracy than Naive Bayes, Naive Bayes still performed well on this classification task.
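The paper does not include code; the following is a minimal sketch of the comparison described in the abstract, under assumed tooling (scikit-learn, TF-IDF features) and with placeholder tweets, since the authors' actual preprocessing and normalization steps are not reproduced here. It illustrates the key procedural difference the abstract points to: K-NN is evaluated over a range of k values, while Naive Bayes is trained and evaluated once.

# Sketch only: scikit-learn and TF-IDF are assumptions, and the tweets below are
# hypothetical placeholders standing in for the study's 500 training / 100 test tweets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

train_texts = ["kinerja dpr bagus", "dpr mengecewakan rakyat"]  # hypothetical examples
train_labels = [1, 0]                                           # 1 = positive, 0 = negative
test_texts = ["dpr bekerja dengan baik"]
test_labels = [1]

# Convert tweets to TF-IDF feature vectors.
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_texts)
X_test = vectorizer.transform(test_texts)

# K-NN: try several values of k and keep the best test accuracy,
# mirroring the repeated evaluation described in the abstract.
best_k, best_knn_acc = None, 0.0
for k in range(1, min(21, len(train_texts) + 1)):
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_train, train_labels)
    acc = accuracy_score(test_labels, knn.predict(X_test))
    if acc > best_knn_acc:
        best_k, best_knn_acc = k, acc

# Naive Bayes: a single probabilistic model, evaluated once.
nb = MultinomialNB()
nb.fit(X_train, train_labels)
nb_acc = accuracy_score(test_labels, nb.predict(X_test))

print(f"Best K-NN accuracy: {best_knn_acc:.2f} at k = {best_k}")
print(f"Naive Bayes accuracy: {nb_acc:.2f}")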