
Research Article

Evaluating Performance of Conversational Bot Using Seq2Seq Model and Attention Mechanism

@ARTICLE{10.4108/eetsis.5457,
  author={Karandeep Saluja and Shashwat Agarwal and Sanjeev Kumar and Tanupriya Choudhury},
  title={Evaluating Performance of Conversational Bot Using Seq2Seq Model and Attention Mechanism},
  journal={EAI Endorsed Transactions on Scalable Information Systems},
  volume={11},
  number={6},
  publisher={EAI},
  journal_a={SIS},
  year={2024},
  month={3},
  keywords={Seq2Seq, Attention Mechanism, Perplexity, BLEU, ROUGE},
  doi={10.4108/eetsis.5457}
}
    
Karandeep Saluja1, Shashwat Agarwal1, Sanjeev Kumar1,*, Tanupriya Choudhury2
  • 1: University of Petroleum and Energy Studies
  • 2: Graphic Era University
*Contact email: sanjeevkumar@outlook.in

Abstract

The chatbot employs a Sequence-to-Sequence (Seq2Seq) model with an attention mechanism to interpret and respond to user inputs effectively. The pipeline consists of data gathering, data preprocessing, the Seq2Seq model, and training and tuning. Preprocessing removes irrelevant data before converting the text into numerical form. The Seq2Seq model comprises two components, an encoder and a decoder; together with the attention mechanism, these support dialogue management, enabling the model to answer the user accurately and relevantly. The bot's output is generated entirely in natural language. Once the Seq2Seq model is built, it is trained on the preprocessed data, minimizing the loss between the predicted output and the ground-truth output. Performance is computed using metrics such as perplexity, BLEU score, and ROUGE score on a held-out validation set. To meet non-functional requirements, the system must maintain a response time of under one second with an accuracy target exceeding 90%.
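To make the evaluation metrics concrete, the sketch below shows how two of the quantities named in the abstract are typically computed: perplexity as the exponential of the mean negative log-likelihood per token, and a simplified sentence-level BLEU as the geometric mean of modified n-gram precisions with a brevity penalty. This is a minimal, standard-library-only illustration; the function names and the choice of max_n are ours, not from the paper, and production evaluation would normally use a reference implementation such as NLTK or SacreBLEU.

```python
import math
from collections import Counter

def perplexity(token_log_probs):
    """Perplexity = exp of the mean negative log-likelihood per token.

    token_log_probs: log-probability the model assigned to each
    ground-truth token in the held-out set.
    """
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

def bleu(candidate, reference, max_n=2):
    """Simplified sentence-level BLEU: geometric mean of modified
    n-gram precisions up to max_n, times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        # Count n-grams in candidate and reference; clip candidate
        # counts by the reference counts (modified precision).
        cand = Counter(tuple(candidate[i:i + n])
                       for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n])
                      for i in range(len(reference) - n + 1))
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0  # geometric mean collapses if any precision is zero
    # Brevity penalty discourages candidates shorter than the reference.
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A perfectly matching candidate scores a BLEU of 1.0, and a model assigning uniform probability 1/4 to every token has a perplexity of exactly 4, which makes both functions easy to sanity-check.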

Keywords
Seq2Seq, Attention Mechanism, Perplexity, BLEU, ROUGE
Received
2023-12-10
Accepted
2024-03-11
Published
2024-03-18
Publisher
EAI
http://dx.doi.org/10.4108/eetsis.5457

“Copyright © 2024 K. Saluja et al., licensed to EAI. This is an open access article distributed under the terms of the CC BY-NC-SA 4.0, which permits copying, redistributing, remixing, transformation, and building upon the material in any medium so long as the original work is properly cited.”
