Broadband Communications, Networks, and Systems. 14th EAI International Conference, BROADNETS 2024, Hyderabad, India, February 16–17, 2024, Proceedings, Part II

Research Article

Vision-Based Sign Language Recognition and Multilingual Translation for Facilitating Deaf and Mute Communication

Cite (BibTeX)

@INPROCEEDINGS{10.1007/978-3-031-81171-5_12,
    author={S. V. Vasantha and A. Ashwini and M. Avinash and M. Yuvaraj and R. Manisha and Shirina Samreen},
    title={Vision-Based Sign Language Recognition and Multilingual Translation for Facilitating Deaf and Mute Communication},
    proceedings={Broadband Communications, Networks, and Systems. 14th EAI International Conference, BROADNETS 2024, Hyderabad, India, February 16--17, 2024, Proceedings, Part II},
    proceedings_a={BROADNETS PART 2},
    year={2025},
    month={2},
    keywords={Sign Language, Object Detection, Language Translation, Vision-based Interface, SSD MobileNetV2},
    doi={10.1007/978-3-031-81171-5_12}
}
S. V. Vasantha1, A. Ashwini1, M. Avinash1,*, M. Yuvaraj1, R. Manisha1, Shirina Samreen2
  • 1: Department of CSE
  • 2: College of Computer and Information Sciences, Majmaah University
*Contact email: machikaavinash@gmail.com

Abstract

Sign language serves as the primary means of communication for individuals who are deaf and mute. Communicating with people who do not understand sign language, however, poses a significant challenge: because sign language differs structurally from written and spoken languages, a communication barrier exists, and deaf and mute individuals must rely heavily on visual communication. To address this issue, a vision-based interface system has been developed to facilitate communication between deaf and mute individuals and the broader public. The system translates sign language gestures into text, enabling those unfamiliar with sign language to readily comprehend the message. It analyzes real-time video to identify and recognize signs, then converts these visual inputs into English and other native languages. In this paper, sign recognition is performed with the SSD MobileNetV2 object-detection model, and the recognized text is translated into French and Japanese through Google API translator services. The system achieved an overall accuracy of 0.98, with individual sign tokens reaching average accuracy scores of 0.99 for “Hello,” 0.99 for “I love you,” 0.98 for “Thank you,” 0.97 for “Yes,” 0.96 for “No,” and 0.96 for “Help.”
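The pipeline the abstract describes can be illustrated with a short sketch. The following is a minimal, hypothetical Python example, not the authors' code: it assumes a TensorFlow SavedModel export of the trained SSD MobileNetV2 detector and uses the googletrans client as one way to reach the Google Translate API. The model path, label map, and confidence threshold are illustrative assumptions, not values taken from the paper.

# Minimal sketch of the recognition-and-translation pipeline (illustrative,
# not the authors' implementation). Assumes a TensorFlow SavedModel export
# of the trained SSD MobileNetV2 detector; paths, label map, and threshold
# are hypothetical.
import cv2
import numpy as np
import tensorflow as tf
from googletrans import Translator  # pip install googletrans==4.0.0rc1

MODEL_DIR = "exported_model/saved_model"   # hypothetical export path
LABELS = {1: "Hello", 2: "I love you", 3: "Thank you",
          4: "Yes", 5: "No", 6: "Help"}    # the six sign tokens in the paper
SCORE_THRESHOLD = 0.8                      # assumed confidence cut-off

detect_fn = tf.saved_model.load(MODEL_DIR)  # serving signature is callable
translator = Translator()

cap = cv2.VideoCapture(0)                   # real-time webcam feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # The exported detector expects a batched uint8 RGB tensor [1, H, W, 3].
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    batch = tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8)
    detections = detect_fn(batch)

    # Detections come back sorted by score, so index 0 is the best candidate.
    score = float(detections["detection_scores"][0][0])
    label_id = int(detections["detection_classes"][0][0])
    if score >= SCORE_THRESHOLD:
        english = LABELS.get(label_id, "unknown")
        # Translate the recognized sign into French and Japanese.
        french = translator.translate(english, dest="fr").text
        japanese = translator.translate(english, dest="ja").text
        print(f"{english} | fr: {french} | ja: {japanese}")

    cv2.imshow("Sign Language Recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):   # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()

A real deployment would debounce repeated detections of the same sign across consecutive frames rather than calling the translation service on every frame.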

Keywords
Sign Language, Object Detection, Language Translation, Vision-based Interface, SSD MobileNetV2
Published
2025-02-07
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-81171-5_12
Copyright © 2024–2025 ICST