Multimedia Technology and Enhanced Learning. 5th EAI International Conference, ICMTEL 2023, Leicester, UK, April 28-29, 2023, Proceedings, Part IV

Research Article

A Sign Language Recognition Based on Optimized Transformer Target Detection Model

Cite
BibTeX
  • @INPROCEEDINGS{10.1007/978-3-031-50580-5_16,
        author={Li Liu and Zhiwei Yang and Yuqi Liu and Xinyu Zhang and Kai Yang},
        title={A Sign Language Recognition Based on Optimized Transformer Target Detection Model},
        proceedings={Multimedia Technology and Enhanced Learning. 5th EAI International Conference, ICMTEL 2023, Leicester, UK, April 28-29, 2023, Proceedings, Part IV},
        proceedings_a={ICMTEL PART 4},
        year={2024},
        month={2},
        keywords={Sign Language Recognition, Target Detection Model, Neural Network},
        doi={10.1007/978-3-031-50580-5_16}
    }
    
Plain Text
  • Li Liu, Zhiwei Yang, Yuqi Liu, Xinyu Zhang, Kai Yang. A Sign Language Recognition Based on Optimized Transformer Target Detection Model. ICMTEL PART 4. Springer, 2024. DOI: 10.1007/978-3-031-50580-5_16
Li Liu1, Zhiwei Yang1, Yuqi Liu1, Xinyu Zhang1, Kai Yang1,*
  • 1: Nanjing Normal University of Special Education
*Contact email: Yk@njts.edu.cn

Abstract

Sign language is the communication medium between deaf and hearing people and has its own grammatical rules. Compared with isolated word recognition, continuous sign language recognition is more context-dependent, semantically complex, and harder to segment temporally. Existing approaches still leave room for improvement in recognition accuracy, robustness to background interference, and resistance to overfitting. The encoder-decoder structure of the Transformer model can be applied to sign language recognition, but its position encoding method and multi-head self-attention mechanism can likewise be improved. This paper proposes a sign language recognition algorithm based on an improved Transformer target detection network model (SL-OTT). The method computes each word vector in a continuous sign language sentence over multiple passes using a parameterized, multiplexed position encoding, so that the positional relationship between words is captured accurately; it adds learnable memory key-value pairs to the attention module to form a persistent memory module; and it expands the number of attention heads and the embedding dimension in equal proportion through a linear high-dimensional mapping. The proposed method achieves competitive recognition results on the most authoritative continuous sign language dataset.
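
The abstract names three modifications to the standard Transformer: a parameterized (learnable) position encoding, attention augmented with learnable memory key-value pairs (a persistent memory module), and a proportional expansion of attention heads and embedding dimension. The PyTorch sketch below illustrates the first two ideas only, as a minimal reading of the abstract; the module names, dimensions, and number of memory slots are hypothetical and are not taken from the paper's actual SL-OTT implementation.

```python
# Minimal sketch of a learnable positional encoding and a multi-head attention
# block with persistent (learnable) memory key-value pairs. Hypothetical names
# and sizes; not the authors' SL-OTT code.
import torch
import torch.nn as nn


class LearnablePositionalEncoding(nn.Module):
    """Adds a trainable position embedding to each word vector in the sequence."""
    def __init__(self, max_len: int, d_model: int):
        super().__init__()
        self.pos_embed = nn.Parameter(torch.zeros(1, max_len, d_model))
        nn.init.trunc_normal_(self.pos_embed, std=0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        return x + self.pos_embed[:, : x.size(1)]


class PersistentMemoryAttention(nn.Module):
    """Multi-head self-attention whose keys and values are extended with
    learnable memory slots shared across all inputs."""
    def __init__(self, d_model: int, num_heads: int, num_mem: int = 16):
        super().__init__()
        assert d_model % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = d_model // num_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # Persistent memory key-value pairs, learned during training.
        self.mem_k = nn.Parameter(torch.randn(num_heads, num_mem, self.head_dim) * 0.02)
        self.mem_v = nn.Parameter(torch.randn(num_heads, num_mem, self.head_dim) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        def split(z):
            # (batch, seq, d_model) -> (batch, heads, seq, head_dim)
            return z.view(b, t, self.num_heads, self.head_dim).transpose(1, 2)

        q, k, v = split(q), split(k), split(v)
        # Append the learnable memory slots to the keys and values only.
        mem_k = self.mem_k.unsqueeze(0).expand(b, -1, -1, -1)
        mem_v = self.mem_v.unsqueeze(0).expand(b, -1, -1, -1)
        k = torch.cat([k, mem_k], dim=2)
        v = torch.cat([v, mem_v], dim=2)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.head_dim ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, t, d)
        return self.out(out)


# Example: a batch of 4 sign-language sentences, each a sequence of 20 word vectors.
x = torch.randn(4, 20, 256)
x = LearnablePositionalEncoding(max_len=100, d_model=256)(x)
y = PersistentMemoryAttention(d_model=256, num_heads=8)(x)
print(y.shape)  # torch.Size([4, 20, 256])
```

Because the memory slots are concatenated only to the keys and values, the query length and hence the output sequence length are unchanged, so this style of persistent memory can be dropped into an existing Transformer encoder without altering the rest of the network.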

Keywords
Sign Language Recognition, Target Detection Model, Neural Network
Published
2024-02-21
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-50580-5_16
Copyright © 2023–2025 ICST
Indexed in: EBSCO, ProQuest, DBLP, DOAJ, Portico