Body Area Networks. Smart IoT and Big Data for Intelligent Health Management. 16th EAI International Conference, BODYNETS 2021, Virtual Event, October 25-26, 2021, Proceedings

Research Article

Opti-Speech-VMT: Implementation and Evaluation

Cite
@INPROCEEDINGS{10.1007/978-3-030-95593-9_19,
  author={Hiranya G. Kumar and Anthony R. Lawn and B. Prabhakaran and William F. Katz},
  title={Opti-Speech-VMT: Implementation and Evaluation},
  proceedings={Body Area Networks. Smart IoT and Big Data for Intelligent Health Management. 16th EAI International Conference, BODYNETS 2021, Virtual Event, October 25-26, 2021, Proceedings},
  proceedings_a={BODYNETS},
  year={2022},
  month={2},
  keywords={Speech Tongue Visual feedback Electromagnetic articulography Avatar 3D model Latency},
  doi={10.1007/978-3-030-95593-9_19}
}
Hiranya G. Kumar (1,*), Anthony R. Lawn (1), B. Prabhakaran (1), William F. Katz (1)
  • 1: University of Texas at Dallas, Richardson
*Contact email: hiranya@utdallas.edu

Abstract

We describe Opti-Speech-VMT, a prototype tongue-tracking system that uses electromagnetic articulography (EMA) to permit visual feedback during oral movements. Opti-Speech-VMT is specialized for visuomotor tracking (VMT) experiments in which participants follow an oscillating virtual target in the oral cavity using a tongue sensor. The algorithms for linear, curved, and custom trajectories are outlined, and new functionality is briefly presented. Because latency can potentially affect accuracy in VMT tasks, we examined system latency at both the API and total framework levels. Using a video camera, we compared the movement of a sensor (placed on an experimenter's finger) against an oscillating target displayed on a computer monitor. The average total latency was 87.3 ms, with 69.8 ms attributable to the API and 17.4 ms to Opti-Speech-VMT. These results indicate minimal reduction in performance due to Opti-Speech-VMT and suggest the importance of the EMA hardware and signal-processing optimizations used.
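
To make the VMT task and the latency analysis concrete, the short Python sketch below generates a sinusoidally oscillating linear target and estimates how far a tracked sensor trace lags it by cross-correlation. This is a minimal illustration under stated assumptions: the sampling rate, oscillation frequency, function names, and the cross-correlation lag estimate are not taken from the paper, which measured latency from video of a finger-mounted sensor and the on-screen target.

    import numpy as np

    # Illustrative sketch only: the sampling rate, oscillation frequency, and
    # the cross-correlation lag estimate are assumptions, not the paper's method.

    FRAME_RATE_HZ = 100.0   # assumed sensor/display update rate
    OSC_FREQ_HZ = 0.5       # assumed target oscillation frequency

    def linear_target(t, p_start, p_end, freq_hz=OSC_FREQ_HZ):
        """Virtual target oscillating sinusoidally between two 3D points."""
        p_start = np.asarray(p_start, dtype=float)
        p_end = np.asarray(p_end, dtype=float)
        phase = 0.5 * (1.0 - np.cos(2.0 * np.pi * freq_hz * t))  # 0 -> 1 -> 0
        return p_start + phase[:, None] * (p_end - p_start)

    def estimate_lag_ms(target_1d, sensor_1d, frame_rate_hz=FRAME_RATE_HZ):
        """Estimate how many milliseconds the sensor trace lags the target."""
        tgt = target_1d - target_1d.mean()
        sen = sensor_1d - sensor_1d.mean()
        xcorr = np.correlate(sen, tgt, mode="full")
        lag_frames = int(np.argmax(xcorr)) - (len(tgt) - 1)
        return 1000.0 * lag_frames / frame_rate_hz

    if __name__ == "__main__":
        t = np.arange(0.0, 10.0, 1.0 / FRAME_RATE_HZ)
        target = linear_target(t, [0.0, 0.0, 0.0], [10.0, 0.0, 0.0])
        sensor = np.roll(target, 9, axis=0)  # simulate ~90 ms end-to-end delay
        print(f"estimated lag: {estimate_lag_ms(target[:, 0], sensor[:, 0]):.1f} ms")

Cross-correlating a single coordinate of the two trajectories is just one simple way to turn recorded traces into a lag estimate; the figures reported above come from the video-based comparison described in the abstract.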

Keywords
Speech, Tongue, Visual feedback, Electromagnetic articulography, Avatar, 3D model, Latency
Published
2022-02-11
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-030-95593-9_19