
Research Article
Opti-Speech-VMT: Implementation and Evaluation
@inproceedings{10.1007/978-3-030-95593-9_19,
  author    = {Kumar, Hiranya G. and Lawn, Anthony R. and Prabhakaran, B. and Katz, William F.},
  title     = {Opti-Speech-VMT: Implementation and Evaluation},
  booktitle = {Body Area Networks: Smart IoT and Big Data for Intelligent Health Management. 16th EAI International Conference, BODYNETS 2021, Virtual Event, October 25--26, 2021, Proceedings},
  publisher = {Springer},
  year      = {2022},
  month     = {2},
  keywords  = {Speech, Tongue, Visual feedback, Electromagnetic articulography, Avatar, 3D model, Latency},
  doi       = {10.1007/978-3-030-95593-9_19}
}
Abstract
We describe Opti-Speech-VMT, a prototype tongue-tracking system that uses electromagnetic articulography (EMA) to provide visual feedback during oral movements. Opti-Speech-VMT is specialized for visuomotor tracking (VMT) experiments in which participants use a tongue sensor to follow an oscillating virtual target in the oral cavity. The algorithms for linear, curved, and custom trajectories are outlined, and new functionality is briefly presented. Because latency can affect accuracy in VMT tasks, we examined system latency at both the API and total framework levels. Using a video camera, we compared the movement of a sensor (placed on an experimenter’s finger) against an oscillating target displayed on a computer monitor. The average total latency was 87.3 ms, with 69.8 ms attributable to the API and 17.4 ms to Opti-Speech-VMT. These results indicate that Opti-Speech-VMT itself adds minimal latency, and underscore the importance of the EMA hardware and signal-processing optimizations used.
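The two ideas in the abstract, an oscillating target moving along a linear trajectory and a latency estimate obtained by comparing the target trace against a delayed sensor trace, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the paper measured latency with a video camera, whereas this sketch estimates the lag by cross-correlation, and all function names, sampling rates, and amplitudes below are assumptions.

```python
import numpy as np

def linear_oscillation(t, origin, direction, amplitude_mm, freq_hz):
    """Target positions oscillating sinusoidally along a line
    (illustrative version of a linear VMT trajectory; parameters assumed)."""
    phase = np.sin(2.0 * np.pi * freq_hz * t)            # -1 .. 1
    return origin + np.outer(amplitude_mm * phase, direction)

def estimate_latency_ms(target_sig, sensor_sig, sample_rate_hz):
    """Estimate how far the sensor trace lags the target trace by
    cross-correlating the two zero-mean signals (assumed analysis method)."""
    a = sensor_sig - sensor_sig.mean()
    b = target_sig - target_sig.mean()
    corr = np.correlate(a, b, mode="full")
    lag_samples = int(corr.argmax()) - (len(b) - 1)
    return 1000.0 * lag_samples / sample_rate_hz

fs = 100.0                                   # assumed sampling rate, Hz
t = np.arange(0.0, 10.0, 1.0 / fs)
origin = np.array([0.0, 0.0, 0.0])
direction = np.array([1.0, 0.0, 0.0])        # unit vector along the trajectory
target = linear_oscillation(t, origin, direction, amplitude_mm=10.0, freq_hz=0.5)

# Project onto the trajectory axis and simulate a sensor delayed by 9 samples,
# roughly the 87 ms total latency reported in the abstract.
target_sig = target @ direction
sensor_sig = np.roll(target_sig, 9)

print(estimate_latency_ms(target_sig, sensor_sig, fs))   # ≈ 90 ms
```

The cross-correlation peak recovers the simulated 9-sample delay (90 ms at 100 Hz), which is the same order as the 87.3 ms total latency the authors report.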