Research Article
Multimodal Mobile Collaboration Prototype Used in a Find, Fix, and Tag Scenario
@INPROCEEDINGS{10.1007/978-3-642-36632-1_7,
  author    = {Burnett, Gregory and Wischgoll, Thomas and Finomore, Victor and Calvo, Andres},
  title     = {Multimodal Mobile Collaboration Prototype Used in a Find, Fix, and Tag Scenario},
  booktitle = {Mobile Computing, Applications, and Services. 4th International Conference, MobiCASE 2012, Seattle, WA, USA, October 11-12, 2012. Revised Selected Papers},
  publisher = {Springer},
  year      = {2013},
  month     = {2},
  keywords  = {Multimodal interfaces, mobile computing, remote collaboration},
  doi       = {10.1007/978-3-642-36632-1_7}
}
Gregory Burnett
Thomas Wischgoll
Victor Finomore
Andres Calvo
Year: 2013
Multimodal Mobile Collaboration Prototype Used in a Find, Fix, and Tag Scenario
MOBICASE
Springer
DOI: 10.1007/978-3-642-36632-1_7
Abstract
Given recent technological advancements in mobile devices, military research initiatives are investigating these devices as a means to support multimodal cooperative interactions. Military units execute dynamic combat and humanitarian missions while dismounted and on the move; paramount to their success is timely and effective information sharing and mission planning. In this paper, we describe a prototype multimodal collaborative Android application designed to support real-time sharing of battlefield perspective and the acquisition and dissemination of information among distributed operators. The prototype was demonstrated in a scenario in which teammates used different features of the software to collaboratively identify and deploy a virtual tracker-type device on hostile entities. Results showed significantly shorter completion times when users visually shared their perspectives than when they relied on verbal descriptors. Additionally, the use of shared video significantly reduced the number of utterances required to complete the task.