Intelligent Technologies for Interactive Entertainment. 8th International Conference, INTETAIN 2016, Utrecht, The Netherlands, June 28–30, 2016, Revised Selected Papers

Research Article

Exploring User-Defined Gestures and Voice Commands to Control an Unmanned Aerial Vehicle

  • @INPROCEEDINGS{10.1007/978-3-319-49616-0_5,
        author={Ekaterina Peshkova and Martin Hitz and David Ahlstr{\"o}m},
        title={Exploring User-Defined Gestures and Voice Commands to Control an Unmanned Aerial Vehicle},
        proceedings={Intelligent Technologies for Interactive Entertainment. 8th International Conference, INTETAIN 2016, Utrecht, The Netherlands, June 28--30, 2016, Revised Selected Papers},
        proceedings_a={INTETAIN},
        year={2017},
        month={1},
        keywords={},
        doi={10.1007/978-3-319-49616-0_5}
    }
    
  • Ekaterina Peshkova
    Martin Hitz
    David Ahlström
    Year: 2017
    Exploring User-Defined Gestures and Voice Commands to Control an Unmanned Aerial Vehicle
    INTETAIN
    Springer
    DOI: 10.1007/978-3-319-49616-0_5
Ekaterina Peshkova1,*, Martin Hitz1,*, David Ahlström1,*
  • 1: Alpen-Adria-Universität Klagenfurt
*Contact email: ekaterina.peshkova@aau.at, martin.hitz@aau.at, david.ahlstroem@aau.at

Abstract

In this paper we follow a participatory design approach to explore what novice users find to be intuitive ways to control an Unmanned Aerial Vehicle (UAV). We gather users’ suggestions for suitable voice and gesture commands through an online survey and a video interview, and we also record the voice commands and gestures used by participants in a Wizard of Oz experiment in which participants believed they were manoeuvring a UAV. We identify commonalities in the data collected from the three elicitation methods and assemble a collection of voice and gesture command sets for navigating a UAV. Furthermore, to obtain a deeper understanding of why our participants chose the gestures and voice commands they did, we analyse and discuss the collected data in terms of mental models, and we identify three prevailing classes of mental models that likely guided many of our participants in their choice of voice and gesture commands.