Interactivity, Game Creation, Design, Learning, and Innovation. 5th International Conference, ArtsIT 2016, and First International Conference, DLI 2016, Esbjerg, Denmark, May 2–3, 2016, Proceedings

Research Article

A Multimodal Interaction Framework for Blended Learning

  • @INPROCEEDINGS{10.1007/978-3-319-55834-9_24,
        author={Nikolaos Vidakis and Kalafatis Konstantinos and Georgios Triantafyllidis},
        title={A Multimodal Interaction Framework for Blended Learning},
        proceedings={Interactivity, Game Creation, Design, Learning, and Innovation. 5th International Conference, ArtsIT 2016, and First International Conference, DLI 2016, Esbjerg, Denmark, May 2--3, 2016, Proceedings},
        proceedings_a={ARTSIT \& DLI},
        year={2017},
        month={3},
        keywords={Multimodal human-computer interaction; Blended learning},
        doi={10.1007/978-3-319-55834-9_24}
    }
    
  • Nikolaos Vidakis
    Kalafatis Konstantinos
    Georgios Triantafyllidis
    Year: 2017
    A Multimodal Interaction Framework for Blended Learning
    ARTSIT & DLI
    Springer
    DOI: 10.1007/978-3-319-55834-9_24
Nikolaos Vidakis1,*, Kalafatis Konstantinos1,*, Georgios Triantafyllidis2,*
  • 1: Technological Educational Institute of Crete
  • 2: Aalborg University Copenhagen
*Contact email: nv@ie.teicrete.gr, kalafatiskwstas@gmail.com, gt@create.aau.dk

Abstract

Humans interact with each other using the five basic senses as input modalities, while sounds, gestures, facial expressions, etc. serve as output modalities. Multimodal interaction also occurs between humans and their surrounding environment, enhanced by further senses such as equilibrioception, the sense of balance. Computer interfaces, which can be regarded as another environment with which humans interact, lack the amalgamation of input and output modalities needed to provide close-to-natural interaction. Multimodal human-computer interaction has sought to provide alternative means of communicating with an application that are more natural than the traditional “windows, icons, menus, pointer” (WIMP) style. Despite the great number of devices in existence, most applications make use of a very limited set of modalities, most notably speech and touch. This paper describes a multimodal framework enabling the deployment of a wide variety of modalities, tailored appropriately for use in a blended learning environment.