Research Article
A Multimodal Interaction Framework for Blended Learning
@ARTICLE{10.4108/eai.4-9-2017.153057,
  author    = {N. Vidakis},
  title     = {A Multimodal Interaction Framework for Blended Learning},
  journal   = {EAI Endorsed Transactions on Creative Technologies},
  journal_a = {CT},
  volume    = {4},
  number    = {10},
  year      = {2017},
  month     = {1},
  publisher = {EAI},
  keywords  = {Multimodal Human-Computer Interaction, Blended Learning},
  doi       = {10.4108/eai.4-9-2017.153057}
}
- N. Vidakis
Year: 2017
DOI: 10.4108/eai.4-9-2017.153057
Abstract
Humans interact with each other using the five basic senses as input modalities, while sounds, gestures, facial expressions, etc. serve as output modalities. Multimodal interaction also takes place between humans and their surrounding environment, enhanced with further senses such as equilibrioception, the sense of balance. Computer interfaces, which can be regarded as yet another environment that humans interact with, lack this amalgamation of input and output modalities and therefore fall short of close-to-natural interaction. Multimodal human-computer interaction has sought to provide alternative means of communicating with an application that are more natural than the traditional “windows, icons, menus, pointer” (WIMP) style. Despite the great number of input devices in existence, most applications make use of a very limited set of modalities, most notably speech and touch. This paper describes a multimodal framework that enables the deployment of a wide variety of modalities, tailored for use in a blended learning environment, and introduces COALS, a unified and effective framework for multimodal interaction.
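As a rough, hypothetical sketch of the input-amalgamation idea outlined above (not the COALS implementation described in the paper; all class and field names here are illustrative), the following Python snippet normalizes events from heterogeneous modalities into a common representation and dispatches them to a single application handler:

```python
from dataclasses import dataclass, field
from typing import Callable, List
import time


@dataclass
class ModalEvent:
    """A modality-agnostic input event (speech, touch, gesture, ...)."""
    modality: str                                   # e.g. "speech", "touch"
    action: str                                     # normalized action, e.g. "select"
    payload: dict = field(default_factory=dict)     # modality-specific details
    timestamp: float = field(default_factory=time.time)


class ModalityFusion:
    """Collects events from different input devices and forwards them,
    in a unified form, to the registered application handlers."""

    def __init__(self) -> None:
        self._handlers: List[Callable[[ModalEvent], None]] = []

    def subscribe(self, handler: Callable[[ModalEvent], None]) -> None:
        self._handlers.append(handler)

    def emit(self, event: ModalEvent) -> None:
        for handler in self._handlers:
            handler(event)


# Usage: a spoken command and a touch gesture map to the same application action.
fusion = ModalityFusion()
fusion.subscribe(lambda e: print(f"{e.modality} -> {e.action} {e.payload}"))
fusion.emit(ModalEvent("speech", "select", {"target": "exercise-3"}))
fusion.emit(ModalEvent("touch", "select", {"x": 120, "y": 240}))
```

In such a design, the application reacts to normalized actions rather than raw device input, which is one common way to let new modalities be added without changing application logic.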
Copyright © 2017 N. Vidakis, licensed to EAI. This is an open access article distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/3.0/), which permits unlimited use, distribution and reproduction in any medium so long as the original work is properly cited.