EAI Endorsed Transactions on Creative Technologies (CT) 17(10): e5

Research Article

A Multimodal Interaction Framework for Blended Learning

BibTeX
@ARTICLE{10.4108/eai.4-9-2017.153057,
  author={N. Vidakis},
  title={A Multimodal Interaction Framework for Blended Learning},
  journal={EAI Endorsed Transactions on Creative Technologies},
  volume={4},
  number={10},
  publisher={EAI},
  journal_a={CT},
  year={2017},
  month={1},
  keywords={Multimodal Human-Computer Interaction, Blended Learning},
  doi={10.4108/eai.4-9-2017.153057}
}
N. Vidakis1,*
  • 1: Department of Informatics Engineering, Technological Educational Institute of Crete, Heraklion 71500, Greece
*Contact email: nv@ie.teicrete.gr

Abstract

Humans interact with each other using the five basic senses as input modalities, while sounds, gestures, facial expressions, etc. serve as output modalities. Multimodal interaction also occurs between humans and their surrounding environment, enhanced with further senses such as equilibrioception (the sense of balance). Computer interfaces, which can be regarded as another environment that humans interact with, lack the amalgamation of input and output modalities needed to provide close-to-natural interaction. Multimodal human-computer interaction has therefore sought to provide alternative means of communicating with an application that are more natural than the traditional “windows, icons, menus, pointer” (WIMP) style. Despite the great number of devices in existence, most applications make use of a very limited set of modalities, most notably speech and touch. This paper describes a multimodal framework that enables the deployment of a wide variety of modalities, tailored for use in blended learning environments, and introduces a unified and effective framework for multimodal interaction called COALS.

Keywords
Multimodal Human-Computer Interaction, Blended Learning.
Received: 2016-11-18
Accepted: 2016-12-28
Published: 2017-01-04
Publisher: EAI
http://dx.doi.org/10.4108/eai.4-9-2017.153057

Copyright © 2017 N. Vidakis, licensed to EAI. This is an open access article distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/3.0/), which permits unlimited use, distribution and reproduction in any medium so long as the original work is properly cited.
