1st International ICST Workshop on Haptic in Ambient Systems

Research Article

Implication of Multimodality in Ambient Interfaces

  • @INPROCEEDINGS{10.4108/ICST.AMBISYS2008.2905,
        author={Priyamvada Tripathi and Sethuraman Panchanathan},
        title={Implication of Multimodality in Ambient Interfaces},
        proceedings={1st International ICST Workshop on Haptic in Ambient Systems},
        keywords={Exclusive multimodal, Inclusive multimodal, Synergic multimodal, Accessibility},
        doi={10.4108/ICST.AMBISYS2008.2905},
        year={2010}
    }
    Year: 2010
    DOI: 10.4108/ICST.AMBISYS2008.2905
Priyamvada Tripathi 1,*, Sethuraman Panchanathan 1,*
  • 1: Center for Cognitive Ubiquitous Computing, School of Computing and Informatics, Arizona State University
*Contact email: pia@asu.edu, panch@asu.edu


Ambient interfaces have long held the promise of enhanced and effective human-machine interaction. Ambient interfaces can adapt to human activity, allowing a seamless exchange of information. This goal requires a coordinated development effort that incorporates a thorough understanding of the human perceptual system into the design of interfaces. In this way, ambient interfaces can not only supplement the current activities of humans but also expand their functionality toward novel approaches to interaction. Since humans interact with their environments through multiple channels, multimodality is an indispensable aspect of ambient interfaces. Multimodality is a broad term that encompasses not only the sensory aspects of human-machine interaction but also the cognitive interaction responsible for a unified perception. Thus, sensory formats are essential to the construction of ambient interfaces but do not constitute the complete picture. In this paper, we propose that ambience can be achieved only when multiple modalities are considered in toto. In addition, multimodality cannot be implemented in isolation from the desired tasks and their pertaining contexts. We propose an integrated view of multimodal interfaces that differentiates itself from previous conceptions of multimodal human-machine interaction, in which multimodal interfaces have become synonymous with voice-and-gesture interaction. We propose that this category of multimodality does not fully exploit or include the human user's capability of effectively interacting and communicating with the machine. Both semantic congruence and syntactic constraints can be addressed only when we attempt to create interfaces that include the human perceptual system in their design. This can be achieved by a staged evaluation process wherein each interface is associated with its joint performance value and accessibility. Moreover, the interface and the human must share the same real-world model for effective reference.