Research Article
Wearable Vision for Retrieving Architectural Details in Augmented Tourist Experiences
@INPROCEEDINGS{10.4108/icst.intetain.2015.260034,
  author={Stefano Alletto and Davide Abati and Giuseppe Serra and Rita Cucchiara},
  title={Wearable Vision for Retrieving Architectural Details in Augmented Tourist Experiences},
  proceedings={7th International Conference on Intelligent Technologies for Interactive Entertainment},
  publisher={IEEE},
  proceedings_a={INTETAIN},
  year={2015},
  month={8},
  keywords={computer vision, egocentric vision, smart guides, enhanced tourist experience},
  doi={10.4108/icst.intetain.2015.260034}
}
Stefano Alletto
Davide Abati
Giuseppe Serra
Rita Cucchiara
Year: 2015
Wearable Vision for Retrieving Architectural Details in Augmented Tourist Experiences
INTETAIN
ICST
DOI: 10.4108/icst.intetain.2015.260034
Abstract
Interest in cultural cities is constantly growing, and so is the demand for new multimedia tools and applications that enrich the visiting experience. In this paper we propose an egocentric vision system to enhance tourists' cultural heritage experience. Using a wearable board and a glass-mounted camera, visitors can retrieve architectural details of the historical building they are observing and receive related multimedia content. To obtain an effective retrieval procedure, we propose a visual descriptor based on the covariance of local features. Unlike common Bag of Words approaches, our feature vector does not rely on a generated visual vocabulary, removing the dependence on a specific dataset and reducing the computational cost. 3D modeling is used to achieve precise visitor localization, which allows browsing relevant visible details that the user might otherwise miss.
Experimental results conducted on a publicly available cultural heritage dataset show that the proposed feature descriptor outperforms Bag of Words techniques.
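The abstract's key idea, a descriptor built from the covariance of local features rather than a visual vocabulary, can be illustrated with a minimal sketch. The snippet below shows the generic region-covariance construction: stack per-point local features (e.g., position, intensity, gradients) into an N x d matrix, compute the d x d covariance, and keep its upper triangle as a fixed-length vector. This is an assumption-laden illustration of the general technique, not the authors' exact pipeline; the function name and the random stand-in features are hypothetical.

```python
import numpy as np

def covariance_descriptor(features):
    """Region-covariance descriptor sketch.

    features: (N, d) array of d-dimensional local features sampled
    from an image region (hypothetical stand-in for real features
    such as pixel position, intensity, and gradient responses).

    Returns the upper triangle of the d x d covariance matrix as a
    fixed-length vector of size d*(d+1)/2 -- independent of N, and
    requiring no pre-trained visual vocabulary.
    """
    C = np.cov(features, rowvar=False)   # d x d covariance of the features
    iu = np.triu_indices(C.shape[0])     # covariance is symmetric: keep upper triangle
    return C[iu]

# Hypothetical usage: 500 local features of dimension 5
rng = np.random.default_rng(0)
feats = rng.standard_normal((500, 5))
desc = covariance_descriptor(feats)
print(desc.shape)  # (15,): fixed length regardless of the number of features
```

The descriptor length depends only on the feature dimension d, which is why, as the abstract notes, no dataset-specific vocabulary has to be learned before matching.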