Research Article
Quick Browsing of Shared Experience Videos Based on Conversational Field Detection
@INPROCEEDINGS{10.1007/978-3-319-90740-6_3,
  author={Kai Toyama and Yasuyuki Sumi},
  title={Quick Browsing of Shared Experience Videos Based on Conversational Field Detection},
  proceedings={Mobile Computing, Applications, and Services. 9th International Conference, MobiCASE 2018, Osaka, Japan, February 28 -- March 2, 2018, Proceedings},
  proceedings_a={MOBICASE},
  year={2018},
  month={5},
  keywords={Smart video viewing; Information cues; First-person view videos; Conversational fields},
  doi={10.1007/978-3-319-90740-6_3}
}
- Kai Toyama
- Yasuyuki Sumi
Year: 2018
Quick Browsing of Shared Experience Videos Based on Conversational Field Detection
MOBICASE
Springer
DOI: 10.1007/978-3-319-90740-6_3
Abstract
We propose a system to aid the browsing of shared experience data that includes multiple first-person view videos. Using this system, users can avoid the tedious task of searching through lengthy videos. The system aids browsing by displaying situational information cues on the video seek-bar and by visualizing node graphs that show the members participating in each scene and their approximate locations. Users can thus search and browse events with the help of cues indicating participant names and locations. To capture the dynamics of groups in crowded areas, we detect conversational fields based on auditory similarity. We conducted an experiment to evaluate whether our system decreases the time needed to find specified scenes in lifelog videos. The results suggest that our system aids the browsing of videos of one's own experiences, but we could not show the same benefit for unfamiliar data.
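The core idea, grouping recordings whose ambient audio is similar into conversational fields, can be illustrated with a minimal Python sketch. This is not the authors' implementation: the feature (a log-energy envelope), the correlation threshold, and all function names are illustrative assumptions, showing only one plausible way to cluster co-located wearers from pairwise audio similarity.

```python
"""Minimal sketch of conversational-field detection via audio similarity.

Assumptions (not from the paper): each participant's first-person video
yields a mono audio track at a common sample rate, and devices whose
ambient sound envelopes correlate strongly within a time window are
treated as sharing one conversational field.
"""
import itertools
import numpy as np


def energy_envelope(audio, sr, frame_sec=0.05):
    """Log-energy envelope of a mono signal over short frames."""
    hop = int(sr * frame_sec)
    n = len(audio) // hop
    frames = audio[: n * hop].reshape(n, hop)
    return np.log(np.sum(frames ** 2, axis=1) + 1e-10)


def detect_fields(tracks, sr, window_sec=10.0, frame_sec=0.05, thresh=0.6):
    """Yield (window_index, groups) where each group is a set of device
    indices whose envelopes correlate above `thresh` in that window."""
    envs = [energy_envelope(a, sr, frame_sec) for a in tracks]
    frames_per_win = int(window_sec / frame_sec)
    n_win = min(len(e) for e in envs) // frames_per_win
    for w in range(n_win):
        s, e = w * frames_per_win, (w + 1) * frames_per_win
        # Union-find over devices: link pairs with similar ambient audio.
        parent = list(range(len(tracks)))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        for i, j in itertools.combinations(range(len(tracks)), 2):
            r = np.corrcoef(envs[i][s:e], envs[j][s:e])[0, 1]
            if r >= thresh:
                parent[find(i)] = find(j)
        # Connected components of size > 1 form conversational fields.
        groups = {}
        for i in range(len(tracks)):
            groups.setdefault(find(i), set()).add(i)
        yield w, [g for g in groups.values() if len(g) > 1]
```

Per-window membership output of this kind could then drive the seek-bar cues and node graphs the abstract describes, with each group rendered as connected participant nodes for that time span.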