Research Article
Multi-sensored Vision for Autonomous Production of Personalized Video Summaries
@INPROCEEDINGS{10.1007/978-3-642-35145-7_15,
  author={Fan Chen and Damien Delannay and Christophe Vleeschouwer},
  title={Multi-sensored Vision for Autonomous Production of Personalized Video Summaries},
  proceedings={User Centric Media. Second International ICST Conference, UCMedia 2010, Palma de Mallorca, Spain, September 1-3, 2010. Revised Selected Papers},
  proceedings_a={UCMEDIA},
  year={2012},
  month={12},
  keywords={Automatic production; personalized summarization; multi-camera},
  doi={10.1007/978-3-642-35145-7_15}
}
Fan Chen
Damien Delannay
Christophe Vleeschouwer
Year: 2012
Multi-sensored Vision for Autonomous Production of Personalized Video Summaries
UCMEDIA
Springer
DOI: 10.1007/978-3-642-35145-7_15
Abstract
Democratic and personalized production of multimedia content is a challenge for content providers. In this paper, members of the FP7 APIDIS consortium explain how this challenge can be addressed by building on computer vision tools to automate the collection and distribution of audiovisual content. In a typical application scenario, a network of cameras covers the scene of interest, and distributed analysis and interpretation of the scene are exploited to decide what to show or not to show about the event, so as to edit a video from a valuable subset of the streams provided by the individual cameras. Generation of personalized summaries through automatic organization of stories is also considered. Finally, the proposed technology provides practical solutions to a wide range of applications, such as personalized access to local sport events through a web portal, cost-effective and fully automated production of content for small-audience events, and automatic logging of annotations.