Intelligent Technologies for Interactive Entertainment. 5th International ICST Conference, INTETAIN 2013, Mons, Belgium, July 3-5, 2013, Revised Selected Papers

Research Article

: On-Stage Improvised Audio Collage by Content-Based Similarity and Gesture Recognition

  • @INPROCEEDINGS{10.1007/978-3-319-03892-6_14,
        author={Christian Frisson and Gauthier Keyaerts and Fabien Grisard and St\'{e}phane Dupont and Thierry Ravet and Fran\c{c}ois Zaj\'{e}ga and Laura Colmenares Guerra and Todor Todoroff and Thierry Dutoit},
        title={
                  : On-Stage Improvised Audio Collage by Content-Based Similarity and Gesture Recognition},
        proceedings={Intelligent Technologies for Interactive Entertainment. 5th International ICST Conference, INTETAIN 2013, Mons, Belgium, July 3-5, 2013, Revised Selected Papers},
        proceedings_a={INTETAIN},
        year={2014},
        month={6},
        keywords={Human-music interaction, audio collage, content-based similarity, gesture recognition, depth cameras, digital audio effects},
        doi={10.1007/978-3-319-03892-6_14}
    }
    
  • Christian Frisson, Gauthier Keyaerts, Fabien Grisard, Stéphane Dupont, Thierry Ravet, François Zajéga, Laura Colmenares Guerra, Todor Todoroff, Thierry Dutoit (2014). : On-Stage Improvised Audio Collage by Content-Based Similarity and Gesture Recognition. INTETAIN. Springer. DOI: 10.1007/978-3-319-03892-6_14
Christian Frisson1,*, Gauthier Keyaerts2, Fabien Grisard, Stéphane Dupont1, Thierry Ravet1, François Zajéga1, Laura Colmenares Guerra1, Todor Todoroff1, Thierry Dutoit1
  • 1: University of Mons (UMONS)
  • 2: aka Very Mash’ta and the Aktivist, artist residing in Brussels
*Contact email: christian.frisson@umons.ac.be

Abstract

In this paper we present the outline of a performance in progress. It brings together the skilled musical practice of Belgian audio collagist Gauthier Keyaerts, aka Very Mash'ta, and the realtime, content-based audio browsing capabilities of applications developed by the remaining authors. A tool derived from these applications aids the preparation of collections of stem audio loops before performances by extracting content-based features (for instance timbre) that determine the positioning of these sounds on a 2D visual map. On stage, the tool becomes an embodied instrument: its user interface relies on a depth-sensing camera and is augmented with a public projection of the 2D map. The camera tracks the position of the artist within the sensing area to trigger sounds, as in the installation. It also senses gestures from the performer, interpreted with a gesture-recognition framework, allowing sound effects to be applied through bodily movement. The system blurs the boundaries between performance and preparation, navigation and improvisation, installations and concerts.
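The mapping step the abstract describes — extract a timbre-like feature vector per audio loop, then place the loops on a 2D map so that similar-sounding ones land near each other — can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the system's real features and projection method are not specified here, so spectral band energies and a PCA projection (via SVD) are used as stand-ins.

```python
import numpy as np

def timbre_features(signal, n_bands=20):
    """Crude timbre descriptor: log mean magnitude in linearly
    spaced frequency bands of the spectrum (a stand-in for the
    content-based features the abstract mentions)."""
    spectrum = np.abs(np.fft.rfft(signal))
    bands = np.array_split(spectrum, n_bands)
    return np.log1p(np.array([b.mean() for b in bands]))

def map_2d(features):
    """Project feature vectors onto their first two principal
    components, yielding 2D map coordinates where acoustically
    similar loops cluster together."""
    X = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:2].T

# Toy "stem loops": one-second sine tones at different pitches plus noise.
rng = np.random.default_rng(0)
sr = 22050
t = np.arange(sr) / sr
loops = [np.sin(2 * np.pi * f * t) + 0.1 * rng.standard_normal(sr)
         for f in (110, 220, 440, 880)]

feats = np.vstack([timbre_features(x) for x in loops])
coords = map_2d(feats)  # one (x, y) position per loop on the 2D map
print(coords.shape)
```

In the performance setting, each loop's 2D coordinate would then be associated with a region of the sensing area, so that the tracked position of the performer can trigger the nearby sounds.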