ArtsIT, Interactivity and Game Creation. Creative Heritage. New Perspectives from Media Arts and Artificial Intelligence. 10th EAI International Conference, ArtsIT 2021, Virtual Event, December 2-3, 2021, Proceedings

Research Article

SOUND OF(F): Contextual Storytelling Using Machine Learning Representations of Sound and Music

BibTeX
@INPROCEEDINGS{10.1007/978-3-030-95531-1_23,
    author={Zeynep Erol and Zhiyuan Zhang and Eray \"{O}zg\"{u}nay and Ray LC},
    title={SOUND OF(F): Contextual Storytelling Using Machine Learning Representations of Sound and Music},
    proceedings={ArtsIT, Interactivity and Game Creation. Creative Heritage. New Perspectives from Media Arts and Artificial Intelligence. 10th EAI International Conference, ArtsIT 2021, Virtual Event, December 2-3, 2021, Proceedings},
    proceedings_a={ARTSIT},
    year={2022},
    month={2},
    keywords={Spatial audio, Virtual Reality, Art, Machine learning, t-SNE, Sound Visualization, Nonlinear Listening},
    doi={10.1007/978-3-030-95531-1_23}
}
    
Zeynep Erol, Zhiyuan Zhang, Eray Özgünay, Ray LC*
    *Contact email: LC@raylc.org

    Abstract

    In dreams, one’s life experiences are jumbled together, so that a single character can represent multiple people in one’s life and sounds can run together without sequential order. To show one’s memories in a dream in a more contextual way, we represent environments and sounds using machine learning approaches that take into account the totality of a complex dataset. The immersive environment uses machine learning to computationally cluster sounds into thematic scenes, allowing audiences to grasp the dimensions of the complexity in a dream-like scenario. We applied the t-SNE algorithm to collections of music and voice sequences to explore how interactions in immersive space can convert temporal sound data into spatial interactions. In two case studies, one segmenting a single work of music and one using a collection of sound fragments, we designed both 2D and 3D interactions as well as headspace versus controller interactions, and applied the approach to a Virtual Reality (VR) artwork about replaying memories in a dream. We found that audiences can enrich their experience of the story through the machine-learning-generated soundscapes without necessarily gaining an understanding of the artwork. This provides a method for experiencing temporal sound sequences spatially in an environment through nonlinear exploration in VR.
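
    The mapping from sound fragments to spatial positions mentioned in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical example rather than the authors' code: it assumes MFCC features extracted with librosa, fragment-level mean/std pooling, and scikit-learn's TSNE; the paper's actual features, parameters, and VR integration are not specified here.

```python
# Minimal sketch of embedding a collection of sound fragments as points in
# 2D/3D space with t-SNE. Assumptions (not taken from the paper): MFCC
# features via librosa, mean/std pooling per fragment, scikit-learn's TSNE.
import numpy as np
import librosa
from sklearn.manifold import TSNE

def audio_fragments_to_positions(paths, n_dims=3, sr=22050, n_mfcc=20):
    """Return one (x, y[, z]) position per audio file in `paths`."""
    features = []
    for path in paths:
        y, _ = librosa.load(path, sr=sr)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
        # Summarize each fragment by the mean and std of its MFCCs over time,
        # turning a variable-length signal into one fixed-length feature vector.
        features.append(np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)]))
    X = np.stack(features)
    # t-SNE places perceptually similar fragments near each other, so temporal
    # sound data becomes spatial positions an audience can explore nonlinearly.
    tsne = TSNE(n_components=n_dims, perplexity=min(30, len(paths) - 1),
                init="pca", random_state=0)
    return tsne.fit_transform(X)

# Example usage (hypothetical file names):
# positions = audio_fragments_to_positions(["a.wav", "b.wav", "c.wav"])
# Each row could then serve as the anchor of an audio source in the VR scene.
```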

    Keywords
    Spatial audio; Virtual Reality; Art; Machine learning; t-SNE; Sound Visualization; Nonlinear Listening
    Published
    2022-02-10
    Appears in
    SpringerLink
    http://dx.doi.org/10.1007/978-3-030-95531-1_23
