ArtsIT, Interactivity and Game Creation. 11th EAI International Conference, ArtsIT 2022, Faro, Portugal, November 21-22, 2022, Proceedings

Research Article

Desiring Machines and Affective Virtual Environments

Cite (BibTeX)
@INPROCEEDINGS{10.1007/978-3-031-28993-4_28,
    author={Jorge Forero and Gilberto Bernardes and M\'{o}nica Mendes},
    title={Desiring Machines and Affective Virtual Environments},
    proceedings={ArtsIT, Interactivity and Game Creation. 11th EAI International Conference, ArtsIT 2022, Faro, Portugal, November 21-22, 2022, Proceedings},
    proceedings_a={ARTSIT},
    year={2023},
    month={4},
    keywords={Affective Computing, Speech Emotion Recognition, Intelligent Virtual Environments, Virtual Reality, Tonal Interval Space, Machine Learning},
    doi={10.1007/978-3-031-28993-4_28}
}
Jorge Forero1,*, Gilberto Bernardes1, Mónica Mendes2
  • 1: Faculty of Engineering, University of Porto
  • 2: ITI-LARSyS, Faculdade de Belas Artes
*Contact email: jfforero@ludique.cl

Abstract

Language is closely related to how we perceive ourselves and signify our reality. In this scope, we created Desiring Machines, an interactive media art project that enables the experience of affective virtual environments, adopting speech emotion recognition as the leading input source. Participants can share their emotions by speaking, singing, reciting poetry, or making any vocal sounds to generate virtual environments on the fly. Our contribution combines two machine learning models: a long short-term memory (LSTM) network and a convolutional neural network (CNN) that predict four main emotional categories from high-level semantic and low-level paralinguistic acoustic features, respectively. Predicted emotions are mapped to audiovisual representations by an end-to-end process encoding emotion in virtual environments. We use a generative model of chord progressions, based on the tonal interval space, to transfer speech emotion into music. In addition, we implement a generative adversarial network to synthesize an image from the transcribed speech-to-text. The generated visuals are used as the style image in a style-transfer process onto an equirectangular projection of a spherical panorama selected for each emotional category. The result is an immersive virtual space encapsulating emotions in spheres arranged in a 3D environment. Users can create new affective representations or interact with previously encoded instances. (This ArtsIT publication is an extended version of the earlier abstract presented at ACM MM22 [1].)
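The two-model design described above amounts to a late-fusion classifier: one model scores the semantic (text) channel, the other the acoustic channel, and their class probabilities are combined before picking an emotion. The sketch below illustrates only that fusion step, in plain Python; the emotion labels, fusion weight, and probability values are hypothetical placeholders, not taken from the paper.

```python
# Illustrative late-fusion sketch. The paper's actual models are an LSTM over
# semantic features and a CNN over paralinguistic acoustic features; here their
# outputs are simply stand-in probability lists over four assumed categories.
EMOTIONS = ["happiness", "sadness", "anger", "calm"]  # hypothetical label set

def fuse(semantic_probs, acoustic_probs, w_semantic=0.5):
    """Weighted average of the two models' class probabilities."""
    return [w_semantic * s + (1.0 - w_semantic) * a
            for s, a in zip(semantic_probs, acoustic_probs)]

def predict_emotion(semantic_probs, acoustic_probs):
    """Fuse both channels and return the highest-scoring emotion label."""
    fused = fuse(semantic_probs, acoustic_probs)
    return EMOTIONS[max(range(len(fused)), key=fused.__getitem__)]

# Example: the text channel leans 'happiness', the acoustic channel 'calm';
# the fused scores (0.45 vs 0.35) decide.
lstm_out = [0.6, 0.1, 0.1, 0.2]
cnn_out = [0.3, 0.1, 0.1, 0.5]
print(predict_emotion(lstm_out, cnn_out))  # -> happiness
```

The fused label would then index the audiovisual mapping (chord-progression model and panorama choice) that the abstract describes downstream.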

Keywords
Affective Computing, Speech Emotion Recognition, Intelligent Virtual Environments, Virtual Reality, Tonal Interval Space, Machine Learning
Published
2023-04-02
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-28993-4_28
Copyright © 2022–2025 ICST
