Emote VR Voicer

Emote VR Voicer is an AHRC Catalyst grant-funded project. Led by Dr. Adinda van ’t Klooster, one of our PRG members, it is a further development of her two previous voice-controlled VR interfaces: VRoar and the AudioVirtualizer. Rather than only using live sound features to manipulate graphics, this interface also uses speech recognition and meaning classification to analyse the meaning of uttered words and their emotional intonation. Such AI capabilities are already available in the public domain through code libraries but have not yet been integrated into artistic projects.
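In outline, such a pipeline feeds a speech-recognition transcript into an emotion classifier and maps the predicted label onto visual parameters. The toy sketch below illustrates the idea only: the keyword lexicon, emotion labels, and hue mapping are invented placeholders, and a real system like this project's would use pretrained speech and emotion models rather than keyword matching.

```python
# Illustrative sketch of a transcript -> emotion -> graphics-parameter mapping.
# The lexicon and hue values are hypothetical placeholders, not the project's models.

EMOTION_KEYWORDS = {
    "joy": {"happy", "bright", "love", "sing"},
    "anger": {"angry", "roar", "shout"},
    "sadness": {"sad", "alone", "cry"},
}

# Hypothetical mapping from emotion label to a hue angle (degrees)
# that a shader could use to tint abstract visuals.
EMOTION_TO_HUE = {"joy": 55.0, "anger": 0.0, "sadness": 220.0, "neutral": 120.0}

def classify_emotion(transcript: str) -> str:
    """Pick the emotion whose keywords appear most often in the transcript."""
    words = transcript.lower().split()
    scores = {label: sum(w in keywords for w in words)
              for label, keywords in EMOTION_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

def transcript_to_hue(transcript: str) -> float:
    """Turn a recognised utterance into a single graphics parameter."""
    return EMOTION_TO_HUE[classify_emotion(transcript)]
```

In a real-time VR context, the same pattern would run per utterance, with the classifier's output driving continuous visual parameters rather than a single hue.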

This project allows the team to answer the following research questions:

  • How can we develop a real-time VR app that understands the meaning of the spoken/sung word and maps it semantically to aesthetically rewarding graphics?
  • Which mappings and designs work best for an interactive VR app that aims to encourage people to play with their voice and increase wellbeing?

The chosen graphics will be abstract: the aim is not to make an illustrative feedback system but rather an emotionally intelligent interactive system that encourages people to discover and extend the limits of their voice.

Project partners:

Dr Adinda van ’t Klooster is the project lead and originator of this project. As an independent artist over the past twenty-plus years, she has used a variety of live interactive audiovisual technologies to create immersive interactive interfaces, including VR art games, light and sound installations, interactive audiovisual performances and interactive sculpture. At SODA/Manchester Metropolitan University she is a senior lecturer in interdisciplinary digital art and teaches across the different courses and levels, up to PhD supervision.

Shubhada Londhe is the Research Associate for this project. As a VR and AI developer, she will work on the machine learning, AI and game-engine development for this project.

Dr Robyn Dowlen is a Research Fellow at Edge Hill University. She has an extensive background in the field of culture, health, and wellbeing. Robyn’s research centres on developing methods and approaches for capturing ‘in the moment’ experiences in a dementia context, exploring how music and other creative activities can support meaningful moments of connection for people with dementia and those who support them. In this project she advises on approaches related to measuring wellbeing, and undertakes part of the evaluation.

Dr Jason Hockman is Reader in Music and Sound at Manchester Metropolitan University (SODA) and advises through periodic, strategic consultations aimed at steering the team towards relevant AI techniques, ensuring the project remains current and on track with regard to technological and methodological advancements in the field.

Prof. Carlo Harvey is a creative technologist whose interdisciplinary work blends games, computer graphics, machine learning, acoustics, virtual production and immersive media. For this project he advises on animation and games processes, and also mentors Adinda, drawing on his previous experience with grant-funded research projects.

Pippa Anderson, a vocal coach and vocal rehabilitation specialist, helped with the first phase of the project, which included bringing the AudioVirtualizer and VRoar VR interfaces to the North East of England and evaluating them with singers.

This project is funded by an AHRC Catalyst Grant (AH/Z506618/1) and supported by Manchester Metropolitan University, Edge Hill University and the Manchester Games Centre.