Virtual Placement of Objects in Acoustic Scenes
As Augmented Reality experiences grow in importance and the cost of the technology falls, it is increasingly of interest to develop advanced ways to insert "auditory objects" into mixed virtual-real scenes. Examples of auditory objects include musical instruments, human speakers, gunshots, animal sounds, and so on. This approach has enormous potential to increase immersion in music, films, games and streamed content.
The advantage of this over current approaches is that the virtual objects will have realistic dispersion characteristics and will interact acoustically (think: reverberation) as if they were really present in the physical space into which they are rendered. One exciting possibility is new ways to enjoy live music concerts streamed to the home. There is also potential to combine this technology with audio up-mixing, using Deep Learning-based Source Separation to enhance, for example, legacy stereo recordings for immersive 3D sound.
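As a toy illustration of the acoustic-interaction idea (not part of the project itself), the simplest way a virtual source "inherits" the acoustics of a room is convolution with a room impulse response (RIR). The sketch below, with entirely synthetic signals and a hypothetical `place_source` helper, assumes a dry (anechoic) recording and a measured or simulated RIR:

```python
import numpy as np

def place_source(dry_signal: np.ndarray, room_ir: np.ndarray) -> np.ndarray:
    """Render a dry (anechoic) source into a room by convolving it with a
    room impulse response (RIR), then peak-normalising to avoid clipping."""
    wet = np.convolve(dry_signal, room_ir)
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet

# Toy example: a click "played" in a room whose RIR is a decaying
# exponential of noise with a strong direct path (purely synthetic values).
fs = 16000
dry = np.zeros(fs // 10)
dry[0] = 1.0
t = np.arange(fs // 4) / fs
rir = np.exp(-t * 20.0) * np.random.default_rng(0).standard_normal(t.size) * 0.1
rir[0] = 1.0  # direct sound
wet = place_source(dry, rir)
```

A real system would of course estimate the RIR (or a parametric model of it) from the listener's actual room, which is where the learning-based methods below come in.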
The project will span a number of topics and technologies, including audio capture (including spatial audio capture), object-based audio formats, and spatial rendering methods. Especially novel is the use of Deep Learning techniques to design filters that control the placement and dispersion of instruments. Deep Learning also underpins the approach to dereverberation and related acoustic effects, and to perceptually relevant evaluation and validation of methods.
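To make "spatial rendering of an audio object" concrete, here is a minimal sketch of the most basic rendering method, constant-power stereo panning: a mono object is placed between two loudspeakers so that total power is independent of position. The function name and the azimuth convention (`-1` = hard left, `+1` = hard right) are illustrative assumptions, not project specifics:

```python
import numpy as np

def constant_power_pan(mono: np.ndarray, azimuth: float) -> np.ndarray:
    """Render a mono audio object to stereo with constant-power panning.

    azimuth in [-1, 1]: -1 = hard left, 0 = centre, +1 = hard right.
    Gains are cos/sin of an angle in [0, pi/2], so left^2 + right^2 = 1
    for every position (constant power).
    """
    theta = (azimuth + 1.0) * np.pi / 4.0  # map [-1, 1] -> [0, pi/2]
    left = np.cos(theta) * mono
    right = np.sin(theta) * mono
    return np.stack([left, right], axis=0)

# A 440 Hz tone rendered at the centre: both channels get gain 1/sqrt(2).
sig = np.sin(2 * np.pi * 440 * np.arange(1000) / 16000)
centre = constant_power_pan(sig, 0.0)
```

The project's rendering methods go far beyond this (directivity-controlling filters, binaural and object-based rendering), but they generalise the same idea: per-object gains and filters derived from a target spatial position.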
C4DM theme affiliation:
Audio Engineering, Sonic Interaction Design, Machine Listening, Sound Synthesis.