Time: 6:30 - 7:30pm
Venue: The Graduate Centre, Queen Mary Mile End Campus, 327 Mile End Road, London E1 4NS
The sounds around us shape our perception of the world. In films, games, music and virtual reality, we recreate those sounds, or create unreal sounds, to evoke emotions and capture the imagination. But there is a world of fascinating phenomena related to sound and perception that is not yet understood. If we can gain a deep understanding of how we perceive and respond to complex audio, we could not only interpret produced content but also create new content of unprecedented quality and range.
This talk considers the possibilities opened up by such research. What are the limits of human hearing? Can we create a realistic virtual world without relying on recorded samples? If every sound in a major film or game soundtrack were computer-generated, could we reach a level of realism comparable to modern computer graphics? Could a robot replace the sound engineer? Investigating such questions reveals profound and surprising aspects of auditory perception, and has the potential to revolutionise sound design and music production. Research breakthroughs concerning such questions will be discussed, and cutting-edge technologies will be demonstrated.
About Professor Josh Reiss
Josh Reiss is a Professor of Audio Engineering with the Centre for Digital Music at Queen Mary University of London. He has published more than 200 scientific papers (including over 50 in premier journals and 4 best paper awards), and co-authored the textbook Audio Effects: Theory, Implementation and Application. His research has been featured in dozens of original articles and interviews since 2007, including in Scientific American, New Scientist, the Guardian, Forbes magazine and La Presse, and on BBC Radio 4, BBC World Service, Channel 4, Radio Deutsche Welle, LBC and ITN, among others. He is a former Governor of the Audio Engineering Society (AES), chair of its Publications Policy Committee, and co-chair of its Technical Committee on High-resolution Audio. His Royal Academy of Engineering Enterprise Fellowship led to the founding of the high-tech spin-out company LandR, which currently has over a million and a half subscribers and is valued at over £30M. He has investigated psychoacoustics, sound synthesis, multichannel signal processing, intelligent music production, and digital audio effects. His primary research focus, which ties together many of these topics, is the use of state-of-the-art signal processing techniques for professional sound engineering. He maintains a popular blog, YouTube channel and Twitter feed for scientific education and dissemination of research activities.