Introduction and links to software projects I worked on.
One of my earliest projects, built both to learn about Web development and to show how Semantic Web technologies can benefit audio research, was a tool called the Sonic Annotator Web Application (SAWA). You can find more information on the SAWA pages, together with related applications built around it.
Among my more recent projects is MoodConductor, a system that enables gathering feedback from the audience during live music performances. I also created a Python wrapper plugin called VamPy which allows Vamp audio analysis plugins to be written in Python. This plugin features an automatic type inference mechanism (source) which allows seamless interaction between dynamically typed and statically typed languages. Please use the links on the right to read more about these, or see some of my current work below.
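To illustrate the appeal of writing audio analysis plugins in Python, here is a minimal sketch of a block-based analysis class in the spirit of a VamPy plugin. The class and method names, and the tiny host-side driver, are simplified assumptions for illustration, not the real VamPy API.

```python
# Illustrative sketch only: a minimal analysis "plugin" written as a plain
# Python class, in the spirit of VamPy plugins. Method names and the
# host-side driver are invented simplifications, not the real VamPy API.

class RMSPlugin:
    """Computes the root-mean-square energy of each audio block."""

    def __init__(self, input_sample_rate):
        self.input_sample_rate = input_sample_rate

    def get_identifier(self):
        return "rms-example"

    def process(self, block):
        # block: a sequence of float samples; returns one RMS value per block
        if not block:
            return 0.0
        return (sum(s * s for s in block) / len(block)) ** 0.5


def run_host(plugin, blocks):
    """Tiny stand-in for a Vamp host: feed blocks, collect features."""
    return [plugin.process(b) for b in blocks]


plugin = RMSPlugin(44100)
print(run_host(plugin, [[0.0, 0.0], [1.0, -1.0], [0.5, 0.5]]))
# [0.0, 1.0, 0.5]
```

In the real system the host is a compiled Vamp host and the type inference mechanism bridges Python values to the plugin API's statically typed feature structures; here everything stays in pure Python to keep the sketch self-contained.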
Two of my latest projects are a FAST demo called SemanticDAW, which embeds audio feature extraction into the digital audio workstation using Semantic Web technologies, and a web application for navigating cultural artefacts and audio archives associated with the band the Grateful Dead.
FAST Semantic DAW concept
One of the demonstrator themes of the Fusing Semantic and Audio Technologies for Intelligent Music Production and Consumption (FAST-IMPACt) project is the development of Semantic Digital Audio Workstation (DAW) software, which I helped to design and build. The idea is to help audio engineers navigate a project by displaying audio features extracted from the audio content in the DAW. We created a demo using the open API that comes with the popular REAPER DAW. This connects the DAW to my cloud feature extractor SAWA using Semantic Web technologies and displays the results of instrument identification (using a Deep Neural Network trained on Apple Loops), key extraction, tempo detection and other features. The most appropriate features to extract are determined through logical inference. A demo video of the software can be seen below.
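The idea of choosing extractors through logical inference can be sketched with a toy rule-based selector. The rules, metadata keys, and feature names below are invented for illustration; the actual system reasons over Semantic Web ontologies rather than Python lambdas.

```python
# Toy rule-based selection of analysis features, illustrating the idea of
# inferring which extractors to run from what is known about a DAW track.
# Rules and feature names are invented placeholders, not the real system.

RULES = [
    # (condition on track metadata, feature extractor to schedule)
    (lambda t: t.get("content") == "music", "key-extraction"),
    (lambda t: t.get("content") == "music", "tempo-detection"),
    (lambda t: t.get("content") == "music" and not t.get("instrument"),
     "instrument-identification"),
    (lambda t: t.get("content") == "speech", "speech-segmentation"),
]


def infer_features(track):
    """Return the extractors whose conditions hold for this track."""
    return [feature for cond, feature in RULES if cond(track)]


print(infer_features({"content": "music"}))
# ['key-extraction', 'tempo-detection', 'instrument-identification']
print(infer_features({"content": "music", "instrument": "guitar"}))
# ['key-extraction', 'tempo-detection']
```

The design point is that the DAW never hard-codes which analyses to run; adding a rule (or, in the real system, an ontology axiom) is enough to schedule a new extractor for matching tracks.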
Grateful Dead Live
In another FAST demonstrator theme I worked on the design of a Semantic Web based architecture for navigating cultural archives, in particular those related to the Grateful Dead. This choice is motivated by the continuing scholarly interest in the band's history, regarding both their music and their cultural impact. The application can be used to navigate live recordings and experience them in conjunction with trivia and memorabilia, such as images or scans of artefacts.
Relevant material is gathered and made accessible through a common platform using Semantic Web technologies, and Semantic Audio technology is then used to explore the material in a meaningful way. This is based on links established between cultural resources and audio content. A specific data model was developed for this purpose, based on and incorporating existing ontologies that specialise in the representation of audio feature data. The application facilitates an audio-visual exploration of the gathered material, juxtaposing audio, text, images, and artefacts about specific moments in the history of the Grateful Dead. The combination of these resources and technologies provides new experiences for Grateful Dead fans and empowers music communities at large.
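The links between audio content and cultural resources can be pictured as subject-predicate-object triples, the basic unit of Semantic Web data models. The sketch below uses invented predicate names rather than terms from the project's actual ontologies, and a plain Python list in place of a real triple store.

```python
# Minimal illustration of linking recordings and cultural artefacts via
# subject-predicate-object triples. Predicate names are invented
# placeholders, not terms from the project's actual ontologies.

triples = [
    ("show:1977-05-08", "ex:hasRecording", "audio:barton-hall.flac"),
    ("show:1977-05-08", "ex:hasArtefact", "scan:ticket-stub-123"),
    ("show:1977-05-08", "ex:hasArtefact", "scan:poster-456"),
    ("audio:barton-hall.flac", "ex:detectedKey", "key:E-major"),
]


def objects(subject, predicate):
    """Return all objects matching the given subject and predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]


# All artefacts that can be shown alongside the 1977-05-08 recording:
print(objects("show:1977-05-08", "ex:hasArtefact"))
# ['scan:ticket-stub-123', 'scan:poster-456']
```

Because artefacts, recordings, and extracted audio features all hang off shared identifiers, the application can answer juxtaposition queries (which images go with this recording? what key is it in?) with the same uniform lookup.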
I organised a workshop at the 2017 New York AES convention to explore what practitioners at cultural archives and experts in audio research think of this and similar applications. Please see the following publications for more detail:
Fazekas, G., Wilmering, T. (2017). The Music Never Stopped: The Future of the Grateful Dead Experience in the Information Age. Audio Engineering Society 143rd Convention, New York, NY, USA, October 20, 2017. (link)
Technical details of this demo have been published in Wilmering, T., Thalmann, F., Fazekas, G., Sandler, M. (2017). "Bridging Fan Communities and Facilitating Access to Music Archives Through Semantic Audio Applications." Audio Engineering Society 143rd Convention (e-Brief), New York, NY, USA, October 2017. [PDF]