In broad terms, I apply Artificial Intelligence and Data Science techniques to audio and music research, aiming to understand the content of individual recordings as well as large collections. This includes Semantic Audio, a field at the confluence of Signal Processing, Machine Learning and Knowledge Representation using Semantic Web technologies.
I lead QMUL's team on the EU-funded AudioCommons project. Among the most novel things we're building is an ontology framework for the description of audio content and services. We're also developing confidence measures for audio analysis algorithms, so that users can trade off precision against recall and retrieve the content most appropriate for their use cases. In addition, we assess how the use of open sound content affects the creativity of professionals in game audio, music and video production.
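The precision/recall trade-off described above can be illustrated with a minimal sketch. This is a hypothetical example, not AudioCommons code: it assumes each analysis result carries a confidence score, and shows how raising or lowering a user-chosen threshold shifts retrieval between fewer-but-reliable results (precision) and more-but-noisier results (recall).

```python
# Hypothetical sketch of confidence-based filtering for audio analysis results.
# The data, field names and thresholds are illustrative assumptions only.

def filter_by_confidence(results, threshold):
    """Keep only analysis results whose confidence meets the threshold."""
    return [r for r in results if r["confidence"] >= threshold]

# Toy annotated results: each carries an identifier and a confidence score.
results = [
    {"id": "a", "confidence": 0.95},
    {"id": "b", "confidence": 0.80},
    {"id": "c", "confidence": 0.55},
    {"id": "d", "confidence": 0.30},
]

# A high threshold favours precision: fewer, more reliable results.
high_precision = filter_by_confidence(results, 0.9)

# A low threshold favours recall: more results, some less reliable.
high_recall = filter_by_confidence(results, 0.5)
```

Here the threshold is the knob a user turns: a strict cut-off returns only the single most confident result, while a permissive one returns three of the four.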
Besides AudioCommons, I conduct research on the Fusing Semantic and Audio Technologies for Intelligent Music Production and Consumption (FAST-IMPACt) project, where I lead the Production work thread. I also supervise PhD students working in the areas of Intelligent Music Production, Deep Neural Networks for music labelling, musical gesture recognition in expressive music performance, casual exploration of digital archives, and the role of the user interface and 'nostalgia' in music production.
Algorithms Special Issue on Deep Learning and Semantic Technologies
I'm guest editor of a Special Issue of the open access journal Algorithms. Scope: A sustained increase in computational capacity, advances in training and optimisation techniques, and the availability of big data have caused a resurgence of interest in neural networks. Deep learning has opened new avenues in information extraction and processing across a wide range of application domains, including natural language processing, audio and visual object recognition and synthesis, bioinformatics, genomics, health informatics, recommendation systems and many other areas where learning effective representations from raw data, or recognising small patterns amid large variations in data, is beneficial. At the same time, semantic technologies, including ontologies, provide a well-established mechanism for structured knowledge representation and inference. They allow domain experts to construct and maintain knowledge bases, often without training data, which may be used in high-level decision-making procedures. These approaches can be distinctly complementary: they may facilitate solving problems where very complex decisions are needed, where large datasets are not yet available, or where expert knowledge can augment big data analytics. Deep learning provides the state of the art in converting raw data into symbols that may be manipulated using logic. In this Special Issue, we invite original research papers and reviews related to the combination of these techniques, including new paradigms for complex reasoning over semantic structures and applications where deep learning and semantic technologies are used in tandem.
Semantic Applications for Audio and Music (SAAM2018) Workshop
I'm programme chair of the International Workshop on Semantic Applications for Audio and Music (SAAM2018) to be held in conjunction with the International Semantic Web Conference (ISWC 2018) on 9th October 2018 in Monterey, California.
SAAM is a venue for dissemination and discussion, identifying intersections in the challenges and solutions which cut across musical areas. In finding common approaches and coordination, SAAM will set the research agenda for advancing the development of semantic applications for audio and music.
JAES Special Issue on Participatory Sound And Music Interaction Using Semantic Audio
I'm guest editor of the AES journal Special Issue on Participatory Sound And Music Interaction Using Semantic Audio. After receiving nearly 30 strong submissions, the first volume, containing 9 accepted papers, has now been published.
Audio Mostly 2017 at QMUL
I was general chair of the Audio Mostly conference, held in cooperation with the ACM. The conference, themed "Augmented and Participatory Sound and Music Experiences", took place at Queen Mary on 23-26 August 2017, with over 120 attendees and a rich programme of papers, posters, demos, installations and workshops.