Computational Models for Jazz Piano: Transcription, Analysis, and Generative Modelling
This is a broad topic within the general area of music understanding. For many years, MIR research has been task-oriented, addressing musical aspects such as tempo, beat, form, structure, and pitch content. Each aspect tends to be addressed with little or no regard for the others, and rarely is much effort devoted to understanding "how music works". Despite the increasing availability of data, and of advanced algorithms to process it, AI methods have so far revealed little about music itself.
The aim of this project is to design models of music understanding, with an emphasis on solo jazz piano performance in the style of artists such as Keith Jarrett, Art Tatum, Clare Fischer, Kenny Barron, and Gerald Clayton. I believe that the state of the art (SOTA) in automatic transcription will allow us to model piano performances in the symbolic domain, derived from the raw audio data without human supervision.
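As a minimal sketch of what symbolic-domain modelling could start from, the snippet below computes a duration-weighted pitch-class profile over transcribed note events. The note list is entirely hypothetical, standing in for the output of an automatic piano-transcription system; it is an illustration of the kind of analysis the project envisages, not a proposed method.

```python
from collections import Counter

# Hypothetical transcription output: (onset_sec, midi_pitch, velocity, duration_sec).
# In practice these events would come from an automatic piano-transcription model.
notes = [
    (0.00, 50, 80, 0.5),  # D3
    (0.00, 57, 70, 0.5),  # A3
    (0.50, 60, 75, 0.5),  # C4
    (0.50, 65, 72, 0.5),  # F4
    (1.00, 48, 82, 0.5),  # C3
]

PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def pitch_class_histogram(notes):
    """Duration-weighted pitch-class profile: a simple first step
    towards characterising the harmonic content of a performance."""
    counts = Counter()
    for onset, pitch, velocity, duration in notes:
        counts[PITCH_CLASSES[pitch % 12]] += duration
    total = sum(counts.values())
    return {pc: counts[pc] / total for pc in PITCH_CLASSES if pc in counts}

print(pitch_class_histogram(notes))
```

From such profiles (or richer event-level representations), one could move on to chord labelling, voicing analysis, or generative models of a particular pianist's harmonic language.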
C4DM theme affiliation: Machine Listening, Music Informatics