Teresa Pelinski Ramos
Sensor mesh as performance interface
Despite steady advances in computational modelling of acoustic instrument sounds, digital instruments still fall far short of their traditional counterparts in the nuance of interaction. In typical implementations, even the most sophisticated digital instrument models are restricted to a small number of audio inputs and outputs, or are controlled via the industry-standard MIDI protocol, which dates from 1983. It is thus possible to create realistic, purely digital simulations of familiar instruments, but a performer's ability to play them expressively lags far behind the acoustic original.
This project investigates nuanced, high-bandwidth interaction with instruments through a mesh of sensors spread across the object, making the whole object a locus of interaction, as on an acoustic instrument. However, while this approach addresses I/O bandwidth limitations, it faces a substantial challenge in extracting meaning from the sensor signals, which will have a high degree of redundancy, with gestural information encoded in slight variations between channels. Machine learning offers a variety of approaches to dimensionality reduction and feature extraction from this input space. The goal of the project is to develop suitable machine learning techniques to reduce redundancy and noise in a mesh of sensors, and to evaluate the musical utility of these techniques through a new digital musical instrument.
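To illustrate the kind of dimensionality reduction involved, the sketch below simulates a redundant sensor mesh and recovers a low-dimensional gesture space with PCA via the singular value decomposition. All names, channel counts, and noise levels here are illustrative assumptions, not the project's actual sensor configuration or chosen technique.

```python
import numpy as np

# Hypothetical setup: 16 redundant sensor channels sampled over time.
# Gestural information is a low-dimensional latent signal; each channel
# observes a slightly different mixture of it, plus independent noise.
rng = np.random.default_rng(0)
n_samples, n_channels, n_gestures = 1000, 16, 2

# Latent gestural signals (e.g. pressure and position of a touch).
latent = rng.standard_normal((n_samples, n_gestures))

# Each channel mixes the latent gestures differently -> high redundancy.
mixing = rng.standard_normal((n_gestures, n_channels))
sensors = latent @ mixing + 0.05 * rng.standard_normal((n_samples, n_channels))

# PCA via SVD: project the 16 channels onto the directions that
# carry most of the variance.
centered = sensors - sensors.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S**2 / np.sum(S**2)

# With only 2 underlying gestures, the first two principal components
# should capture nearly all the variance in the 16-channel mesh.
reduced = centered @ Vt[:2].T   # shape (1000, 2)
print(explained[:2].sum())      # close to 1.0
```

In practice, linear methods such as PCA are only a baseline; the variations between channels on a physical sensor mesh may be nonlinear, which is one motivation for exploring machine learning approaches more broadly.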
Augmented Instruments Lab