AI-assisted FM synthesis for sound design and control mapping
Frequency Modulation (FM) synthesis is a well-known technique for creating rich timbres at a low computational cost. However, it fell out of use, mainly because it is difficult to control and because of its often undesirably synthetic sound. Commercial FM products have recently seen a resurgence, but they still rely on dated design paradigms and therefore present the same limitations. Scaling up the architecture to improve it appears unfeasible due to the increase in complexity this would entail.
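To make the cost/richness trade-off concrete, the classic two-operator case can be sketched as follows; the sample rate, frequencies, and modulation index below are illustrative choices, not parameters from the proposal:

```python
import numpy as np

# Minimal two-operator FM sketch: y(t) = sin(2*pi*fc*t + I*sin(2*pi*fm*t)).
sr = 16000                      # sample rate in Hz (arbitrary choice)
t = np.arange(sr) / sr          # one second of time
fc, fm, I = 440.0, 220.0, 2.0   # carrier, modulator, modulation index

# Two sine oscillators suffice: the modulator phase-modulates the carrier.
y = np.sin(2 * np.pi * fc * t + I * np.sin(2 * np.pi * fm * t))

# Sidebands appear at fc +/- k*fm with Bessel-function amplitudes, which is
# how this cheap structure produces a rich spectrum from two oscillators.
spectrum = np.abs(np.fft.rfft(y))
```

With a modulation index of 2, most of the energy already sits in the sidebands rather than the carrier, hinting at why the mapping from parameters to timbre is hard to control by hand.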
On the other end of the spectrum, Deep Neural Networks (DNNs), widely employed as classifiers, have recently been used in generative schemes to produce credible musical instrument samples. Nevertheless, their high computational cost and lack of explainability make them difficult to validate and sonically bound to the dataset they were trained on.
This proposal aims to overcome the limitations of both methods by pairing an extended FM architecture with a DNN that can regress, or describe, natural-sounding spectra in terms of the synthesizer framework's explainable parameters. A successful implementation of such an algorithm could also allow a user to control the synthesizer in creative ways. Moreover, the approach can be extended to accept gestural control strategies or even to perform musical instrument transformation.
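The core idea of describing a target spectrum with explainable synthesis parameters can be illustrated with a toy example, substituting a simple grid search for the proposed DNN; every name and value here is a hypothetical stand-in for the learned mapping:

```python
import numpy as np

sr = 8000                       # illustrative sample rate in Hz
t = np.arange(sr) / sr          # one second of time


def fm_spectrum(index, fc=400.0, fm=100.0):
    """Magnitude spectrum of a two-operator FM tone (toy parameters)."""
    y = np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))
    return np.abs(np.fft.rfft(y))


# Pretend this is a "natural" target whose modulation index is unknown.
target = fm_spectrum(1.5)

# Recover the explainable parameter by minimizing a spectral distance;
# the proposal would delegate this inverse mapping to a DNN instead.
grid = np.linspace(0.0, 4.0, 81)
errors = [np.linalg.norm(fm_spectrum(i) - target) for i in grid]
best_index = grid[int(np.argmin(errors))]
```

Because the recovered quantity is a synthesis parameter rather than raw audio, the result remains interpretable and directly usable for control mapping, which is the property the exhaustive search here shares with the proposed learned regressor.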
C4DM theme affiliation:
Sound Synthesis and Augmented Instruments