Character-based adaptive generative music for film and video games using Deep Learning and Hidden Markov Models
Music is an essential element of audiovisual media such as films and games. It contributes greatly to an immersive experience by establishing the setting, enhancing the storyline and often helping to develop the characters. A common technique among composers for progressing the overall narrative is to write short melodic passages, or themes, based on a character's personality.
The creation of adaptive music for audiovisual media is a process that involves film and sound editors, audio programmers and composers. For video games, it is done by sectioning the pre-written tracks into different levels of intensity and using middleware such as FMOD, a sound-effects and audio engine for adaptive game audio, to trigger the next level of intensity based on in-game cues.
This research aims to create a model that takes the original theme written by the composer, uses hidden Markov models (HMMs) to provide the basic compositional framework, and Long Short-Term Memory (LSTM) neural networks to ensure that the generated musical material provides continuity and maintains a similar style.
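To illustrate the Markov-model side of this idea, the following is a minimal sketch (not the project's actual implementation) of a first-order Markov chain trained on a short theme, which then samples a melodic continuation. The theme's pitches are hypothetical MIDI note numbers chosen for the example:

```python
import random

def train_markov(theme):
    """Build first-order pitch-to-pitch transition lists from a theme."""
    transitions = {}
    for a, b in zip(theme, theme[1:]):
        transitions.setdefault(a, []).append(b)
    return transitions

def generate(transitions, start, length, seed=0):
    """Sample a melodic continuation by walking the transition chain."""
    rng = random.Random(seed)
    note = start
    out = [note]
    for _ in range(length - 1):
        choices = transitions.get(note)
        if not choices:  # dead end: no observed successor for this pitch
            break
        note = rng.choice(choices)
        out.append(note)
    return out

# Hypothetical character theme (MIDI pitches around middle C).
theme = [60, 62, 64, 62, 60, 64, 65, 64]
model = train_markov(theme)
variation = generate(model, start=60, length=8)
```

A full HMM would add hidden states (e.g. harmonic functions) emitting the observed pitches; in the project described above, an LSTM would further filter or extend such material to keep the generated passages stylistically consistent with the original theme.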
C4DM theme affiliation:
Computational Creativity, Generative Music.