Motivated music: automatic soundtrack generation for film
Automatic music composition is a fast-moving field which, from roots in serialism, has developed techniques spanning subjects as diverse as biology, chaos theory and linguistic grammars. These algorithms have been applied to specific aspects of music creation as well as to live performance. However, such approaches are dedicated to generating music independent of any other driving medium, whereas human-composed music is most often written with a purpose or situation in mind. Furthermore, composition is naturally hierarchical, whereas reliance on a single algorithm renders it a monolithic task. To address these issues, a model should encapsulate a sense of composer motivation whilst not relying on a single algorithm for the composition process. Accordingly, this work describes a new framework for generating music from film in a media-driven, distributed manner. This comprises the initial annotation of the media using our new OntoMedia ontology; the mapping of the annotated information into parameters suitable for compositional algorithms; the design and implementation of an agent framework suitable for distributing multiple composing algorithms; and finally the creation of agents capable of composing musical elements such as rhythm and melody. A case study demonstrates the stages of the composition process, from media annotation to automatic music generation.
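The pipeline outlined above (media annotation, mapping to compositional parameters, and independent composing agents) can be illustrated with a minimal sketch. All class names, parameter names, and mapping rules here are hypothetical stand-ins for illustration only; they are not taken from the thesis or from the OntoMedia ontology itself.

```python
# Hypothetical sketch of a media-driven composition pipeline:
# annotate a scene, map the annotation to musical parameters,
# then hand those parameters to independent composing agents.
from dataclasses import dataclass


@dataclass
class SceneAnnotation:
    """Simplified stand-in for an OntoMedia-style media annotation."""
    mood: str    # e.g. "tense", "calm"
    pace: float  # 0.0 (static) .. 1.0 (frantic)


def map_to_parameters(ann: SceneAnnotation) -> dict:
    """Map annotated information to compositional parameters (toy rules)."""
    tempo = int(60 + ann.pace * 80)  # faster scenes -> faster tempo
    mode = "minor" if ann.mood == "tense" else "major"
    return {"tempo_bpm": tempo, "mode": mode}


class RhythmAgent:
    """One agent in the distributed framework, composing rhythm only."""

    def compose(self, params: dict) -> list:
        # Denser subdivisions at higher tempi (toy rule): eighth notes
        # above 100 bpm, otherwise quarter notes.
        return [0.25] * 8 if params["tempo_bpm"] > 100 else [0.5] * 4


class MelodyAgent:
    """A second agent, composing pitches independently of rhythm."""

    SCALES = {"major": [0, 2, 4, 5, 7, 9, 11],
              "minor": [0, 2, 3, 5, 7, 8, 10]}

    def compose(self, params: dict, length: int) -> list:
        # Walk up the chosen scale from middle C (MIDI note 60).
        scale = self.SCALES[params["mode"]]
        return [60 + scale[i % len(scale)] for i in range(length)]


# End-to-end: annotation -> parameters -> per-agent composition.
ann = SceneAnnotation(mood="tense", pace=0.8)
params = map_to_parameters(ann)
rhythm = RhythmAgent().compose(params)
melody = MelodyAgent().compose(params, length=len(rhythm))
```

The point of the sketch is the division of labour: the mapping step isolates media semantics from musical decisions, and each agent composes one musical element from the shared parameters, so composition is no longer a single monolithic algorithm.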