AI and Music: From Composition to Expressive Performance

Authors

  • Ramon Lopez de Mantaras
  • Josep Lluis Arcos

DOI:

https://doi.org/10.1609/aimag.v23i3.1656

Abstract

In this article, we first survey the three major types of computer music systems based on AI techniques: (1) compositional, (2) improvisational, and (3) performance systems. Representative examples of each type are briefly described. Then, we look in more detail at the problem of endowing the resulting performances with the expressiveness that characterizes human-generated music. This is one of the most challenging aspects of computer music, and it has only recently been addressed. The main problem in modeling expressiveness is to grasp the performer's "touch," that is, the knowledge applied when performing a score. Humans acquire this knowledge through a long process of observation and imitation. For this reason, previous approaches, based on musical rules intended to capture interpretation knowledge explicitly, had serious limitations. An alternative approach, much closer to the observation-imitation process in humans, is to directly use the interpretation knowledge implicit in examples extracted from recordings of human performers rather than trying to make such knowledge explicit. In the last part of the article, we report on a performance system, SAXEX, based on this alternative approach, that is capable of generating high-quality expressive solo performances of jazz ballads from examples of human performers within a case-based reasoning (CBR) system.
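The contrast drawn in the abstract between rule-based and example-based approaches can be illustrated with a minimal case-based reasoning cycle: retrieve the stored performance fragments whose musical context is most similar to the current one, then reuse their observed expressive deviations. The sketch below is not the SAXEX implementation; the case structure, feature encoding, similarity measure, and adaptation rule are illustrative assumptions only.

```python
# Minimal sketch of a retrieve-and-reuse CBR cycle for expressive performance.
# NOTE: illustrative only; the case structure, features, and adaptation rule
# are assumptions, not the actual SAXEX design.
from dataclasses import dataclass

@dataclass
class Case:
    features: tuple[float, ...]  # hypothetical melodic/harmonic context descriptors
    dynamics: float              # loudness deviation observed in a human performance
    duration: float              # duration (rubato) factor observed in a human performance

def distance(a: tuple[float, ...], b: tuple[float, ...]) -> float:
    # Simple Euclidean distance between context descriptors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def perform_note(case_base: list[Case], context: tuple[float, ...], k: int = 3):
    # Retrieve: the k cases most similar to the current musical context.
    nearest = sorted(case_base, key=lambda c: distance(c.features, context))[:k]
    # Reuse: average their expressive deviations and apply them to the new note.
    dyn = sum(c.dynamics for c in nearest) / len(nearest)
    dur = sum(c.duration for c in nearest) / len(nearest)
    return dyn, dur

# Usage: a tiny case base (hypothetically) extracted from recorded performances.
cases = [Case((0.2, 0.8), dynamics=+0.10, duration=1.05),
         Case((0.9, 0.1), dynamics=-0.20, duration=0.95),
         Case((0.3, 0.7), dynamics=+0.15, duration=1.10)]
print(perform_note(cases, context=(0.25, 0.75), k=2))
```

The point of the example is the knowledge source: the expressive values come from stored human performances rather than from hand-written interpretation rules.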

Published

2002-09-15

How to Cite

de Mantaras, R. L., & Arcos, J. L. (2002). AI and Music: From Composition to Expressive Performance. AI Magazine, 23(3), 43. https://doi.org/10.1609/aimag.v23i3.1656

Issue

Vol. 23 No. 3 (2002)

Section

Articles