
. . . And the Computer Plays Along

Software can improvise on the spot to accompany the performance of live musicians.

A concert held at the Massachusetts Institute of Technology (MIT) in the fall to celebrate the opening of the university’s new museum included a performer that was invisible to the audience but played a key role in shaping the melodic sound: an artificial intelligence (AI) system that responded to the musicians and improvised in real time.

In a piece from “Brain Opera 2.0,” the system starts by growling to the trumpet, then finds pitches with the trombone, becomes melodic with the sax, and ultimately syncs with the instruments by the time everyone comes in, explains Tod Machover, a music and media professor at MIT and head of the MIT Media Lab, who served as composer/conductor of the two-night concert event.

The “living, singing AI” system was designed by Manaswi Mishra, one of Machover’s Ph.D. students. “We developed a machine learning-based model that could react to musician input in real time, and then ‘fed’ this model with a vast amount of music from many countries, styles, and historic periods, as well as with all kinds of human voices making every conceivable kind of vocal sound,” Machover said. The system also drew from a vast library of percussive instruments and sounds from around the world, which it then used to improvise with the performers.

From there, the AI system reacted “independently to whatever it heard, finding a balance between imitation and innovation while modulating parameters so that different parts of the model could be employed at different moments,” Machover said. During Brain Opera 2.0, the musicians engaged in guided improvisation with the system.

Machover says he, Mishra, and all the musicians and the audience “had the sense that something really new and fresh was happening; a burgeoning musical intelligence that constantly presented surprising but relevant material that inspired musicians to go places—bringing audiences with them—they would not have otherwise.”

Figure. At the TIME SPANS Festival in August 2022 in New York City, a Yamaha Disklavier piano served as an interactive “virtual improviser” during a performance of George Lewis’ Tales of the Traveller. Lewis first programmed the piece in 1987.

Interactive software that can improvise a musical performance to accompany live musicians came to the fore in 1987 with Voyager, an AI system created by visionary musician and composer George Lewis. In the decades since, some new systems have cropped up, including Wekinator, which detects instruments and their sounds and responds in real time to an action performed by a human. But Voyager remains the standard bearer.


Voyager was developed on a Yamaha CX5 computer with a built-in synthesizer; Lewis designed software that could drive that synthesizer directly, rather than requiring a separate one. He debuted Voyager in the Netherlands, but soon discovered the 8-bit machine was too slow, so the next iteration ran on the 16-bit Atari ST.

Lewis is not certain how many iterations there have been of Voyager, but recalls the system went from initially improvising with 16 voices to 64 voices by the late 1990s. In 2004, it was also designed to be a pianist, debuting at Carnegie Hall using a Yamaha Disklavier, an acoustic grand piano that can be controlled by a computer. Voyager interacted with the orchestra and came up with its own music, Lewis says.

The impetus for Voyager was to understand how a computer would improvise with music and “get the computer to do something that was worth interacting with” without a human in charge, he says.

Occasionally, Voyager “does things you don’t expect and sometimes it clashes” with whatever it is playing with, but Lewis thinks that is fine. “I may think it’s not hearing what’s going on, but in fact, it was listening to what’s going on and maybe it was me,” he says. “That’s what happens when you put an autonomous agent out there and you have to accept its decision. You can try to influence its decisions, but basically … you can’t tell it what to do.”

Voyager continues to be updated by Lewis and his engineers from time to time, including for a concert in the U.K. in November 2022. For that concert, Lewis says they built a machine learning front end, an AI designed to recognize musical gestures from the musicians (in much the way face recognition identifies faces), because Voyager “is not good at recognizing simple gestures,” he says. “So it will listen to input from the musicians and try to find themes that … we taught it to recognize, and then take actions on the basis of what it recognizes.”
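
The details of that front end have not been published, but the general pattern Lewis describes, matching live input against themes the system was taught and mapping each recognized theme to an action, can be sketched roughly. In the hypothetical Python sketch below, the themes, the contour representation, and the nearest-match rule are all illustrative assumptions, not the actual system.

    # Hypothetical sketch of a "recognize a taught theme, then act" front end.
    # This is NOT the machine learning front end built for Voyager; the themes,
    # the contour representation, and the matching rule are assumptions made
    # purely to illustrate the idea described in the quote above.
    import numpy as np

    # Themes "taught" in advance, represented here as short pitch contours
    # (intervals in semitones relative to the first note).
    TAUGHT_THEMES = {
        "rising_motif": np.array([0.0, 2.0, 4.0, 5.0]),
        "falling_motif": np.array([0.0, -1.0, -3.0, -5.0]),
    }

    # Each recognized theme is mapped to an action the improvising system could take.
    ACTIONS = {
        "rising_motif": "answer with denser, faster material",
        "falling_motif": "thin the texture and sustain long tones",
    }

    def recognize(live_contour: np.ndarray) -> str:
        """Return the taught theme whose contour is closest (Euclidean distance) to the input."""
        return min(TAUGHT_THEMES,
                   key=lambda name: float(np.linalg.norm(TAUGHT_THEMES[name] - live_contour)))

    if __name__ == "__main__":
        heard = np.array([0.0, 2.0, 3.0, 5.0])   # a slightly varied rising line from a player
        theme = recognize(heard)
        print(f"recognized '{theme}' -> {ACTIONS[theme]}")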

Lewis himself no longer plays the trombone. “As the godfather said, the Don is retired,” he chuckles.

He believes there are not many AI systems for improvising with live musicians because “in music, there has always been a bias against improvisation. There’s a notion, at least in Western music: composing good, improvisation bad,” Lewis says.


Neil Leonard, a Berklee College of Music professor and artistic director of the Berklee Interdisciplinary Arts Institute, produced a series of Lewis’ concerts using Voyager in Boston in 2021. While Leonard is not certain if Voyager was the first musical improvisation software, he says the use of it prompted a shift in music-making. “I saw this shift that had a tremendous impact on me,” Leonard says, because he had never imagined a computer would be able to improvise music on the spot.

The advent of the musical instrument digital interface (MIDI) protocol in the early 1980s enabled all electronic musical instruments to talk to one another, according to Leonard. “And from that point, everything from the cheapest synthesizers such as Casio to the most extravagant synthesizers or any computer of that time could all communicate,” he says.
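
That universality comes from MIDI’s very small vocabulary of messages. As a rough illustration (not code from Voyager or any other system described here), the sketch below builds the three-byte “note on” and “note off” messages defined by the MIDI 1.0 specification; any MIDI-compliant instrument, regardless of manufacturer or price, reads the same bytes the same way.

    # Minimal illustration of MIDI 1.0 "note on" / "note off" channel messages.
    # Not code from Voyager or any other system described in this article.

    def note_on(note: int, velocity: int = 64, channel: int = 0) -> bytes:
        """Three bytes: status (0x90 | channel), note number 0-127, velocity 0-127."""
        return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

    def note_off(note: int, channel: int = 0) -> bytes:
        """Three bytes: status (0x80 | channel), note number, release velocity 0."""
        return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])

    if __name__ == "__main__":
        # Middle C (note 60) at moderate velocity; a cheap home keyboard and a
        # concert Disklavier both interpret these same bytes identically.
        print(note_on(60).hex(" "))   # 90 3c 40
        print(note_off(60).hex(" "))  # 80 3c 00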

The evolution of AI-based music improvisation software may also be attributed in part to the growing availability and downward price trends of microprocessors, as well as computers becoming increasingly affordable and widely available in the 1980s, Leonard says.

Then, “Self-taught computer programmers popped up all over the place,” he says. “The computers were relatively simple” to use. That set the stage for people like Leonard to realize they could not only afford to buy a computer, but teach themselves to program it and up their game, he says.

“So I think George was really … [in] the first wave of self-taught computer programmers who had really fascinating musical ideas that got people very excited,” Leonard says.

When Lewis created Voyager, “you pretty much had to pick a computer language and write it all yourself,” he added. Today there are widely used languages, most notably Max, which Leonard credits with making entry into creating interactive/improvisational music much easier.

In fact, Lewis says the current version of Voyager is written in Max. He is currently working with programmers using the TensorFlow machine learning platform and is trying to connect the two.

Another popular option for creating interactive software to generate/improvise music on the spot is an app called iReal, which acts as a virtual band and has moved into the mainstream, Leonard says.

“I bet half of the 5,000 students at Berklee have iReal on their phone … So what was once very uncommon in music school is now very, very common,” he says.

It is not easy to buy software that does what Voyager set out to do, Leonard notes, explaining that “Voyager is tailor-made for one person’s idea for how a computer should tailor music.” Creating software is part of the artistic process, so musicians—himself included—must create software to work the way they want it to be involved with the music.

With Lewis as his inspiration, that is exactly what Leonard has done: develop his own musical improvisation software to improvise with him in real time. He spent 12 years touring the world playing with it and also released a CD of that music called Timaeus.

Emilio Guarino, a freelance musician and producer and founder of Glitch Magic, which develops music production tools, was one of the 10 musicians who performed at the MIT fall concerts, where he played the double bass.

“At predefined moments during the [AI] piece, Tod’s score [gave] us instructions of how to play,” such as whether to play wildly irregular rhythms, arpeggio fragments, and so forth, Guarino recalls. The pitches were sometimes indicated, but in other sections, the musicians were totally improvising.

Microphones were used to analyze these improvisations in real time, then play back a response with which the musicians could interact and improvise. “The response created by the AI system is based on very, very large sets of sounds … that it composites together to generate a response,” Guarino says.
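
Guarino’s description points to a common listen-analyze-respond loop: capture a short audio frame, extract a feature such as pitch, and use it to shape an answer. The Python sketch below is only a rough illustration of that loop under simple assumptions (an autocorrelation pitch estimate and a toy “answer a fifth above” rule); it is not the model used in Brain Opera 2.0.

    # Rough illustration of a listen-analyze-respond loop, under simple assumptions.
    # The pitch estimator and the response rule are illustrative only; this is not
    # the system used in Brain Opera 2.0.
    import numpy as np

    SAMPLE_RATE = 44_100

    def estimate_pitch(frame: np.ndarray, fmin: float = 60.0, fmax: float = 1000.0) -> float:
        """Crude fundamental-frequency estimate of one audio frame via autocorrelation."""
        frame = frame - frame.mean()
        corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        lo, hi = int(SAMPLE_RATE / fmax), int(SAMPLE_RATE / fmin)
        lag = lo + int(np.argmax(corr[lo:hi]))
        return SAMPLE_RATE / lag

    def choose_response(pitch_hz: float) -> float:
        """Toy response rule: answer a perfect fifth (3:2) above whatever was heard."""
        return pitch_hz * 1.5

    if __name__ == "__main__":
        # Stand-in for a microphone buffer: 0.1 s of a 220-Hz tone plus a little noise.
        t = np.arange(int(0.1 * SAMPLE_RATE)) / SAMPLE_RATE
        frame = np.sin(2 * np.pi * 220.0 * t) + 0.05 * np.random.randn(t.size)
        heard = estimate_pitch(frame)
        print(f"heard ~{heard:.1f} Hz, responding at ~{choose_response(heard):.1f} Hz")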

It was always surprising to hear what the AI system came up with, he says. Sometimes, it echoed the notes or rhythms the musicians played with very different timbres, or sometimes it responded with “something completely weird and abstract,” Guarino says. “So far, I haven’t heard it do the same thing twice, so it’s very interesting to improvise with.”


Alex Inglizian, co-director/technical director of Chicago-based non-profit Experimental Sound Studio, has also developed interactive music software using Max. He says he was always attracted to improvisation.

“George builds software that is reactionary and listens to music and responds in different ways,” Inglizian says. “I mostly build instruments that allow performers to improvise with the computer as an instrument.”

By the time he began studying computer programming and experimental music around 2003, Voyager “was very vintage technology,” Inglizian says. “The quality of the sound and its capabilities were very dated because computers were so much more powerful, but I’m definitely inspired by these earlier composers like George because the ideas and ways he talks about [interactive music software] are still relevant.”

For Machover, “Voyager sounds as fresh as it did in 1987.” However, he adds, “That said, so many things have developed since then. George’s systems generally measure MIDI information, breaking musical performances down to the essentials of musical language.”

Today, there are many new techniques that are used to measure performance “feeling” and intention in broader ways, “through audio and gesture analysis, emotional characterization, description of musical tension and direction,” Machover says.

Echoing Leonard, Machover adds that Voyager was based on musical rules derived from Lewis’ own music, meaning the system sounded appropriate when Lewis or someone familiar with his style was playing. This was not necessarily the case when very different music was introduced, he says.

That has all changed, and today’s systems “can be based on much vaster collections of music and sound, can extract more generalized assumptions from musical performance or improvisation, and can adapt more fluidly as a session develops,” Machover says.

“This means that results from newer systems can adapt more easily to any performer, both by fitting astonishingly well to whatever is ‘heard’ and by pulling the player somewhere really unexpected, but often rewarding, exciting, and even revelatory.”

Further Reading

Bainbridge, L.
Ironies of automation. New Technology and Human Error, J. Rasmussen, K. Duncan, J. Leplat (Eds.). Wiley, Chichester, U.K., 1987, 271–283.

Hernandez, J. and Rubin, S.
Human-Machine Interactive Composition Using Machine Learning. University of California at Berkeley.

Bresson, J. and Chadabe, J.
Interactive Composition: New Steps in Computer Music Research. 2017.

Eigenfeldt, A.
Real-time Composition or Computer Improvisation? A composer’s search for intelligent tools in interactive computer music. 2007. School for the Contemporary Arts, Simon Fraser University, Burnaby, BC, Canada.

Robertson, A. and Plumbley, M.D.
Real-time Interactive Musical Systems: An Overview. 2006.

A sampling of the MIT AI system in action during that segment of Brain Opera 2.0 https://www.dropbox.com/s/ev798puhcumgcsa/BrainOpera2dot0-ai-v2.mov?dl=0

Experimental Sound Studio’s YouTube channel has a four-part documentary all about improvisation, including the role of George Lewis and Voyager. https://youtu.be/2jDMkXDieNs

Lewis, G.
Voyager https://binged.it/3lmbcdG

Lewis, G.
“Voyager” Live at the Bang on a Can Marathon 5/3/20 https://www.youtube.com/watch?v=ncy4_FHX3Jc
