The melody, harmony, and rhythm of music have always been at the center of human existence. Cultures ancient and modern have pounded drums, puffed on horns, and plucked strings to produce pleasing songs, tunes, and symphonies.
However, in the digital age, research labs are alive with the sound of music generated by artificial intelligence (AI). "It's now possible for computers to produce music that is sensible and meaningful to humans," observes Gil Weinberg, a professor of musical technology at the Georgia Institute of Technology (Georgia Tech), and director of the university's Center for Music Technology.
In fact, in the years ahead, the use of AI promises to change everything from rock concerts to advertising jingles. Already, companies are popping up that sell AI-generated music, and researchers, including Weinberg and a team at IBM, are pushing the boundaries on computational composition.
Says Janani Mukundan, an IBM researcher who specializes in applied machine learning, "Humans are extremely creative and come up with really interesting music, but computers and AI can explore sounds and different styles of music faster and differently. Technology is pushing music into a new frontier."
About a decade ago, Weinberg began exploring computer-generated music in earnest using AI. He and a group of researchers at Georgia Tech developed a marimba-playing robot, Shimon, that writes and plays its own tunes using four arms and eight sticks.
The Georgia Tech group has fed about 5,000 different songs into deep neural nets, ranging from Lady Gaga and the Rolling Stones to Mozart and Miles Davis. Shimon used this data to learn and generate new music. The result? "You see a morphing of styles and new types of melody construction," Weinberg says.
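The idea of learning musical patterns from a corpus and then sampling new sequences can be illustrated far more simply than Shimon's deep neural nets. The sketch below uses a first-order Markov chain over MIDI note numbers; the tiny corpus and the function names are hypothetical, stand-ins for the thousands of songs a system like Shimon's would ingest.

```python
import random
from collections import defaultdict

# Toy corpus: each "song" is a list of MIDI note numbers.
# (Hypothetical data; a real system trains on thousands of songs.)
corpus = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],
    [60, 64, 67, 64, 60, 62, 64, 62, 60],
    [67, 65, 64, 62, 60, 62, 64, 67, 67],
]

# Count note-to-note transitions observed across the corpus.
transitions = defaultdict(list)
for song in corpus:
    for a, b in zip(song, song[1:]):
        transitions[a].append(b)

def generate(start=60, length=8, seed=42):
    """Sample a new melody by walking the learned transition table."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1])
        if not choices:          # dead end: fall back to the start note
            choices = [start]
        melody.append(rng.choice(choices))
    return melody

print(generate())
```

Because the transitions are tallied from the corpus, the output recombines familiar melodic moves into sequences that never appeared verbatim in the training songs, which is the intuition behind the "morphing of styles" Weinberg describes, even though the real system uses far richer models.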
The team at IBM has approached music on a somewhat different note. Mukundan and fellow IBM researcher and musician Richard Daskas have taught a computer to compose music using an unsupervised machine learning algorithm and an understanding of music theory concepts including pitch, rhythm, scale, and phrasing. The initiative is called Watson Beat.
"Music is the universal language," Mukundan says. "We are attempting to have the computer use an emotional cue such as 'happy' or 'sad' along with a type of music, such as Middle Eastern or reggae, and compose something new."
Remarkably, Watson composes music without ever "hearing" human-produced music. "It simply understands music theory. We give it a chord such as C major and other parameters, and it can determine over time what sounds good," Mukundan says. In fact, IBM is opening the platform to developers and musicians "so that they can create entirely new genres and new types of music," Daskas says.
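Watson Beat's actual algorithm is not public, but the rule-based flavor Mukundan describes, composing from a chord and music theory parameters rather than from training data, can be sketched in a few lines. Everything here (the function name, the beat rule) is an illustrative assumption, not IBM's method: chord tones land on strong beats, any scale tone can fill the weak beats.

```python
import random

# C major: chord tones and the surrounding scale (MIDI note numbers).
CHORD_TONES = [60, 64, 67]            # C, E, G
SCALE = [60, 62, 64, 65, 67, 69, 71]  # C major scale

def compose_bar(beats=4, seed=7):
    """Place chord tones on strong beats, scale tones elsewhere --
    a crude stand-in for encoded music theory rules."""
    rng = random.Random(seed)
    notes = []
    for beat in range(beats):
        pool = CHORD_TONES if beat % 2 == 0 else SCALE
        notes.append(rng.choice(pool))
    return notes

print(compose_bar())
```

Other parameters, tempo, mode, or an "emotional cue" that biases note choice, could be layered onto the same scheme; the point is that no recorded music is needed, only rules about what tends to sound good.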
Others are also pushing the musical boundaries. The Sony Computer Science Laboratory in Paris has created Flow Machines, with funding from the European Research Council. The project relies on input from professional musicians to complement AI. The result is new chorales in Bach's style, as well as entirely new pieces based on "Charlie Parker style" and "John Coltrane style." Says lab director François Pachet, "AI can be used to fill in the blanks in a clever way. It can bring more variety and creativity into compositions."
Meanwhile, U.K.-based startup Jukedeck sells computer-generated music for videos, games, and advertisements. It charges large businesses $21.99 per download and $199 to buy the copyright. The company claims its users, including the likes of Coca-Cola, have generated nearly 1 million tracks on its site. "AI opens up the possibility of creating and adapting music in real time, allowing music to be personalized to the listener," says Eliza Legzdina, operations director for Jukedeck.
Although AI and computer-generated music will almost certainly grow in scale in the coming years, researchers say that it won't replace music created by people. For one thing, "Humans are extremely good at creating music and that isn't going to change," Mukundan says. For another, "People identify with actual performers. They want to watch Mick Jagger or Lady Gaga jump around and interact with a guitar or piano," Weinberg says.
Weinberg believes the technology may soon find its way into homes, businesses, and other locations where computers and robots could generate background music. "Instead of simply streaming MP3s, a system could generate the type of music you want to listen to at any given moment." Mukundan says the technology could also be used for music therapy, and to expand the boundaries of music for individuals and others. In fact, IBM is planning to introduce a software app and APIs that allow individuals to create and share their own music through Watson Beat.
Says Daskas, "AI and computers are simply another piece of the music puzzle. Music is so vast and infinite; there are an unlimited number of possibilities to create and consume it. AI and computers expand what's humanly possible."
Samuel Greengard is an author and journalist based in West Linn, OR.