
Variations2: Retrieving and Using Music in an Academic Setting

University music students, teachers, and researchers discover and retrieve musical works and navigate within them, then create annotations and share them with other users.

A music graduate student 10 years from now might want to compare several performances of Johannes Brahms’s Piano Concerto No. 2 in B-flat Major. He’ll be able to turn to his computer, open a music search tool, and type “brahms” in the composer field and “concerto” in the title field. Scanning the search results, he’ll see the work he wants and click on it, generating a list of all available recordings and scores of that work. He’ll select recordings of three performances, along with an encoded version of the score, and create bookmarks for each of them. He’ll instruct the system to synchronize each recording with the score, then use a set of controls that allow him to play back the piece and view the score, cycling among the three performances on the fly.

To help him navigate within the piece, he’ll create form diagrams for each of its four movements by dividing a timeline of each movement into sections and grouping the sections into higher-level structures. He’ll then use the timeline to move around within the piece, comparing the performances and storing his notes as text annotations attached to individual time spans. To find a particular section he’s interested in, he might play a sequence of notes on a musical instrument digital interface (MIDI) keyboard attached to his computer, prompting the system to locate the sequence in the score. When he finishes, he’ll export the timelines as an interactive Web page and email the page to his professor for comment.

This scenario illustrates just one example of the promise music information retrieval technology holds not only for casual music listeners but also for scholars, teachers, students, and performers of music. These expert users require systems with more sophisticated capabilities for discovering, organizing, and using music content than are found in typical music retrieval systems such as Apple Computer’s iTunes (www.apple.com/itunes). Several software platforms focus specifically on music in an academic context, among them the Humdrum toolkit for music research [5], commercial subscription content services (such as the Classical Music Library, www.alexanderstreet.com/products/clmu.htm), and the Variations2 digital music library system (variations2.indiana.edu/research) developed at Indiana University. Here, we describe the Variations2 system in terms of its current functionality and its potential as a platform for integrating the technologies that could eventually enable such scenarios.

In 1996, the Cook Music Library at Indiana University implemented the Variations system [4] to give students and faculty of IU’s Jacobs School of Music in Bloomington improved access to sound recordings from its vast and growing collections. One of the first online music distribution systems, Variations served up digitized sound recordings via streaming audio over an asynchronous transfer mode network. As of early 2006, it made approximately 11,000 recordings available to IU users.

In 2000, the IU Office of the Vice President for Information Technology received a National Science Foundation grant to create a new digital music library system, known as Variations2, as a next-generation version of Variations for research in digital library system architecture, metadata, network services, usability, intellectual property rights, and music pedagogy. Though originally developed as a research system, Variations2 was deployed in production in May 2005 to replace the original Variations system. In addition to recordings, Variations2 provides online access to both scanned and encoded musical scores. It was implemented as a Java client-server application, with a Java Swing client communicating with the server through Java remote method invocation. The client is loosely Web-integrated by way of its ability to launch as a helper application when a user clicks on a link to Variations2 content.
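
To make the client-server arrangement concrete, the following is a minimal Java RMI sketch of this kind of setup; the MusicLibraryService interface, its search method, the registry name, and the host are all hypothetical, not the actual Variations2 service API.

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.util.List;

    // Hypothetical remote interface shared by the Swing client and the server.
    interface MusicLibraryService extends Remote {
        // Return identifiers of media objects matching a simple fielded query.
        List<String> search(String composer, String title) throws RemoteException;
    }

    public class Variations2ClientSketch {
        public static void main(String[] args) throws Exception {
            // Look up the remote service in an RMI registry on the library server.
            Registry registry = LocateRegistry.getRegistry("library.example.edu", 1099);
            MusicLibraryService service =
                    (MusicLibraryService) registry.lookup("MusicLibraryService");

            // A Swing search window would issue calls like this one.
            for (String mediaObjectId : service.search("brahms", "concerto")) {
                System.out.println(mediaObjectId);
            }
        }
    }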


Discovering Music

The IU music curriculum focuses on a core repertoire of Western art music. One especially important characteristic of this corpus, often referred to as “classical” music, is its focus on the musical work as an abstract notion, realized in various performances and printed editions over time. Variations2 is based on a relational metadata model [8] centered on the work, which manifests itself in instantiations, or particular recorded performances and score editions (see Figure 1). An instantiation appears on a container, a physical recording (such as an album) or printed score, and multiple instantiations may appear on a single container. The digitized version of a container delivered to users in Variations2 is called a media object.

Works, instantiations, and containers can each involve contributors: individuals or groups responsible for their creation. The composer is responsible for a work, while a performer, conductor, or editor is responsible for an instantiation of that work. While the Variations2 model is designed to meet the needs of classical music, it is based in part on the library community’s Functional Requirements for Bibliographic Records model [6] and is an example of the general trend in libraries toward entity-relationship modeling for resource description.
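
As a rough sketch of these entities and their relationships, the plain Java classes below mirror the work/instantiation/container/contributor structure; the class and field names are our own shorthand rather than identifiers from the Variations2 code or schema.

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative versions of the four core Variations2 entities.
    class Contributor {                       // person or group, e.g., composer or performer
        String name;
        String role;                          // "composer", "performer", "conductor", "editor"
        Contributor(String name, String role) { this.name = name; this.role = role; }
    }

    class Work {                              // the abstract musical work
        String title;
        Contributor composer;                 // the contributor responsible for the work
        List<Instantiation> instantiations = new ArrayList<>();
        Work(String title, Contributor composer) { this.title = title; this.composer = composer; }
    }

    class Instantiation {                     // a particular performance or score edition
        Work work;
        List<Contributor> contributors = new ArrayList<>();   // performers, conductor, or editor
        Container container;                  // the album or printed score it appears on
        Instantiation(Work work, Container container) { this.work = work; this.container = container; }
    }

    class Container {                         // physical recording or score
        String title;
        String mediaObjectId;                 // identifier of its digitized media object
        Container(String title, String mediaObjectId) { this.title = title; this.mediaObjectId = mediaObjectId; }
    }

In this sketch, several Instantiation objects can point to the same Container, mirroring an album that carries multiple works.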

The separation of entities in the Variations2 metadata model enables more powerful and precise metadata-based searching for classical music than is found in many other music search systems [10]. The basic search window (see Figure 2) allows users to enter the name of a creator/composer of a musical work, the name of a performer, the name of the work itself, the work’s musical key, and/or the type of media on which the work appears. An advanced search tab adds more detailed options (such as the ability to search by subject heading or publisher).

In the lower section of the search window, results are displayed as hyperlinks allowing users to drill down to an actual recording or score. Clicking a blue information icon opens a new window with detailed information about the associated record in the database.
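
As an illustration of why separating these entities helps, the sketch below (reusing the hypothetical Work, Instantiation, and Contributor classes above) applies a composer criterion at the work level and a performer criterion at the instantiation level; it is not the actual Variations2 search implementation.

    import java.util.ArrayList;
    import java.util.List;

    // Fielded matching against separate entities rather than a flat record.
    class FieldedSearchSketch {
        static List<Instantiation> search(List<Work> catalog, String composer, String performer) {
            List<Instantiation> hits = new ArrayList<>();
            for (Work work : catalog) {
                // The composer criterion applies at the work level.
                if (composer != null
                        && !work.composer.name.toLowerCase().contains(composer.toLowerCase())) {
                    continue;
                }
                // The performer criterion applies at the instantiation level.
                for (Instantiation inst : work.instantiations) {
                    if (performer == null || hasPerformer(inst, performer)) {
                        hits.add(inst);
                    }
                }
            }
            return hits;
        }

        private static boolean hasPerformer(Instantiation inst, String performer) {
            for (Contributor c : inst.contributors) {
                if (c.name.toLowerCase().contains(performer.toLowerCase())) return true;
            }
            return false;
        }
    }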

Variations2 employs a disambiguation process to aid users in finding instantiations of a specific work. If the user’s query matches a single work, as in a search for the title “moonlight sonata,” the system immediately presents a results list that includes all instantiations (recordings and scores) of the work. When the query matches multiple works, the user is asked to disambiguate the query by choosing from works matching the given criteria. Each work in the system can have alternate titles, any of which may match the search terms. Matched alternate titles appear beneath the primary title in the result set, as in Figure 2. To further aid disambiguation, works may also be accompanied by a graphical representation of their primary musical theme.

Disambiguation can involve multiple steps (see the table here), the sequence of which depends on the query fields the user completes, as well as on the contents of the library, but it typically involves only one or two steps and never more than four. Details on the machinery underlying the Variations2 search function appear in [12].
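
The single-match versus multiple-match decision can be sketched as follows, again reusing the hypothetical Work and Instantiation classes; the actual logic described in [12] handles additional steps and fields.

    import java.util.List;

    // Illustrative disambiguation flow, not the Variations2 search code.
    class DisambiguationSketch {
        interface SearchUi {
            Work chooseAmong(List<Work> candidates);            // the work-selection step
            void showInstantiations(List<Instantiation> hits);  // the final result list
        }

        static void runQuery(List<Work> matchingWorks, SearchUi ui) {
            if (matchingWorks.isEmpty()) return;                // nothing found
            Work chosen = (matchingWorks.size() == 1)
                    ? matchingWorks.get(0)                      // single match: no extra step
                    : ui.chooseAmong(matchingWorks);            // multiple matches: ask the user
            ui.showInstantiations(chosen.instantiations);       // all recordings and scores of the work
        }
    }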

In the real world of user needs, music retrieval is not an end in itself. For this reason, the Variations2 client software includes a rich set of tools for examining, analyzing, and annotating music. The audio player (Figure 3, upper left) supports basic listening and bookmarking tasks. The hierarchical structure of longer classical works is represented in the tree-based display of tracks, so movements are listed within their parent works. Listeners bookmark and annotate locations of interest within recordings.
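
For illustration, such a work/movement tree can be assembled from standard Swing components; the labels below come from the Brahms concerto in the opening scenario and are not drawn from the Variations2 code.

    import javax.swing.JTree;
    import javax.swing.tree.DefaultMutableTreeNode;

    // A tree in which movements are listed within their parent work.
    public class TrackTreeSketch {
        public static JTree buildTree() {
            DefaultMutableTreeNode work =
                    new DefaultMutableTreeNode("Brahms: Piano Concerto No. 2 in B-flat Major");
            work.add(new DefaultMutableTreeNode("I. Allegro non troppo"));
            work.add(new DefaultMutableTreeNode("II. Allegro appassionato"));
            work.add(new DefaultMutableTreeNode("III. Andante"));
            work.add(new DefaultMutableTreeNode("IV. Allegretto grazioso"));
            return new JTree(work);
        }
    }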

The Variations2 timeline tool (Figure 3, bottom) supports detailed formal analysis of works, tracks, and movements. It allows students and instructors to insert timepoints that divide a recording into sections and to group those sections into bubbles representing multi-level musical structures. Bubbles can carry labels and annotations that appear in an annotation panel synchronized with audio playback, supporting both navigation and visualization.
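
A minimal sketch of such a timeline structure, with field names of our own choosing: each bubble spans a time range, may nest lower-level bubbles, and can be looked up from the current playback position so the matching annotation can be highlighted.

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative nested "bubble" spanning a time range within a recording.
    class Bubble {
        final long startMillis;
        final long endMillis;
        String label;                                        // e.g., "Exposition"
        String annotation;                                   // note shown in the annotation panel
        final List<Bubble> children = new ArrayList<>();     // lower-level sections

        Bubble(long startMillis, long endMillis, String label) {
            this.startMillis = startMillis;
            this.endMillis = endMillis;
            this.label = label;
        }

        // Most specific bubble containing a playback position, or null if outside this one.
        Bubble bubbleAt(long positionMillis) {
            if (positionMillis < startMillis || positionMillis >= endMillis) return null;
            for (Bubble child : children) {
                Bubble hit = child.bubbleAt(positionMillis);
                if (hit != null) return hit;
            }
            return this;
        }
    }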

Custom playlists are a common feature of commercial music tools (such as iTunes). However, playlists in Variations2 (Figure 3, upper right) offer pedagogical features not typically found in other systems. For example, the beginning and end points of any track sent to a playlist can be adjusted to customize the segment included. The listening drill tool supports students studying for listening tests. They can set up a drill in multiple-choice, fill-in-the-blank, or flash-card format, specifying which library metadata fields to use for the drill or entering custom metadata to be quizzed on. The drill tool randomly selects tracks from the playlist, beginning playback either at a random offset within the track or only at specified locations.
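
A sketch of the drill’s track and offset selection, under the assumption that each playlist entry carries its adjusted start and end points; class and field names are hypothetical.

    import java.util.List;
    import java.util.Random;

    // Illustrative selection logic for a listening drill.
    class ListeningDrillSketch {
        static class PlaylistEntry {
            final String mediaObjectId;
            final long startMillis;                  // adjusted beginning of the segment
            final long endMillis;                    // adjusted end of the segment
            PlaylistEntry(String mediaObjectId, long startMillis, long endMillis) {
                this.mediaObjectId = mediaObjectId;
                this.startMillis = startMillis;
                this.endMillis = endMillis;
            }
        }

        private final Random random = new Random();

        // Pick the next entry to quiz on (assumes a nonempty playlist).
        PlaylistEntry nextEntry(List<PlaylistEntry> playlist) {
            return playlist.get(random.nextInt(playlist.size()));
        }

        // Start playback either at the segment start or at a random point inside it.
        long playbackOffset(PlaylistEntry entry, boolean randomOffset) {
            if (!randomOffset) return entry.startMillis;
            long span = entry.endMillis - entry.startMillis;
            return entry.startMillis + (long) (random.nextDouble() * span);
        }
    }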

Variations2 displays scanned score images in a viewer window that supports bookmarking, zooming, and single- and two-page views. If sufficiently detailed metadata exists for both a score and a recording of a work, Variations2 is able to automatically turn the pages during audio playback. This synchronization makes it easy to jump to the same location in both the score and the recording simultaneously.
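
One simple way to drive such page turning, assuming the metadata provides, for each scanned page, the playback time at which it begins, is a sorted map from playback time to page number; this is an illustration, not the Variations2 implementation.

    import java.util.Map;
    import java.util.NavigableMap;
    import java.util.TreeMap;

    // Map playback positions to the scanned score page that should be displayed.
    class PageTurnSketch {
        // Key: playback time (ms) at which a page begins; value: page number.
        private final NavigableMap<Long, Integer> pageStartTimes = new TreeMap<>();

        void addPage(long startMillis, int pageNumber) {
            pageStartTimes.put(startMillis, pageNumber);
        }

        // The page whose start time is the latest one not after the current position.
        int pageAt(long positionMillis) {
            Map.Entry<Long, Integer> entry = pageStartTimes.floorEntry(positionMillis);
            return entry == null ? 1 : entry.getValue();
        }
    }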

The score viewer also supports annotation. In addition to textual and graphical markup (such as highlighting and drawing), the Variations2 annotation tools include drop-down list widgets offering labels commonly used in music analysis. These widgets can be positioned by the user within the score and used in the classroom for analytical discussions and as assignments; annotated pages can be saved or printed.
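
Since the client is built on Swing, such a label widget could be as simple as a combo box over a fixed vocabulary; the labels below are common analysis terms chosen for the example, not the tool’s actual list.

    import javax.swing.JComboBox;

    // Illustrative drop-down of harmonic and formal analysis labels.
    public class AnalysisLabelBox {
        private static final String[] LABELS = {
            "I", "ii", "iii", "IV", "V", "vi", "vii\u00B0",
            "Exposition", "Development", "Recapitulation", "Coda"
        };

        public static JComboBox<String> create() {
            JComboBox<String> box = new JComboBox<>(LABELS);
            box.setEditable(false);   // the user picks from the fixed vocabulary
            return box;
        }
    }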

Timelines, playlists, and score annotations are all stored as local XML data files that reference media objects in the digital library. Bookmarks and timelines may also be exported as HTML Web pages.
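
As an illustration of such a file, the sketch below writes a bookmark list as XML that references a media object by identifier, using the standard StAX writer; the element and attribute names are invented for the example rather than taken from the actual Variations2 formats.

    import java.io.StringWriter;
    import javax.xml.stream.XMLOutputFactory;
    import javax.xml.stream.XMLStreamWriter;

    // Write a list of bookmarks as XML referencing a digital library media object.
    public class BookmarkExportSketch {
        public static String toXml(String mediaObjectId, long[] offsetsMillis, String[] labels)
                throws Exception {
            StringWriter out = new StringWriter();
            XMLStreamWriter xml = XMLOutputFactory.newInstance().createXMLStreamWriter(out);
            xml.writeStartDocument("UTF-8", "1.0");
            xml.writeStartElement("bookmarks");
            xml.writeAttribute("mediaObject", mediaObjectId);      // reference into the library
            for (int i = 0; i < offsetsMillis.length; i++) {
                xml.writeStartElement("bookmark");
                xml.writeAttribute("offsetMillis", Long.toString(offsetsMillis[i]));
                xml.writeCharacters(labels[i]);                    // the user's note
                xml.writeeEndElement();
            }
            xml.writeEndElement();
            xml.writeEndDocument();
            xml.close();
            return out.toString();
        }
    }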


Variations3

A 2005 grant from the U.S. Institute of Museum and Library Services (www.imls.gov/) supports the use of the Variations2 software at other institutions, initially three or four test sites within the U.S. The goal is a freely available, open-source version of the software, to be known as Variations3, featuring deeper Web integration (through browser-based interfaces and functionality exposed via Web services) and database independence. Variations3 also involves three additional goals:

  • Assess metadata benefits. More formally assess the benefits of the project’s metadata model for music discovery by users and find ways of making metadata creation more cost-effective;
  • Bring in licensed content. Work with providers of licensed music content so their music can be used with the Variations2 search and pedagogical tools; and
  • Sustain software development. Identify a workable plan for sustaining software development and support, perhaps through a consortium or community source effort like that of the Sakai project (www.sakaiproject.org), which is building an online collaboration and learning environment for higher education.

In Variations2, metadata elements are populated from records in IU’s existing online library catalog, supplemented by manually entered information. However, the effort required to create these records is unsustainable, and the project’s researchers are investigating four main methods for streamlining metadata creation for Variations3:

  • Identify musical works. Improve identification of musical works from the container-centric records in the existing online library catalog and provide better mappings from these records to Variations3;
  • Share among institutions. Share records for contributors, works, instantiations, and containers among institutions implementing the system, building on libraries’ tradition of cooperative cataloging;
  • Integrate metadata from other sources. Integrate music metadata from sources outside the library community, including commercial databases (such as Gracenote, www.gracenote.com) and community-based efforts (such as MusicBrainz, www.musicbrainz.org); and
  • Accept user-contributed metadata. Make it possible for users to contribute their own knowledge to the system. Record labels and music enthusiasts create and use a great deal of metadata the Variations3 system might be able to leverage to assist music discovery by users. Variations3 researchers plan to pursue several methods of integrating user-contributed metadata into the system, ranging from a fairly open Wikipedia-style model (www.wikipedia.org) to allowing users to send library staff email messages suggesting additions or changes to existing metadata.

Based on experience with Variations2, preliminary plans for Variations3 include searching and browsing on additional data elements, including instrumentation and musical genre. We also hope to improve discovery by recording and using known relationships between musical works. Further enhancements associated with Variations3 search might also include better support for the discovery of music outside the classical canon for which the metadata model was designed. While a metadata model built around the abstract notion of a particular work is effective for certain types of music, it is less effective for others (such as jazz, popular music, and non-Western music). Alternative models or expansion of the existing model may be necessary to provide equally robust search for genres outside the current Variations focus.


Many information retrieval needs of music experts are satisfied through metadata-based searching, but content-based music retrieval provides additional means of accessing music and has improved greatly over the past decade. The Variations2 system was designed to be modular, allowing connections to content-based searching to be added. For example, a collaboration with a team from the University of Michigan produced Variations2 VocalSearch, or V2V, a prototype that integrates content-based searching with Variations2 [1]. Created by Michigan’s MusArt project [2], VocalSearch takes an audio query in the form of a musical theme sung or hummed by a user, matches it against a database of themes, and generates a list of matching works ranked by similarity. In V2V, VocalSearch writes this list to an XML file and sends the file to Variations2. The user may then access Variations2 content related to each work in the result set and take advantage of all Variations2 tools, including sound-score synchronization and the Timeliner.

The V2V XML file format makes it easy to substitute another content-based retrieval system of the typical document-level kind. Integrating Variations2 with a passage-level retrieval system (such as NightingaleSearch [3]) for finding the locations of all matches might also be possible.
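
For example, a receiving component might read such a ranked list with the standard DOM parser; the <results>/<work> element and attribute names below are invented for illustration, since the actual V2V file format is not spelled out here.

    import java.io.StringReader;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;
    import org.xml.sax.InputSource;

    // Read a hypothetical ranked result list handed over as XML.
    public class ResultListReaderSketch {
        public static void printResults(String xml) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)));
            NodeList works = doc.getElementsByTagName("work");
            for (int i = 0; i < works.getLength(); i++) {
                Element work = (Element) works.item(i);
                // Each entry carries a work identifier and a similarity score.
                System.out.println(work.getAttribute("id") + "  similarity="
                        + work.getAttribute("similarity"));
            }
        }
    }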


Variations2’s synchronized playback and navigation of scores and sound recordings is a timesaver for anyone studying a particular piece of music. But synchronization depends on metadata created through a tedious manual process: tapping the keyboard measure-by-measure while listening to a sound recording and entering the starting measure number for each score page. Several researchers are working on audio-score alignment techniques [9, 11] that may make it possible to generate synchronization information automatically from audio and encoded score data (possibly created through optical music recognition [7]), either upon ingestion of content into a library or on the fly. We hope to integrate one or more automated alignment systems into Variations3.
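
To make the manual process concrete, the sketch below derives per-page display times from the two kinds of hand-entered data just described: a map from measure number to tapped playback time and a map from score page to its starting measure. The method and names are illustrative, not the Variations2 code.

    import java.util.Map;
    import java.util.TreeMap;

    // Combine tapped measure times with per-page starting measures.
    public class SyncMetadataSketch {
        // Returns page number -> playback time (ms) at which that page should appear.
        public static Map<Integer, Long> pageStartTimes(
                Map<Integer, Long> measureTimesMillis,      // tapped: measure number -> time
                Map<Integer, Integer> pageStartMeasures) {  // entered: page number -> first measure
            Map<Integer, Long> result = new TreeMap<>();
            for (Map.Entry<Integer, Integer> e : pageStartMeasures.entrySet()) {
                Long time = measureTimesMillis.get(e.getValue());
                if (time != null) {                         // skip pages whose measure was not tapped
                    result.put(e.getKey(), time);
                }
            }
            return result;
        }
    }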


Conclusion

Variations2 represents a first step toward an integrated music information system for university music students, teachers, and researchers. It includes a solid framework and tools for their basic music discovery needs. It also may evolve into a platform for testing music information retrieval tools and algorithms for effectiveness in the real-world tasks of expert music users. Music retrieval alone achieves little; the ideal environment for scholarship and creativity is one in which users can both find music and make appropriate use of it.


Figures

Figure 1. Example Variations2 metadata model.

Figure 2. Variations2 search window.

Figure 3. Variations2 audio tools.


Tables

Table. Example Variations2 search disambiguation.

References

    1. Birmingham, W., O'Malley, K., Dunn, J., and Scherle, R. V2V: A second variation on query-by-humming. In Proceedings of the Third ACM/IEEE-CS Joint Conference on Digital Libraries (Houston, May 27–31). IEEE Computer Society, Washington, D.C., 2003, 380.

    2. Birmingham, W., Pardo, B., Meek, C., and Shiffrin, J. The MusArt music-retrieval system: An overview. D-Lib Magazine 8, 2 (Feb. 2002); www.dlib.org/.

    3. Byrd, D. Music-notation searching and digital libraries. In Proceedings of the First ACM/IEEE-CS Joint Conference on Digital Libraries (Roanoke, VA, June 24–28). ACM Press, New York, 2001, 239–246.

    4. Dunn, J. and Mayer, C. Variations: A digital music library system at Indiana University. In Proceedings of the Fourth ACM Conference on Digital Libraries (Berkeley, CA, Aug. 11–14). ACM Press, New York, 1999, 12–19.

    5. Huron, D. Music information processing using the Humdrum Toolkit: Concepts, examples, and lessons. Computer Music Journal 26, 2 (Summer 2002), 15–30.

    6. International Federation of Library Associations Study Group on the Functional Requirements of Bibliographic Records. Functional Requirements for Bibliographic Records: Final Report. K.G. Saur, Munich, 1998; www.ifla.org/VII/s13/frbr/frbr.pdf.

    7. MacMillan, K., Droettboom, M., and Fujinaga, I. Gamera: Optical music recognition in a new shell. In Proceedings of the 2002 International Computer Music Conference (Gothenburg, Sweden, Sept. 16–21). International Computer Music Association, San Francisco, 2002, 482–485.

    8. Minibayeva, N. and Dunn, J. A digital library data model for music. In Proceedings of the Second ACM/IEEE-CS Joint Conference on Digital Libraries (Portland, OR, July 14–18). ACM Press, New York, 2002, 154–155.

    9. Müller, M., Kurth, F., and Röder, T. Towards an efficient algorithm for automatic score-to-audio synchronization. In Proceedings of the Fifth International Conference on Music Information Retrieval (ISMIR 2004) (Barcelona, Spain, Oct. 10–14). Universitat Pompeu Fabra, Barcelona, Spain, 2004, 365–372.

    10. Notess, M., Riley, J., and Hemmasi, H. From abstract to virtual entities: Implementation of work-based searching in a multimedia digital library. In Research and Advanced Technology for Digital Libraries: Proceedings of the Eighth European Conference (ECDL 2004) (Bath, U.K., Sept. 12–17). Springer-Verlag, Heidelberg, 2005, 157–167.

    11. Raphael, C. A hybrid graphical model for aligning polyphonic audio with musical scores. In Proceedings of the Fifth International Conference on Music Information Retrieval (ISMIR 2004) (Barcelona, Spain, Oct. 10–14). Universitat Pompeu Fabra, Barcelona, Spain, 2004, 387–394.

    12. Scherle, R. and Byrd, D. The anatomy of a bibliographic search system for music. In Proceedings of the Fifth International Conference on Music Information Retrieval (ISMIR 2004) (Barcelona, Spain, Oct. 10–14). Universitat Pompeu Fabra, Barcelona, Spain, 2004, 489–496.

    This material is based on work supported by the National Science Foundation under Grant No. 9909068 and by a National Leadership Grant from the Institute of Museum and Library Services.
