In the early 1960s, Barry S. Brook created a standardized form that RILM’s New York offices used to collect abstracts. Printed on thin, color-coded paper, these forms enabled RILM’s first editors to organize and edit submissions efficiently. Each color represented a different language: green for German, yellow for English, orange for Italian, pink for Spanish, and red for Russian. RILM distributed these forms worldwide. As Executive Editor Zdravko Blazeković recalls, the forms were a familiar sight on university campuses around the globe–he first encountered them as a graduate student in Zagreb, long before he later joined RILM in New York.
Editors first filled out the paper forms by hand before transferring the information into an IBM System/360 computer. Used from 1965 to 1978, the System/360 was the first family of computers designed to support both commercial and scientific applications, offering models that ranged from small entry-level systems to large mainframes. The early data-entry program the editors worked with was WYLBUR, a text editor and word processor introduced in 1967. Beyond RILM, WYLBUR was also used at institutions such as the Stanford Linear Accelerator Center (SLAC), the European Organization for Nuclear Research (CERN), the U.S. National Institutes of Health (NIH), and numerous other sites.
One of the initial challenges RILM faced in 1967 was developing a numbered classification system that would allow for a logical and effective organization of abstracts within each issue. In addition, it was crucial to establish a method for creating see-references and cross-references to help readers find related information across different sections. After extensive investigation, comparison, and consultation, the RILM classification system was established and proved to be highly effective, particularly for Western literature. Early RILM classification numbers were paired with the RILM number itself, providing essential information for indexing and referencing. For example, a number like 67/177ap26 indicated that the abstract was from 1967 (67), with 177 referring to the specific entry, “ap” denoting the type of item (in this case, an article in a periodical), and the superscript number (26) signifying the RILM classification, which, in this example, related to the Classical period.
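The anatomy of an early RILM number described above can be sketched in code. The following is a minimal illustration, not RILM's own software; the field names and the assumption that the components always appear in this order are ours.

```python
import re

# Hypothetical parser for early RILM numbers such as "67/177ap26":
# two-digit year (67), entry number (177), item-type code ("ap" =
# article in a periodical), and classification number (26).
RILM_NUMBER = re.compile(
    r"^(?P<year>\d{2})/(?P<entry>\d+)(?P<item_type>[a-z]+)(?P<classification>\d+)$"
)

def parse_rilm_number(number: str) -> dict:
    match = RILM_NUMBER.match(number)
    if match is None:
        raise ValueError(f"not a recognized RILM number: {number!r}")
    parts = match.groupdict()
    return {
        "year": 1900 + int(parts["year"]),        # all early volumes are 19xx
        "entry": int(parts["entry"]),
        "item_type": parts["item_type"],
        "classification": int(parts["classification"]),
    }

print(parse_rilm_number("67/177ap26"))
# {'year': 1967, 'entry': 177, 'item_type': 'ap', 'classification': 26}
```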
The classification number allowed early users of RILM Abstracts to quickly assess whether they were interested in a particular record. However, by the time the fourth issue of RILM Abstracts was published, it became clear that the subject index needed to be based on a more structured intellectual framework. The computer-generated indexes of the early 1960s were overly simplistic, took up too much space, and often wasted the reader’s time. To address this, RILM set out to create an efficient and user-friendly index that merged both authors and subjects into a single alphabetical list, providing enough detail to help users quickly locate the information they sought. In doing so, RILM effectively combined human expertise with machine-assisted techniques, striking a balance that leveraged automation while retaining the flexibility and nuance of human editorial control.
A 1987 search form for Dialog Information Services, an online information retrieval service established in 1966 as the first global system of its kind, designed for and used primarily by researchers. The form guided users in formulating a search strategy.
While the production of RILM Abstracts has consistently depended on computing technology, the systems available in the 1960s and 1970s were unable to fully support its multilingual and multicultural mission. Even the advanced IBM System/370 mainframe, employed between 1970 and 1988, offered only limited functionality for rendering diverse fonts, writing systems, and diacritical marks. From its founding in 1966, however, RILM prioritized the accurate representation of names and terms–including their display in original scripts–as a central objective.
During the 1960s and 1970s, RILM’s Soviet national committee made significant contributions by supplying many records of Russian-language publications. Because the IBM System/370 mainframe could not render authors’ names and titles in Cyrillic script, RILM editors turned instead to the IBM Selectric typewriter, introduced in 1961. The Selectric quickly became a commercial success, with IBM receiving four times the anticipated number of orders within its first year.
IBM Selectric’s typeballs.
The Selectric’s distinctive typeball–a rotating mechanism resembling a golf ball–improved both typing efficiency and the visual quality of text. Its capacity to switch between multiple fonts and alphabets within seconds anticipated the flexibility of later word processors and personal computers. For RILM editors, the interchangeable typeball served almost as a built-in transliteration tool: by installing a Cyrillic typeball, they could produce Russian texts while typing on a standard Roman-letter keyboard.
In July 1965, RILM’s founder, Barry S. Brook, was conducting research in Europe when he attended the International Association of Music Libraries (IAML) congress in Dijon. During the congress, he introduced his ambitious idea of creating an international bibliography of music literature, which he had already named “RILM”. Brook emphasized the transformative potential of using computers for music documentation–an innovative concept at the time. According to Brook, even note-taking would become unnecessary as “any page passing . . . on the screen can immediately be reproduced in paper form or be recalled at will later. We may even dare dream of that famous little black box in which the entire contents of the Library of Congress or of the Bibliothèque Nationale, or both, are stored in speedily recallable form.” Brook envisioned a system where scholars engaged in specific research projects could request bibliographic searches from a computer database and receive automatically generated printouts in response. This forward-thinking approach laid the groundwork for what would become a foundational resource in music scholarship worldwide.
Barry S. Brook in Europe, mid-1960s.
Recognizing that RILM was too small an organization to carry out its ambitious goals alone, Brook reached an agreement with Lockheed Research Laboratory in Palo Alto–a division of Lockheed Missiles and Space Company–to assist in data distribution. Through this partnership, RILM’s bibliographic data could be transmitted via telephone lines, a remarkable innovation given that this took place decades before internet technology became commercially available.
IBM mainframe computer, 1964. Photo courtesy of IBM.
RILM employees at their computers in 1992.
Following the founding of RILM Abstracts, it quickly became evident that its production depended heavily on computing technology. However, the computing capabilities of the 1960s and 1970s were not fully equipped to handle the complexities of RILM’s multilingual and multicultural mission. Even the powerful IBM System/370 mainframe (pictured in the first image above)–used in RILM’s production from 1970 to 1988–had significant limitations in rendering diverse fonts, writing systems, and diacritical marks. Yet from its inception in 1967, RILM was committed to representing names and terms in their most accurate and original forms, including their native scripts. To meet this standard, RILM editors often relied on a much simpler tool: the IBM Selectric typewriter, which allowed for manual switching between typeballs to produce various fonts and writing systems that the mainframe could not yet support.
The creation, distribution, and deployment of the Adaptive Use Musical Instrument (AUMI) software represents a project that redefines our understanding of music—its creation, its meaning, and who can make it. AUMI also serves as a broader invitation to embrace innovative thinking beyond the realm of music, challenging traditional notions of normativity, difference, and democratic social relations. The existence of AUMI and the new social dynamics it encourages underscore the significant influence of disability rights and justice advocates, highlighting their impact across diverse social and cultural spheres.
As a digital instrument available for free download, AUMI fosters democratic access to music making. It allows individuals who were previously excluded from composing and performing music to generate a wide range of sounds by controlling a visual cursor through eye, head, hand, and body gestures. AUMI’s technology can track even the smallest body movements, such as eye or chest movements from breathing, enabling users with limited voluntary mobility to create notes, chords, rhythms, and melodies using an apparatus that registers the slightest degrees of motion. When programmed to reduce sensitivity to motion, AUMI supports music composition and performance by individuals with active involuntary movements. This adaptability has significant implications for disability and social justice, highlighting its broader impact on inclusivity and access.
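The adjustable-sensitivity idea described above can be illustrated with a toy sketch. This is not AUMI's actual code; the magnitudes and threshold values are invented for illustration.

```python
# Toy illustration of adjustable motion sensitivity: a movement triggers a
# sound only when its magnitude crosses a threshold. Lowering the threshold
# serves users with very limited voluntary mobility; raising it filters out
# active involuntary movements.

def triggered(movements, threshold):
    """Return the indices of movements large enough to trigger a sound."""
    return [i for i, magnitude in enumerate(movements) if magnitude >= threshold]

movements = [0.02, 0.30, 0.05, 0.80, 0.01]  # made-up motion magnitudes

print(triggered(movements, threshold=0.25))  # larger gestures only -> [1, 3]
print(triggered(movements, threshold=0.04))  # higher sensitivity -> [1, 2, 3]
```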
A demonstration video of AUMI.
By overcoming the limitations of outdated technologies and conventions, AUMI creates new opportunities for a diverse range of individuals to engage in music-making. It paves the way for the creation of innovative musical sounds and fosters new social connections among musicians. In doing so, AUMI frees artistic expression from the physical and social constraints that have defined Western art music, allowing for creativity beyond conventional norms.
Instead of perceiving disability as an embarrassing impairment or a deficiency to be fixed or reluctantly accommodated, the disability rights movement embraces the value and potential of difference. It draws attention to the harm caused not just to individuals, but to society, by narrow definitions of normalcy and normativity. Disability activism reveals how framing able-bodiedness as the standard leads to artificial, arbitrary, and irrational exclusions that misallocate resources, waste talents, and stifle creativity, invention, and innovation.
This according to “AUMI as a model for social justice” by George Lipsitz, Improvising across abilities: Pauline Oliveros and the Adaptive Use Musical Instrument, ed. by Thomas Ciufo, Abbey L. Dvorak, Kip Haaheim, et al. (Ann Arbor: University of Michigan Press, 2024, 47–63; RILM Abstracts of Music Literature, 2024-9475).
The garment is a body instrument that emits musical sounds when the wearer moves in it and also triggers a haptic vibration response. It emulates the vibrations a musician feels while playing an instrument, as well as the emotional response experienced by both the musician and a performer such as a whirling dervish.
The construction of the dress involves a variety of sensors that determine how the wearer’s movements trigger sound: gyroscopes track the rotation of the dress, accelerometers measure its speed as it turns, and flex sensors trigger sounds when the arms are held in certain positions.
The sound design component relies on organic sound samples of the classical Turkish ṭanbūr, recorded by a musician and manipulated in computer music software. This gives the garment a unique edge: it functions as a digitized representation of an instrument that is activated by motions of the body. The sounds are triggered by algorithms built in Cycling ’74’s Max software; these patches require the wearer’s movement to cross a threshold before a sound is triggered.
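The trigger logic described above can be sketched in a few lines. The real version runs as Max patches; the thresholds, units, and sample-bank names below are invented for illustration.

```python
# Hypothetical sketch of the dress's trigger logic: each sensor reading is
# compared against its own movement threshold, and a crossing selects a
# tanbur sample bank to play. All values and names here are assumptions.

SPIN_THRESHOLD = 1.5   # gyroscope rotation rate, assumed rad/s
FLEX_THRESHOLD = 0.7   # normalized flex-sensor reading, assumed 0..1

def select_samples(spin_rate, flex_left, flex_right):
    """Map sensor readings to the sample banks that should be triggered."""
    samples = []
    if abs(spin_rate) >= SPIN_THRESHOLD:
        samples.append("tanbur_drone")
    if flex_left >= FLEX_THRESHOLD:
        samples.append("tanbur_pluck_low")
    if flex_right >= FLEX_THRESHOLD:
        samples.append("tanbur_pluck_high")
    return samples

print(select_samples(spin_rate=2.0, flex_left=0.9, flex_right=0.1))
# ['tanbur_drone', 'tanbur_pluck_low']
```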
This according to “Dervish sound dress: Odjevni predmet sa senzorima koji emitiraju zvuk i haptičkim odzivom/The dervish sound dress: A garment using sensors that emit sound and haptic feedback” by Hedy Hurban, an essay included in Muzika–nacija–identitet/Music–nation–identity (Sarajevo: Muzikološko Društvo Federacije Bosne i Hercegovine, 2020).
Video documentation of the dervish sound dress is here.
Gamelunch is a sonically augmented dining table that exploits the power and flexibility of physically-based sound models towards the investigation of the closed loop between interaction, sound, and emotion.
Continuous interaction gestures are captured by means of contact microphones and various force transducers, providing data that are coherently mapped onto physically-based sound synthesis algorithms. While performing usual dining movements, the user encounters contradicting and unexpected sound feedback, thus experiencing the importance of sound in the actions of everyday life.
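The sensor-to-synthesis mapping described above can be illustrated with a toy sketch. This is not the Gamelunch implementation; the parameter names and ranges are assumptions.

```python
# Toy sketch of mapping a continuous force reading onto the control
# parameters of a physically based friction-sound model: the raw reading is
# clamped and scaled so that pressing or dragging harder on the table
# surface drives the model's normalized pressure and rubbing velocity.

def map_force(force, force_max=10.0):
    """Scale a raw force reading (assumed 0..force_max newtons) onto
    normalized friction-model parameters."""
    x = max(0.0, min(force / force_max, 1.0))  # clamp to [0, 1]
    return {"pressure": x, "velocity": 0.2 + 0.8 * x}

print(map_force(5.0))   # mid-range force -> mid-range model parameters
print(map_force(99.0))  # out-of-range readings saturate at the maximum
```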
SMUG is a system for generating lyrics and melodies from real-world data, in particular from academic papers.
The developers of SMUG wanted to create a playful experience and establish a novel way of generating textual and musical content that could be applied to other domains, in particular to games.
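A toy flavor of the idea can be sketched as follows; SMUG's actual pipeline is more elaborate, and this sketch is our own illustration of generating lyric-like text from a paper's words, not the authors' method.

```python
import random

# Build a first-order Markov chain over the words of a paper abstract and
# walk it to produce a candidate lyric line. The abstract text below is
# invented for illustration.

def build_chain(text):
    words = text.lower().split()
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def generate_line(chain, start, length=6, seed=0):
    rng = random.Random(seed)  # seeded so the output is reproducible
    line = [start]
    for _ in range(length - 1):
        choices = chain.get(line[-1])
        if not choices:
            break
        line.append(rng.choice(choices))
    return " ".join(line)

abstract = ("we present a system that generates lyrics and melodies "
            "from academic papers and we evaluate the system with users")
print(generate_line(build_chain(abstract), "we"))
```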
This according to “SMUG: Scientific Music Generator” by Marco Scirea, Gabriella A. B. Barros, Noor Shaker, and Julian Togelius, a paper included in Proceedings of the Sixth International Conference on Computational Creativity (Provo: Brigham Young University, 2015, pp. 204–211).
Folkways in Wonderland (FiW) is a cyberworld for musical discovery with social interaction, allowing avatar-represented users to explore selections from the Smithsonian Folkways world music collection while communicating through text and audio channels. FiW is built on Open Wonderland, a framework for creating collaborative 3D virtual worlds.
FiW is populated with track samples from Folkways Recordings. Since acquiring the label in 1987, Smithsonian Folkways has expanded and digitized the Folkways collection while enhancing and organizing its metadata, all of which are now available electronically.
FiW is collaborative: multiple avatars can enter the space, audition track samples, contribute their own sounds (speech or other) to the soundscape, and also communicate through text chat. Nearby users can hear music together, as well as hear and see each other. Wonderland also provides in-world collaborative applications, such as a shared web browser or whiteboard. Thus users are provided with a real-time, immersive, audiovisual representation of the virtual sociomusical environment, together with multiple means of communicating within it.
The book discusses the visual programming language for music and multimedia known as Max. After more than two decades of development and application, Max has become a sort of international lingua franca in practice-oriented music, art, and media institutions. A comprehensive cultural-historical survey is presented, in which the software figures as the product of a specific sphere of aesthetic practice that, in turn, gives rise to innovative production structures. The analysis thus focuses on the reciprocal influence of technological and artistic production.
Below, a demonstration of Percussa AudioCubes, an electronic musical instrument that can control Max/MSP patches via an OSC (Open Sound Control) server.
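To give a flavor of what an OSC message looks like on the wire, here is a minimal encoder using only the standard library. The address and value are invented for illustration and are not AudioCubes' actual protocol.

```python
import struct

# Minimal OSC message encoder: OSC strings are null-terminated and padded
# to a 4-byte boundary, followed by a type-tag string (",f" = one float)
# and the big-endian float argument itself.

def osc_pad(data: bytes) -> bytes:
    """Null-terminate and pad to the 4-byte boundary OSC requires."""
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Encode an OSC message carrying a single float argument."""
    return osc_pad(address.encode()) + osc_pad(b",f") + struct.pack(">f", value)

packet = osc_message("/cube/1/distance", 0.5)
# The packet could then be sent over UDP to the port on which a Max patch
# (e.g. via the [udpreceive] object) is listening.
```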