Illustration: Islenia Milien
The year 1877 is often credited as “the foundation of modern sound recording culture.”
Using Thomas Edison’s phonograph, sound waveforms etched into the grooves of a tinfoil cylinder could be played back audibly, reproducing the same vibrations that created them.
Edison’s extraordinary invention launched our ability to document countless hours of audio and to share the beauty of multicultural music globally. However, this approach starts to feel quite monotone once you consider all the ways sound can be, and has been, documented over time – not just on records, CDs, and MP3s, but on vases, sooted glass, and even chip bags and plants, with and without standardized systems, in ways that extend beyond a singular sense.
A brief lesson on how an ear hears
To understand the different ways sound can be captured, it’s helpful to take a brief look at the physiology of the ear and how a human actually hears – once a hotly contested debate among 17th- and 18th-century otologists. Today, we know that the basics of human hearing involve waves that travel from the outer ear into the inner ear.
To explain without getting too deep into oto-anatomy: a tiny piston-like bone called the stapes transmits these vibrations to the cochlea (in the inner ear), pushing against the fluid inside so the sound waves can travel through it. The cochlea splits these sounds into their various frequencies – high, mid-range, and low – and signals sent to the brain tell us that we are hearing something.
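As a loose analogy only – the cochlea does nothing this crude – a short Python sketch can illustrate the idea of splitting one signal into low, mid, and high frequency bands. The cutoff values here are arbitrary choices for illustration, not physiological ones:

```python
import numpy as np

def split_bands(signal, rate, low_cut=250.0, high_cut=4000.0):
    """Crudely mimic the cochlea: split a signal into low, mid,
    and high frequency bands by masking its FFT spectrum."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / rate)
    bands = {}
    for name, lo, hi in [("low", 0.0, low_cut),
                         ("mid", low_cut, high_cut),
                         ("high", high_cut, rate / 2 + 1)]:
        mask = (freqs >= lo) & (freqs < hi)
        bands[name] = np.fft.irfft(spectrum * mask, n=len(signal))
    return bands

# A test signal: 100 Hz (low) + 1 kHz (mid) + 8 kHz (high) tones.
rate = 44100
t = np.arange(rate) / rate
signal = (np.sin(2 * np.pi * 100 * t)
          + np.sin(2 * np.pi * 1000 * t)
          + np.sin(2 * np.pi * 8000 * t))

bands = split_bands(signal, rate)
# The three bands partition the spectrum, so summing them
# reconstructs the original signal.
assert np.allclose(bands["low"] + bands["mid"] + bands["high"],
                   signal, atol=1e-6)
```

Each band ends up holding roughly one of the three tones, which is the spirit of what the cochlea passes on to the brain.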
Humans are only able to hear sounds in the range of roughly 20 Hz to 20 kHz. Understandably, there is much wonder, and disagreement, surrounding how humans actually perceive sounds. This field of scientific study is known as psychoacoustics. As Paul Oomen wrote for the Red Bull Music Academy, “Ultimately, all sound that we perceive is psychoacoustic. As soon as sound passes through the ears, it stops being a physical phenomenon and becomes a matter of perception.”
From physical sound to perception: The first visual documentation of sound
How does the perception of sound become as or more important than the sounds themselves? In the rest of this article, I provide examples of how humans have documented sound using mechanics directly related to the ear, and then those that stretch beyond the realms of anatomy.
First, I think about Antonio Scarpa’s contribution to the research of oto-anatomy, specifically the inner-ear organs. His book Anatomical Observations of the Round Window (1772) gave the first in-depth description of that part of the ear, which suggests that we may be able to see through hearing and therefore engage with sound beyond one sense.
Were it not for the exploration of the ear’s anatomy, “humanity’s first recordings of its own voice” would not have been captured. I don’t mean audibly like Edison’s machine, but a visual representation of the voice – of sound.
Around 1853, Edouard-Leon Scott de Martinville invented the phonautograph, a device modeled on the mechanics of the human ear. It used “a funnel to concentrate sound waves onto an eardrum-like membrane with a stylus attached to its underside that trailed against a moving surface blackened with soot from an oil lamp [called lampblack],” leaving a visible trace behind (as pictured below).
Scott was “recording” music before recordings even existed. Following photography’s first successful capture of a visual image in 1826, Scott reasoned that if we could capture an image, why not capture the spoken word with an instrument that records the voice’s “tonality, its intensity, its timbre.”
In 1874, working on behalf of those who are hearing impaired, Alexander Graham Bell built the ear phonautograph (a precursor to his telephone) – “a second-generation Visible Speech machine that used an actual eardrum, attached to a stylus, to inscribe speech waves on a plate of sooted glass.”
Ten years prior, Graham Bell’s father, Melville Bell, developed a system called Visible Speech – “a system of phonetic symbols… to represent the position of the speech organs in articulating sounds.” At this time, “graphic inscription was known as the ‘universal language of science’… for its ability to visualize the waveforms of which all the world’s motions and sensory phenomena seemingly consisted,” as told in David Novak and Matt Sakakeeny’s Keywords in Sound.
Fast forward to 2014 and an invention inspired by the physiology of the ear that recovered sound using a chip bag. As mentioned before, sound is literally vibration. MIT researchers treated a bag of chips as a “visual microphone”: filming its minute, sound-induced vibrations with a high-speed video camera through soundproof glass, they extracted enough information from the movements to reconstruct the audio. And yes, the technique extends to other objects in the room, such as a plant.
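The core intuition can be sketched in a few lines of Python. This is a toy reduction, not the MIT team’s actual method (which tracks sub-pixel motion across the image, not overall brightness): we generate synthetic “frames” of a vibrating surface, then recover a one-dimensional signal by tracking each frame’s mean intensity over time:

```python
import numpy as np

def recover_signal(frames):
    """Recover a 1-D 'audio' signal from a stack of video frames
    by tracking the mean brightness of each frame over time.
    frames: array of shape (num_frames, height, width)."""
    intensity = frames.mean(axis=(1, 2))  # one value per frame
    return intensity - intensity.mean()   # remove the constant offset

# Synthetic stand-in for high-speed footage of a vibrating surface:
# a 440 Hz tone modulates the brightness of 20x20-pixel frames
# captured at 2,000 frames per second.
fps, duration, tone_hz = 2000, 0.1, 440
t = np.arange(int(fps * duration)) / fps
tone = np.sin(2 * np.pi * tone_hz * t)
frames = 128 + 5 * tone[:, None, None] * np.ones((1, 20, 20))

recovered = recover_signal(frames)

# The dominant frequency of the recovered signal matches the tone.
spectrum = np.abs(np.fft.rfft(recovered))
freqs = np.fft.rfftfreq(len(recovered), d=1 / fps)
print(freqs[spectrum.argmax()])  # → 440.0
```

The camera’s frame rate plays the role of an audio sample rate, which is why the real system needed high-speed video: to hear up to a frequency, you must film at least twice that fast.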
But still, how can we document sound once we expand our notion of perception so that the ear is no longer the only conduit for interpreting it? To really begin understanding this, we must think about the difference between hearing and perceiving.
From standard to graphic: The visual evolution of notation and perception
From the 1st century CE to today, music and the human voice have been captured beyond the literal vibrations of sound, beyond an audio recording, or even a visual waveform. But how?
This section asks us to embrace not a fixed capture of sound, but rather to indulge in the variation and interpretation of its transcription. Humans have been grappling with this for a long time through the study of psychoacoustics – at least since “Pythagoras first heard the sounds of vibrating strings and of hammers hitting anvils in the 6th century” BCE.
In the 13th century, long before the phonograph, we saw the development of standardized western notation in the form of the five-line staff. Standard notation provided a more fixed way of transcribing, translating, and passing down music. Initially, it was used mostly by monks and within the church system for the passage of songs. Later, composers and artists embraced it, developing and refining it further for their own uses.
There is some room for variation – standard notation can be realized with different instrumentation, changing the timbre of a piece, and the vigor of the player may introduce variations in its dynamics. This type of notation captures pitch, intensity, and rests. But western notation was not designed to accommodate all kinds of music, such as that of cultures built around microtonal instruments; other cultures have had to adjust to fit it. Overall, it leaves little room for error – or for the performer’s interpretation.
Electronic music and the embrace of graphic notation
What if we didn’t rely on standardization, or on traditional visual symbols to document music?
In the 1950s and ’60s, we saw the embrace of graphic notation by experimental, electronic, and computer musicians such as John Cage, Karlheinz Stockhausen, and Pauline Oliveros. The lack of standardization reflected the nature of the music and its composers, because nothing about what they were creating was typical. Graphic notation was often complemented by an actual recording or performance of a piece, and could exist as a standalone work or as a companion to one. It was not necessarily a transcription of sound, timbre, and intensity, but of the essence and feeling of a piece.
Graphic notation by contemporary musicians is a statement that five-line staff notation is not sufficient to capture essence, texture, and improvisation – or even pitch, because the instrumentation being used often exceeded the limits of human hearing. A graphic inscription of the voice or sound, or a recording of a piece, was not adequate either.
Derived not from a standard but from an individual, graphic notation is unconventional and nonconformist by nature. There is more room for improvisation, indeterminacy, and chance. While western notation is meant to be passed down from individual to individual, graphic notation was sometimes not meant to be passed down at all.
Graphic notation can also introduce participatory, conversational aspects between composer and performer, who can agree on a way to interpret a score for a performance. Or a score can be accompanied by written text.
Without standardization, graphic notation extends a level of trust to the performer, giving them agency over the composer’s piece. We see an embrace of unpredictability. No single consensus is possible, and therefore each performance becomes an iteration. And interpretation will differ from person to person, because each perceives sound differently, not merely in the way they read music. The agency of graphic notation is congruous with the autonomy of an experimental composer’s approach to creation.
As Galia Hanoch-Roe aptly states in their essay “Musical Space and Architectural Time: Open Scoring versus Linear Processes,” graphic scores allow “movement to the performer and [allow them] to move freely or randomly about the musical work. In such constructions, the function of the musical score changed from an object to be read by the performer into a process to be built.”
What graphic notation looks like
Some graphic notation combines traditional notation with a musician’s own, as in Ludwig Hirschfeld-Mack’s score for Dreiteilige Farbensonatine (Ultramarin-grün) (Three-Part Color Sonatina [Ultramarine-Green]), and in the first known piece of graphic notation, by the Renaissance composer Baude Cordier, mentioned here.
Ludwig Hirschfeld-Mack’s score for Dreiteilige Farbensonatine (Ultramarin-grün) (Three-Part Color Sonatina [Ultramarine-Green]) (1923), © Kaj Delugan
Some mix text with imagery, as with John Cage’s “Water Walk.”
John Cage’s score for Water Walk (1959)
Some are completely abstract and seem to create a unique language, as with many of Iannis Xenakis’ scores and Brian Eno’s graphic notation, which is not necessarily meant to be read or interpreted by another composer; it is, rather, a documentation of a feeling.
Iannis Xenakis’ graphic notation for Metastasis 1 (1953 – 1954) for a 61-piece orchestra
Brian Eno’s graphic notation for Music for Airports (1978)
Many contemporary composers, such as Lea Bertucci and Justin Frye, still actively use graphic notation in their practices as well, which I had the pleasure of highlighting in a recent zine I published. The example included here by Justin Frye (of PC Worship) was interpreted in 2010 at the Roulette Benefit Easy Not Easy Festival by a cast of nine talented musicians. An archive of the performance is available.
Justin Frye’s graphic score for AK47 (2010)
A look at ancient notation
People were trying to document music long before graphic notation, before western standardized musical notation, before Scott. Those notations captured essence, texture, and structure, and were – and still are today – unpredictable. Here we get an open level of interpretation, because we lack both standardized notation and recordings. These examples leave us today wondering, and deciphering, what their creators meant.
The Greek Seikilos epitaph is the earliest known complete piece of notated music. By deciphering the rises and falls of its accents and their coordination with pitches, experts have been able to reconstruct some sort of melody for it.
We also have ancient neumes, used more at an institutional level – churches, sacred spaces, etc. – such as the Armenian khaz. Each town and each monastery had its own interpretation of how to read the khaz in its manuscripts in terms of melodic patterns, tempo, and so on, all passed down through oral tradition and collective memory – relaying a bit about the character of each community. There was no wrong tradition, and the interpretations are still not settled. The beauty in analyzing these notations, some of which remain in use in certain practices, is that there is no one right way to interpret them.
With interpretation, we allow for happy accidents. There is room for ‘error,’ but also, inherently, no error is possible. As Cornelius Cardew has said, “The notation is more important than the sound. Not the exactitude and success with which a notation notates a sound; but the musicalness of the notation in its notating.”
At its most basic, the similarity across all these examples is humanity’s desire to document, to preserve.
As we gather everything we have just reviewed, let’s take a second to think back on the ear’s round window. Considering that we can engage with sound by seeing, why stop there? Perceiving extends beyond any one sense, or even two. When we think of sound beyond the ear, not only does the space sound inhabits grow, but an interpreter is able to reach beyond their expected bounds. Why separate these notions of hearing and perceiving at all?
February 25, 2021