Exploring the world of data sonification


Illustration: Jeremy Leung

Just as we use text and images to better understand a set of data, sonification uses sound to represent information in new and useful ways.

While our eyes are great at quickly interpreting graphs, text, and pictures (a.k.a. visualizations), our sense of hearing is no slouch either. The human body’s ability to detect changes in the location, pitch, intensity, and timing of incoming sound means that hearing can be just as valuable as sight when it comes to understanding data. In this article, let’s explore the history of data sonification and how it’s being used at the forefront of research.

The Geiger counter

Any effective sonification approach needs three things: (1) an input that takes in data, (2) a specific technique or way to transform that data into non-speech sound, and (3) a human at the other end who can listen to and interpret that sound. One of the earliest and probably most recognizable uses of a data sonification technique brings us all the way back to 1908 with the Geiger counter. This device contains a small amount of gas that generates ions when it comes into contact with radiation. Each ionization event produces a small amount of electricity, which the device converts into audible clicks you can hear through a speaker. The more clicks, the more ions being created and the more radioactive the location is. Pretty helpful, I’d say!
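To make that event-to-click mapping concrete, here’s a minimal Python sketch that simulates random ionization events and renders each one as a short click in a WAV file. It’s purely illustrative — the event rate, click shape, and file name are all made-up assumptions, not a model of an actual Geiger counter circuit.

```python
import numpy as np
from scipy.io import wavfile

# Toy mapping: simulate ionization events as a Poisson process and render each
# one as a short click, the same basic data-to-sound mapping a Geiger counter makes.
SAMPLE_RATE = 44_100        # audio samples per second
DURATION_S = 5              # length of the rendered clip in seconds
COUNTS_PER_SECOND = 20      # assumed average event rate (more events = "hotter" location)

rng = np.random.default_rng(seed=0)
audio = np.zeros(SAMPLE_RATE * DURATION_S, dtype=np.float32)

# Draw random event times and drop a brief burst of decaying noise (a "click") at each one.
n_events = rng.poisson(COUNTS_PER_SECOND * DURATION_S)
event_starts = rng.integers(0, len(audio) - 100, size=n_events)
click = (rng.random(100).astype(np.float32) - 0.5) * np.linspace(1, 0, 100, dtype=np.float32)
for start in event_starts:
    audio[start:start + 100] += click

wavfile.write("geiger_clicks.wav", SAMPLE_RATE, audio)
```

Raising COUNTS_PER_SECOND is the audio equivalent of walking toward a more radioactive spot: the clicks simply pile up faster.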

Seismology and geophysics

While the Geiger counter creates sound from electrical signals, there are many other kinds of data that can undergo sonification. For example, vibrations under the earth’s surface can be measured by a device called a seismometer as a continuous waveform over time. The only difference from a musical recording is that these frequencies are way too low to be heard. However, transposing the whole waveform by speeding it up not only makes the frequencies audible, but also compresses the time axis so we can hear them over a shorter period of time! With the measurements sped up by a factor of a thousand or more, we can experience major seismic events in an entirely new way.
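As a rough sketch of how that audification works in practice, the snippet below takes a placeholder seismometer trace sampled at an assumed 40 Hz and simply writes it out as audio at 44.1 kHz, which speeds playback up by about 1,100x. A real workflow would load actual waveform data (for example with a library like ObsPy) instead of the synthetic noise used here.

```python
import numpy as np
from scipy.io import wavfile

# Audification: take a low-rate seismic waveform and simply play it back at an
# audio rate, which transposes it upward and compresses time in one step.
SEISMIC_RATE = 40           # assumed seismometer sampling rate in Hz
AUDIO_RATE = 44_100         # playback rate; speed-up factor = 44_100 / 40 ≈ 1,100x

# Placeholder data: in practice this would be hours of recorded ground motion
# (loaded with something like ObsPy) rather than synthetic noise.
seismic = np.random.default_rng(1).normal(size=SEISMIC_RATE * 3600).astype(np.float32)

# Normalize to the -1..1 range expected for floating-point WAV data.
seismic /= np.max(np.abs(seismic))

# Writing the samples out at the higher rate is the whole trick: an hour of data
# now plays back in roughly 3600 / 1102.5 ≈ 3 seconds, well inside the audible band.
wavfile.write("quake_audified.wav", AUDIO_RATE, seismic)
```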

Brain scans

Things can get even more complex than the two examples above once we get into parameter-based sonification, in which the final sound output relies on multiple streams of data that each control a particular sonic variable. Electroencephalograms (or EEGs for short) measure electrical activity in the brain over time and are commonly used in scientific research and medical diagnostics. The intensity, brain location, and rhythmic nature of these signals all carry valuable information, and each can be mapped to a different sonic parameter through data sonification.

For example, electrodes at different brain locations can each map to a particular pitch or spot in the stereo field, and moments of relative signal intensity (when things spike up or down) can trigger a sound event. Our powerful ears then do the rest, identifying spots in time when the sounds take on a certain rhythm that could correspond to abnormal brain activity. Recent studies have even shown benefits to using a sound-based approach over a more traditional visual one when it comes to quickly identifying epileptic seizures using EEG data.
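Here’s a toy parameter-mapping sketch along those lines: each hypothetical EEG channel is assigned its own pitch and stereo position, and a short tone is triggered whenever that channel’s signal spikes above a simple threshold. The channel names, pitches, and 3-sigma spike rule are all assumptions made for illustration, not a clinical method.

```python
import numpy as np
from scipy.io import wavfile

# Parameter-mapping sketch: each (hypothetical) EEG channel gets its own pitch and
# stereo position, and a short tone is triggered whenever its signal spikes.
EEG_RATE = 256                      # assumed EEG sampling rate in Hz
AUDIO_RATE = 44_100
CHANNELS = {                        # channel name -> (pitch in Hz, stereo pan 0=left..1=right)
    "Fp1": (330.0, 0.2),
    "Cz":  (440.0, 0.5),
    "O2":  (550.0, 0.8),
}

rng = np.random.default_rng(2)
seconds = 10
eeg = {name: rng.normal(size=EEG_RATE * seconds) for name in CHANNELS}  # placeholder signals

out = np.zeros((AUDIO_RATE * seconds, 2), dtype=np.float32)             # stereo output buffer
tone_t = np.arange(int(0.1 * AUDIO_RATE)) / AUDIO_RATE                  # 100 ms tone

for name, (pitch, pan) in CHANNELS.items():
    signal = eeg[name]
    threshold = signal.mean() + 3 * signal.std()                        # "spike" = 3 sigma above the mean
    spike_indices = np.nonzero(signal > threshold)[0]
    tone = 0.2 * np.sin(2 * np.pi * pitch * tone_t).astype(np.float32)
    for idx in spike_indices:
        start = int(idx / EEG_RATE * AUDIO_RATE)                        # map EEG time to audio time
        end = min(start + len(tone), len(out))
        out[start:end, 0] += tone[: end - start] * (1 - pan)            # left channel
        out[start:end, 1] += tone[: end - start] * pan                  # right channel

wavfile.write("eeg_sonified.wav", AUDIO_RATE, out)
```

Listening to the result, clusters of tones at one pitch and stereo position would point you toward a particular electrode, which is exactly the kind of pattern our ears pick up quickly.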

Image as sound

We can bridge the gap between sight and hearing even more by taking two-dimensional images and giving them a sonic representation. Image sonification involves creating systems that assign visual properties like color and saturation to musical parameters like pitches and chords, and trigger those events as someone “moves” through an image in real time (using a mouse cursor or by dragging a finger across a touch screen, for example). You can also apply similar methods to captured video as a way to provide sonic feedback about your surroundings and detect potential obstacles. When it comes to enhancing the experience of visual and physical environments for the visually impaired, research has found that these kinds of sonification approaches could hold real benefits.
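To give a flavor of what an image-to-sound mapping might look like in code, this sketch scans a synthetic color gradient from left to right, mapping each pixel’s hue to pitch and its brightness to loudness. The hue-to-frequency range is an arbitrary choice, and a real system would read the pixels under a user’s cursor in real time rather than sweeping automatically.

```python
import colorsys
import numpy as np
from scipy.io import wavfile

# Image-sonification sketch: walk a "cursor" left to right across an image and map
# each pixel's hue to pitch and its brightness to loudness. The image here is a
# synthetic gradient; a real system would read the pixel under the user's cursor.
AUDIO_RATE = 44_100
STEP_S = 0.05                                   # 50 ms of sound per pixel visited

height, width = 1, 200
image = np.zeros((height, width, 3))
image[0, :, 0] = np.linspace(0, 1, width)       # red ramps up from left to right
image[0, :, 2] = np.linspace(1, 0, width)       # blue ramps down from left to right

chunks = []
t = np.arange(int(STEP_S * AUDIO_RATE)) / AUDIO_RATE
for x in range(width):
    r, g, b = image[0, x]
    hue, _, value = colorsys.rgb_to_hsv(r, g, b)
    freq = 220.0 + hue * 660.0                  # hue 0..1 -> 220..880 Hz (an arbitrary mapping)
    chunks.append((0.3 * value * np.sin(2 * np.pi * freq * t)).astype(np.float32))

wavfile.write("image_scan.wav", AUDIO_RATE, np.concatenate(chunks))
```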

While visually representing information is obviously not going anywhere, there’s also no doubt that sonification can provide a whole new (and often valuable) perspective on things! Do you know of any other types of data sonification? Let us know in the comments below.



November 19, 2021

Matteo Malinverno is a New York-based music producer currently working on the Content team at Splice.