MB: There are three reasons I can think of to justify work in sonification. I think of
them as a three-legged stool, since any one of
them can lead to the others.
One of the best uses of sonification has been
by Wanda Diaz Merced, an astronomer who was
rendered blind by an illness. She developed software that allowed her to listen to graphs. In her
TED talk, she describes quite compellingly how
she is now able to work at the same level she
worked at when she was sighted. What’s more,
by listening to graphs, she was able to detect
the presence of electromagnetic resonances
that no one had noticed in visual graphs, and her
sighted colleagues find they also like working
with the sonification software, as there are often
patterns in the data that are more readily heard
than seen. This is an example of sonification that
was created for reasons of accessibility (one
stool leg), yet brought about as a side effect of
new discoveries (another stool leg).
When I heard that Mickey Hart and George
Smoot were looking for sonifications for their
film, it was an exciting project because I knew
that they would have to work on a musical level
if they were to use them. Here, the purpose was
outreach (leg number three), with the primary
goal of engaging non-scientists.
There are any number of examples of people
gaining insights through the use of sound. Proof
of concept has been established. What strikes
me as more interesting at this point is how it can
evolve from a novelty to a standard method of
research and outreach, alongside visualization.
LS: Any exciting projects coming up in relation to this technique?
MB: In addition to the two seed grants from the National Academies, I’ve
been working with a meteorologist at Penn
State named Jenni Evans, who has had me sonify tropical storm data. We’re working on getting
funding to broaden the outreach of these to
large populations of students. I’ll also be working
with Joseph Schlesinger, an anesthesiologist at
the Vanderbilt Medical School, who is committed
to improving the acoustic environment of operating rooms and hospitals, and creating methods
of monitoring and alarms that are not stressful
or cluttered, as they too often are.
LS: Is there any type of dataset this doesn’t work for?
MB: Whether or not this works depends on what someone is looking for. If
you need to know specific values of data points,
then this is likely not the right tool for the job,
given that data values are typically transposed
to serve as pitches or other auditory parameters.
But if what you need is to understand the behavior of a function, then using the ears to track
dynamics can be extremely useful.
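The kind of mapping described here can be illustrated with a minimal sketch of parameter-mapping sonification. The function name, the 220–880 Hz range, and the linear scaling are all assumptions for the sake of illustration, not any particular tool’s method:

```python
def map_to_pitches(values, low_hz=220.0, high_hz=880.0):
    """Linearly rescale a data series into an audible frequency range (Hz).

    Exact values are lost in the transposition, but the shape of the
    data -- its rises, falls, and oscillations -- is preserved as pitch
    contour, which is what the ear tracks well.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for flat data
    return [low_hz + (v - lo) / span * (high_hz - low_hz) for v in values]

# The endpoints of the series land on the range boundaries:
pitches = map_to_pitches([0.0, 0.5, 1.0])  # [220.0, 550.0, 880.0]
```

The resulting frequencies could then drive any synthesis backend; the point of the sketch is only that individual values become hard to read off, while the overall behavior of the series becomes easy to hear.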
As far as creating sonifications for the purposes of research and discovery goes, time-based
datasets, particularly multi-variate datasets, are
good candidates for sonification. These leverage
the particular strengths of the auditory system.
Non-time-based sets present different kinds
of challenges. Images such as maps, which are
typically seen all at once, require a different
approach. Again,
this kind of thing is usually exploratory. Someone
studying the demographics of various regions
who wants to hear the relations of various characteristics (income levels, racial/ethnic/religious
populations, types of industry, etc.) may be well-served by an interactive representation whereby
an image can be touched or activated by mouse,
and different characteristics of a region can be
heard.
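As a purely hypothetical illustration of that interactive idea, a selected region’s characteristics might each be assigned a frequency, so that clicking a region sounds a “chord” summarizing it. The function name, the normalized 0–1 inputs, and the frequency mapping below are all invented for the sketch:

```python
def region_chord(characteristics, base_hz=110.0):
    """Map each named characteristic (normalized to 0..1) to a frequency.

    Each attribute gets its own pitch, spread over roughly two octaves
    above the base, so a region is heard as a chord rather than a list
    of numbers.
    """
    return {name: base_hz * (1.0 + 3.0 * value)
            for name, value in characteristics.items()}

# Clicking a region with two normalized attributes might yield:
chord = region_chord({"income": 0.5, "industry": 1.0})
# {'income': 275.0, 'industry': 440.0}
```

In an interactive map, each mouse click would call such a function for the selected region and play the resulting frequencies together, letting a listener compare regions by ear.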
Examples of Ballora’s work can be found at
www.markballora.com. “Rhythms of the Universe,” the film by Mickey Hart and George
Smoot that features Ballora’s sonifications, can
be found online.
Lauren Scrudato, Managing Editor