fox@fury
Audio Interfaces and Ubiquitous Computing (MindChimes Part II)
Tuesday, Jan 30, 2001
The human brain is an incredibly powerful pattern recognizer, especially when it comes to sound. Yes, a lot of it is in the hardware (the cochlea in the inner ear does frequency analysis in hardware, meaning you literally don't have to do Fourier analysis in your head), but a whole lot is not, including stereo separation (through a combination of phase displacement, relative amplitude, and frequency dampening; you can tell a sound is behind you because things sound different when they have to pass through your outer ear before entering the ear canal), threat identification, sonar (it's not just for bats and dolphins; we may not actively ping, but people can tell, with their eyes closed, whether they're 2 feet or 10 feet away from a featureless wall, just by the way the room sounds), and a whole slew of other preattentive tasks.
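
As an aside, the phase-displacement cue is simple enough to sketch in code. Here's a toy illustration of my own (not from any real system): it estimates a sound's bearing from the arrival-time difference between two ear-spaced microphone signals, finding the lag by cross-correlation. The 0.21-meter ear spacing and the sign conventions are assumptions made for the example.

    import numpy as np

    SPEED_OF_SOUND = 343.0   # meters per second, in air
    EAR_SPACING = 0.21       # assumed distance between the ears, in meters

    def interaural_delay(left, right, rate):
        """Delay of the right signal relative to the left, in seconds.
        Positive means the sound reached the left ear first."""
        corr = np.correlate(right, left, mode="full")
        lag = np.argmax(corr) - (len(left) - 1)   # best-matching lag, in samples
        return lag / rate

    def bearing(delay):
        """Angle off center, in radians, from sin(theta) = delay * c / spacing."""
        ratio = np.clip(delay * SPEED_OF_SOUND / EAR_SPACING, -1.0, 1.0)
        return np.arcsin(ratio)

    # Toy usage: a click that arrives 20 samples later at the right ear.
    rate = 44100
    click = np.zeros(1024)
    click[100] = 1.0
    delay = interaural_delay(click, np.roll(click, 20), rate)
    print(np.degrees(bearing(delay)))   # about 48 degrees toward the left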

Just like in visual search, commonly performed audio identification tasks can migrate from attentive to preattentive behavior. For example, upon hearing a phone ring, many people will automatically move to pick it up, without making a conscious effort to recognize that it is actually a phone that is ringing. Similarly, many people find themselves thinking of their spouse, roommate, parent, or child moments before they enter the house because, without realizing it, they heard the faint sound of that person closing their car door outside, or coming up the driveway, or what-have-you.

The above is a long-winded way of saying that the brain does a lot with sound, and it's not all hard-wired stuff. We learn to unconsciously recognize sounds that are meaningful to us, and ignore those that aren't.

Example: Way back in 1984 I was lucky enough to get a Mac 128K as my first computer. I would stay up long after my mother went to sleep, working and playing on it and, ever the courteous housemate, I'd wear headphones at night. Now, the Mac back then wasn't exactly the paragon of audio output that it is now, and it 'leaked' a lot. When it would read from the floppy drive, you could hear faint blips. When it was reading and writing to the RAM chips, you could hear faint but distinct frequencies. Almost everything the computer did had some effect on the audio-out, and while at first it was bothersome, after a few days or weeks I came to understand it, like a foreign language.

Watching the computer go through its paces, while also being able to 'hear its thoughts', I gained enough of an understanding that I could shut my eyes and know, to some degree, what the computer was doing. It sounded this way when it was starting up Microsoft Word, and it sounded that way when it was getting ready to print.

Of course, this was all accidental, both on my part and on the part of Apple's engineers, but my brain figured out how to interpret it anyhow, without any intentional effort from me. The key here is that, while in that case it was an accident, there's absolutely no reason we can't tap audio as a passive means of understanding our surroundings.

As we spend more and more time around electronic devices, we know less and less about our immediate surroundings, because these devices tend to have two modes: utterly silent or screaming for attention. Visually they may be more accommodating; your phone may show that you have voicemail waiting, or your computer may show that you have 12 new emails, but sonically they either scream or shut up.

"Bing!" You've got mail! "Riiinnggg!!" Pick up your cellphone! Electronics use sound 'in the moment' to give you temporal data right when you need it, and that's good, but they're bad at conveying status. If you want to know how much room you have on your credit card, you have to make an explicit query and get the information back. If you're getting low on hard drive space, you might not know until you try to save a file and get a 'disk full' error.

Ubiquitous computing isn't about putting circuitry into golf balls and pens; it's about conveying information between people and electronics (and consequently, often other people) using the modes of communication that people have evolved over millions of years: voice instead of keyboards, ambient presentation of data (wind, temperature, light) instead of explicit requests for common information, telling roughly what time it is by the nature of the light in the world instead of looking at a watch.
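
To make the status-channel idea concrete, here's a minimal sketch of my own devising (not an existing tool): it polls disk usage with Python's shutil.disk_usage and maps the free-space fraction onto a chime's pitch and frequency, so a filling disk gradually announces itself instead of waiting for a 'disk full' error. The play_chime function is a hypothetical stand-in for a real synthesizer, or for a MindChimes-style front-end.

    import shutil
    import time

    def play_chime(pitch_hz):
        # Hypothetical hook where real synthesis would happen.
        print(f"chime at {pitch_hz:.0f} Hz")

    def ambient_disk_monitor(path="/"):
        """Turn free disk space into an ambient cue: the fuller the
        disk, the lower-pitched and more frequent the chime."""
        while True:
            usage = shutil.disk_usage(path)
            free = usage.free / usage.total   # 0.0 (full) .. 1.0 (empty)
            play_chime(220 + 660 * free)      # fuller disk -> lower tone
            time.sleep(10 + 590 * free)       # fuller disk -> shorter gaps

    ambient_disk_monitor()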

Okay, once again I started out intending to talk about one thing and got sidetracked onto another. What I originally wanted to say was that Andrew York, the guy who wrote MindChimes (mentioned yesterday), wrote back and is excited about the idea, so when he gets back from his current business trip next week, he and I are going to talk about ways to adapt MindChimes and OceanSongs to be front-ends for ambient data.
