Ear to the Sky: NASA’s New Soundcloud

NASA has a Soundcloud, and it’s every bit as awesome as you’d imagine.

Part historical archive and part sound-effects library, the page includes an entire set dedicated to JFK’s quotes about Apollo 11:

As well as snippets of sound from space. My top three favorites are:

This is one of a series of stellar light curves captured by the Kepler mission and converted to sound. Note its rhythmic, periodic components.

Radio waves or wind in the tundra?

And finally, Voyager captures a sine wave sweep along with Jupiter’s lightning.

There are many more sounds to be explored at the full Soundcloud page, including mission talk, rocket launches, and the strangely adorable sound of Juno saying hi in Morse code. Not only is this further proof that NASA knows what it’s doing when it comes to social media (who else follows the Curiosity rover on Twitter?), it’s also a fantastic new resource for composers and sound designers. But even going beyond that: NASA has provided a new way to experience our forays into the unknown. At a time when everyone could use a little inspiration, this is a wellspring. I hope it captures the public’s imagination.

Analog Vs. Digital: The Age-Old Question

Video

Can you really hear a difference between analog and digital recording? What about in synthesis? In a personified rap battle between analog and digital, who would win?

NYU students, having nothing better to do at our ivory tower of academia (I mean, it’s not like we live in one of America’s biggest commercial and cultural centers or anything) love contemplating age-old questions such as these. You may remember some freshmen who previously pondered whether it was better to auto-tune or not to auto-tune. And while these questions may prove inconclusive, the best part is, as always, in the debate.


So our freshmen rose to the occasion once again to bring forth this rousing performance: a call to arms for their fellow rap battlers. And a cry heard throughout the land did ring: Who won? Who lost? You decide!*

*No seriously, leave a comment below.

DIY: Build Your Own Microphone!

This how-to video explains the process of building a dynamic microphone (which, incidentally, can also be used as a loudspeaker) from a cup! This rudimentary audio transducer could be used as a quick project for a physics class exploring electromagnetism or an audio technology class exploring transduction. Or you could do it just for kicks.

The fidelity of the completed project is not studio quality (if it were, our lives would be a whole lot cheaper), but it’s cool. And on the upside, you don’t have to have an EE degree to build it.

Credits:

  • Written and Directed by Travis Kaufman and Nick Dooley
  • Produced with support from The National Science Foundation

Can Speakers be Used as Microphones?

A door once opened can be stepped through in either direction…

Okay, we promise that we’re serious people when we’re not making Doctor Who references (but we are never not making Doctor Who references, so…paradox?). This video shows how a speaker, once removed from its enclosure, can be used as either a speaker or a microphone, thus exhibiting the beauty of transduction! Specifically, this is a good example of how electromagnetic transduction can work in both directions (electrical-to-acoustic and acoustic-to-electrical).

Credits:

  • Directed by: Langdon Crawford
  • Voice: Tyler Mayo
  • Editing: Caitlin Gambill

Hearing and the Ear: An Introduction

We’re going to be frank: it troubles us when musicians don’t take care of their ears. Hearing is super important, seeing as it’s the basis of what we do. But, as important as hearing is, how many of us actually know how it works? Physically, mechanically, acoustically?

Anatomy of the ear (photo credit: Wikipedia)

Let’s talk about how sound enters your ear. We have, of course, the external part of our ears. Without getting into it too deeply, this part of our ears channels sound vibrations into the ear canal. The ear canal, also known as the external auditory canal, leads from the outer ear to the middle ear. Incidentally, the ear canal itself has a resonant bias in the frequency range of 2 kHz to 7 kHz, which means that our ears are attuned to the frequencies of human speech.
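You can get a rough feel for where that resonant bias comes from by modeling the ear canal as a simple tube, closed at one end by the eardrum. This is only a back-of-the-envelope sketch, and the ~2.5 cm canal length is a textbook approximation, not a figure from this post:

```python
# Estimate the ear canal's resonance by modeling it as a tube open at
# one end (the pinna) and closed at the other (the eardrum). Such a
# closed-open tube resonates at odd multiples of c / (4 * L).
# The canal length here is a rough textbook value (~2.5 cm).

SPEED_OF_SOUND = 343.0  # m/s, in air at roughly 20 °C
CANAL_LENGTH = 0.025    # m, approximate adult ear canal length

fundamental = SPEED_OF_SOUND / (4 * CANAL_LENGTH)
print(f"Fundamental resonance: {fundamental:.0f} Hz")  # ~3430 Hz
```

That lands right in the middle of the 2–7 kHz band mentioned above, which is part of why the outer ear boosts speech frequencies.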

View of a normal tympanic membrane (photo credit: Wikipedia)

As the ear canal channels this air fluctuation, it causes the tympanic membrane (illustrated right) to move. The membrane vibrates with the compression and rarefaction of the sound wave: moving inward with the compression phase, and outward with rarefaction.

Auditory ossicles in the middle ear (image: Wikimedia Commons)

Diagrammatic longitudinal section of the cochlea (photo credit: Wikipedia)

This in and out motion in turn causes the ossicles (three tiny bones in the middle ear: the malleus, incus, and stapes) to move. These bones act as complex levers to concentrate the force applied to the relatively large surface area of the tympanic membrane onto the relatively small opening, the oval window, that leads into our inner ears. Specifically, the oval window opens into the cochlea.
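The concentration of force described above is what lets airborne sound drive the fluid-filled inner ear. As a rough sketch of the numbers involved (the areas and lever ratio below are common textbook estimates, not values from this post):

```python
import math

# Rough middle-ear pressure gain: force collected over the large
# eardrum is delivered to the much smaller oval window, with a small
# extra boost from the ossicles' lever action.
# All three values are common textbook estimates.

EARDRUM_AREA = 55e-6       # m^2, effective tympanic membrane area (~55 mm^2)
OVAL_WINDOW_AREA = 3.2e-6  # m^2, stapes footplate / oval window (~3.2 mm^2)
LEVER_RATIO = 1.3          # approximate malleus-to-incus lever arm ratio

pressure_gain = (EARDRUM_AREA / OVAL_WINDOW_AREA) * LEVER_RATIO
gain_db = 20 * math.log10(pressure_gain)
print(f"Pressure gain: ~{pressure_gain:.0f}x ({gain_db:.0f} dB)")
```

A gain in the ballpark of twenty-odd times (roughly 27 dB) is what makes this impedance matching work; without it, most of the sound energy would simply reflect off the cochlear fluid.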

The cochlea, an organ that looks kind of like a snail shell, is where the mechanical energy of the sound’s vibration is converted into a neural signal. The cochlea is hollow, filled with fluid, and full of anatomically fascinating structures that we won’t really discuss in this post. One thing that we will talk about, however, is the basilar membrane, which is suspended in the cochlea.

Sinusoidal drive through the oval window (photo credit: Wikipedia)

When sound waves enter the cochlea’s oval window, they set the fluid inside into motion, producing standing waves. This process decomposes complex sounds into their simplest, sinusoidal components. These standing waves adhere to distinct locations along the basilar membrane, locations determined by the waves’ frequencies. As seen at right, the lower the frequency, the larger the amount of space on the membrane the standing wave takes up, and vice versa. These sinusoidal standing waves cause the basilar membrane to move and thus cause the hair-like structures on the organ of Corti to vibrate as well.
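This frequency-to-place mapping is often approximated with Greenwood’s function. Here’s a minimal sketch using the commonly cited human-cochlea constants (an assumption on our part; the post itself gives no numbers):

```python
def greenwood_freq(x):
    """Greenwood's frequency-position function for the human cochlea.

    x: position along the basilar membrane as a fraction of its
       length, measured from the apex (0.0) to the base (1.0).
    Returns the characteristic frequency in Hz at that position.
    Constants are the commonly cited human values.
    """
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

# Low frequencies map near the apex, high frequencies near the base
# (the oval-window end), spanning roughly the range of human hearing.
for x in (0.0, 0.5, 1.0):
    print(f"x = {x:.1f} -> {greenwood_freq(x):,.0f} Hz")
```

Running this gives characteristic frequencies from about 20 Hz at the apex up to about 20 kHz at the base, which is why the cochlea is often described as a biological spectrum analyzer.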

Organ of Corti (photo credit: Wikipedia)

The organ of Corti contains several layers of hair cells, and the nerve endings on these hair-like structures are where the actual transduction from mechanical energy to nervous impulse takes place. We talk about transduction possibly too much, but this is an important topic! Make note here, because this is how a sound vibration is translated into a signal the brain can understand. This neural signal is sent to the brainstem and auditory cortex, where it is interpreted as sound.

The anatomy of hearing and the study of the ear is a science in its own right, and we’ve just barely scratched the surface. Check out the links and some related articles for more!

Frequency Domain and EQ Basics

You see the frequency domain all the time when you use audio equalizers, but how clear are you on what it is, exactly? Learn how to master any EQ/spectral-analysis tool by watching this video on exactly what the frequency domain is, why it’s important in the music/audio field, and how, if you do any mixing whatsoever, you come across it all the time.

Once you’ve got this part down, you may be interested in learning about the actual method used to get a sound representation from the time domain to the frequency domain. If so, check out this link for more: http://zone.ni.com/devzone/cda/ph/p/i….
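To make the time-to-frequency conversion concrete, here’s a minimal sketch using NumPy’s FFT (our own example, not from the video): mix two sine tones in the time domain, then recover their frequencies the way a spectral analyzer or EQ display would.

```python
import numpy as np

# Time domain: one second of a 440 Hz tone plus a quieter 1000 Hz tone.
SAMPLE_RATE = 8000                         # samples per second
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE   # time stamps for one second
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

# Frequency domain: magnitude spectrum of the real-valued signal.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / SAMPLE_RATE)

# The two largest peaks land exactly at the tones we mixed in.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks.tolist()))  # [440.0, 1000.0]
```

The EQ curves you drag around in a mixing session are operating on exactly this kind of representation: energy per frequency, rather than amplitude per moment in time.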

Credits:

  • Written and Directed by Travis Kaufman and Nick Dooley
  • Produced with support from The National Science Foundation