The Science of Music’s Secret Origin Story

The truth is, the Science of Music wasn’t always a blog. Or a YouTube channel. And it definitely didn’t start out as a Twitter or Facebook page.

I’ll give you a moment to stifle your gasps.

This project actually began in 2010 as an after-school program. The four-part workshop was presented by NYU MARL’s Science of Music team at the Institute for Collaborative Education (ICE). We had then, as we do now, the same goals of spreading the joys of music and technology far across the land. That will never change.

And now you know our secret origin story. Things will never be the same.

Photos by Eric Humphrey and Pia Blumenthal.

Credits:

  • Editing and Music by Langdon Crawford
  • Produced with support from The National Science Foundation

DIY: Build Your Own Microphone!

This how-to video explains the process of building a dynamic microphone (which, incidentally, can also be used as a loud speaker) from a cup! This rudimentary audio transducer could be used as a quick project for a physics class exploring electromagnetism or an audio technology class exploring transduction. Or you could do it just for kicks.
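The cup microphone works by electromagnetic induction: as the coil moves through the magnet’s field, a voltage is induced per Faraday’s law (emf = −N·dΦ/dt). Here’s a rough back-of-the-envelope sketch; all the numbers below are made up for illustration, not measured from the actual project:

```python
import math

def peak_emf(turns, peak_flux_wb, freq_hz):
    """Peak voltage induced in a coil by a sinusoidally varying
    magnetic flux, from Faraday's law: emf = -N * dPhi/dt.
    For Phi(t) = Phi0 * sin(2*pi*f*t), the peak emf is N * Phi0 * 2*pi*f.
    """
    return turns * peak_flux_wb * 2 * math.pi * freq_hz

# Illustrative (invented) numbers: 50 turns, a tiny flux swing, a 440 Hz tone.
v = peak_emf(turns=50, peak_flux_wb=1e-7, freq_hz=440.0)
print(f"{v * 1000:.2f} mV")  # on the order of millivolts, hence the low output
```

Note how the induced voltage scales with frequency and with the number of turns; that’s one reason real dynamic mics use many turns of very fine wire.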

The fidelity of the completed project is not studio quality (if it were, our lives would be a whole lot cheaper), but it’s cool. And on the upside, you don’t need an EE degree to build it.

Credits:

  • Written and Directed by Travis Kaufman and Nick Dooley
  • Produced with support from The National Science Foundation

DIY: Graphite and Paper Mixer

Once you grasp the concepts behind your gear, you can translate that knowledge into making your own, albeit much simpler, versions of that equipment. This video is a short how-to guide and demonstration for using graphite pencils, paper, and wires to make a mixer. With less than $5 worth of materials, you too can make a basic mixer!

Disclaimer: some pencils were harmed in the making of this video.

Credits:

  • Directed by Langdon Crawford
  • Edited by Sarah Streit
  • Theme Music by Tate Gregor
  • Audio loops collected by Nick Dooley
  • Produced with support from The National Science Foundation

Can Speakers be Used as Microphones?

A door once opened can be stepped through in either direction…

Okay, we promise that we’re serious people when we’re not making Doctor Who references (but we are never not making Doctor Who references so…paradox?). This video shows how a speaker, once removed from its enclosure, can be used as either a speaker or a microphone, thus exhibiting the beauty of transduction! Specifically, this is a good example of how electromagnetic transduction can work in both directions (electrical to acoustic transduction and acoustic to electrical).

Credits:

  • Directed by Langdon Crawford
  • Voice by Tyler Mayo
  • Edited by Caitlin Gambill

Hearing and the Ear: An Introduction

We’re going to be frank: it troubles us when musicians don’t take care of their ears. Because hearing is super important seeing as it’s the basis of what we do. But, as important as hearing is, how many of us actually know how it works? Physically, mechanically, acoustically?

10.1371 journal.pbio.0030137.g001-L-A (Photo credit: Wikipedia)

Let’s talk about how sound enters your ear. We have, of course, the external part of our ears. Without getting into it too deeply, this part of our ears channels sound vibrations into the ear canal. The ear canal, also known as the external auditory canal, leads from the outer ear to the middle ear. Incidentally, the ear canal itself has a resonant bias in the frequency range of 2 kHz to 7 kHz, which means that our ears are attuned to the frequencies of human speech.
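That resonant bias is easy to ballpark: the ear canal behaves roughly like a tube closed at one end (the eardrum), whose fundamental resonance is f = c / (4L). A quick sketch, assuming a typical adult canal length of about 2.5 cm:

```python
def quarter_wave_resonance(length_m, speed_of_sound=343.0):
    """Fundamental resonance of a tube closed at one end (like the
    ear canal terminated by the eardrum): f = c / (4 * L)."""
    return speed_of_sound / (4.0 * length_m)

# A typical adult ear canal is roughly 2.5 cm long (assumed figure).
print(round(quarter_wave_resonance(0.025)))  # prints 3430, inside the 2-7 kHz bias
```

The simple closed-tube model lands right in the middle of the 2 kHz to 7 kHz range quoted above.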

View-normal-tympanic-membrane (Photo credit: Wikipedia)

As the ear canal channels this air fluctuation, it causes the tympanic membrane (illustrated right) to move. The membrane vibrates with the compression and rarefaction of the sound wave: moving inward with the compression phase, and outward with rarefaction.

Auditory Ossicles in the Middle Ear (Photo credit: Wikimedia Commons)

Diagrammatic longitudinal section of the cochlea. (Photo credit: Wikipedia)

This in-and-out motion in turn causes the ossicles (three tiny bones in the middle ear: the malleus, incus, and stapes) to move. These bones act as complex levers that concentrate the force applied over the relatively large surface area of the tympanic membrane onto the relatively small opening, the oval window, that leads into our inner ears. Specifically, the oval window opens into the cochlea.
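The mechanical advantage of this arrangement can be ballparked with common textbook figures (an eardrum area of roughly 55 mm², an oval window of roughly 3.2 mm², and an ossicular lever ratio around 1.3; these are approximate values, not measurements from this post):

```python
import math

def middle_ear_pressure_gain(drum_area_mm2=55.0, window_area_mm2=3.2,
                             lever_ratio=1.3):
    """Approximate pressure amplification of the middle ear: the
    area ratio of eardrum to oval window, boosted by the ossicular
    lever action. Returns (linear gain, gain in dB)."""
    gain = (drum_area_mm2 / window_area_mm2) * lever_ratio
    return gain, 20 * math.log10(gain)

gain, db = middle_ear_pressure_gain()
print(f"~{gain:.0f}x pressure gain, ~{db:.0f} dB")  # roughly 22x, about 27 dB
```

This impedance matching is the whole point of the ossicles: without it, most of the sound energy would simply reflect off the fluid-filled inner ear.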

The cochlea, an organ that looks kind of like a snail shell, is where the mechanical energy of the sound’s vibration is converted into a neural signal. The cochlea is hollow and filled with fluid and lots of different things that are anatomically fascinating, but we won’t really discuss in this post. One thing that we will talk about, however, is the basilar membrane, which is suspended in the cochlea.

Sinusoidal drive through the oval window (top)… (Photo credit: Wikipedia)

When sound waves enter the cochlea’s oval window, they resonate the fluid inside, producing standing waves. This process decomposes complex sounds into their simplest, sinusoidal components. These standing waves settle at distinct locations along the basilar membrane, locations determined by the waves’ frequencies. As seen at right, the lower the frequency, the more space on the membrane the standing wave takes up, and vice versa. These sinusoidal standing waves cause the basilar membrane to move and thus cause the hair-like structures on the organ of Corti to vibrate as well.
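This place-to-frequency mapping is often summarized with Greenwood’s function, a standard empirical fit for the human cochlea. A quick sketch (the constants are Greenwood’s published human-fit values):

```python
def greenwood_freq(x):
    """Greenwood's place-frequency map for the human cochlea:
    the characteristic frequency (Hz) at fractional distance x
    along the basilar membrane (0 = apex, 1 = base)."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

for x in (0.0, 0.5, 1.0):
    print(f"x={x:.1f}: {greenwood_freq(x):.0f} Hz")
# spans roughly 20 Hz at the apex up to about 20.7 kHz at the base
```

Note that the mapping is logarithmic in place, which is one reason musical pitch perception is itself roughly logarithmic.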

Organ of Corti (Photo credit: Wikipedia)

The organ of Corti contains several layers of hair cells, and the nerve endings on these hair-like structures are where the actual transduction from mechanical energy to nervous impulse takes place. We talk about transduction possibly too much, but this is an important topic! Make note here, because this is how a sound vibration is translated into a signal the brain can understand. This neural signal is sent to the brainstem and cerebral cortex, where it is interpreted as sound.

The anatomy of hearing, as well as the study of the ear, is its own science, and we’ve just barely scratched the surface. Check out the links and some related articles for more!

Frequency Domain and EQ Basics

You see the frequency domain all the time when you use audio equalizers, but how clear are you on what it is, exactly? Learn how to master any EQ or spectral-analysis tool by watching this video on exactly what the frequency domain is, why it matters in music and audio work, and how, if you do any mixing whatsoever, you come across it all the time.

Once you’ve got this part down, you may be interested in learning about the actual method used to get a sound representation from the time domain to the frequency domain. If so, check out this link for more: http://zone.ni.com/devzone/cda/ph/p/i….
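That time-to-frequency method is the Fourier transform. As a toy illustration of the idea, here is a naive discrete Fourier transform in pure Python that decomposes a one-cycle sine wave into its frequency bins (for real work you’d use an optimized FFT library such as numpy.fft):

```python
import cmath
import math

def dft(samples):
    """Naive discrete Fourier transform: time-domain samples in,
    complex frequency-domain bins out. O(n^2), for demonstration only."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# 8 samples of a sine that completes exactly one cycle: all the
# energy lands in bin 1 (and its mirror image, bin 7).
signal = [math.sin(2 * math.pi * t / 8) for t in range(8)]
spectrum = [abs(b) for b in dft(signal)]
print([round(m, 2) for m in spectrum])
```

A pure tone shows up as a single spike in the frequency domain; a complex sound is just a pile of such spikes, which is exactly what an EQ lets you boost or cut.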

Credits:

  • Written and Directed by Travis Kaufman and Nick Dooley
  • Produced with support from The National Science Foundation

How to Use an SPL Meter

This video explains how to use a Sound Pressure Level (SPL) meter. This is an essential tool for measuring the intensity (think amplitude or volume) of a sound. Intensity is different from our perception of loudness, which is why a specialized instrument (the SPL meter) is needed.
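For context on the numbers an SPL meter reports: dB SPL is defined relative to a reference pressure of 20 micropascals, roughly the threshold of human hearing. A minimal sketch of the conversion:

```python
import math

REF_PRESSURE_PA = 20e-6  # 20 micropascals, the standard dB SPL reference

def spl_db(pressure_pa):
    """Sound pressure level in dB re 20 uPa: SPL = 20 * log10(p / p0)."""
    return 20 * math.log10(pressure_pa / REF_PRESSURE_PA)

print(f"{spl_db(1.0):.0f} dB SPL")  # 1 Pa corresponds to 94 dB SPL,
                                    # a common calibrator level
```

Because the scale is logarithmic, every doubling of sound pressure adds about 6 dB, which is why the meter’s readings don’t track our intuitive sense of “twice as loud.”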


Credits:

  • Written and Directed by Nick Dooley and Travis Kaufman
  • Produced with support from The National Science Foundation