The Science of Bass Drops!

Video

Edit April 3, 2014: Hey everyone! Hope no one minded our little April Fool’s joke. That said, we’d love to actually make this video. Who wants to do some research? (We do.)

In this video, we explore the science behind bass drops. Why are they so popular in electronic music? Do we respond to pitch the same way, cognitively, as we do to dynamics? Is it the tension and release? Does it have to do with the way the pitch drops? Are we, in fact, evolutionarily inclined to love the bass? Or, at least, to respond to it? Join us as we find out. This video is best listened to on Beats headphones or the phattest sub you own. Please don’t use your MacBook speakers.


A shout-out to producer Evan Scott for providing the music for this week’s episode: https://soundcloud.com/evanscottproducer

NAMM, Loudness Wars, and Grammys (but not in the way you think): Weekly Roundup for January 19-26

We’re trying something new here at Science of Music: once a week, we’ll give you the low-down on all things at once science-y, tech-y, and music-y in the news. Watch this space for more.

NAMM 2014

We’re keeping our eye on the “best of” lists and products coming out of NAMM, since we can’t actually be there…which we’re still getting over for more reasons than one. (I mean, just for the weather alone, right?) XLR8R has a take that covers the good, the bad, and the weird, while Line 6 has created an unholy guitar-amp-Bluetooth-speaker-iOS-integration combo.

But what we really, really want is this right here: Korg has announced a kit that lets you build your own MS-20 synth. The analog, monophonic synth comes pre-disassembled (which is a little sad, because some of the fun is tearing something apart), giving you the chance to put it together yourself. For the ultimate consumer-tinkering-friendly experience, no soldering or knowledge of schematics is required. The MS-20 kit is expected out in March for around 1,400 USD.

Breaking Genre

British singer Katherine Jenkins says her record-sales game is too strong to ignore. She claims her “crossover” pop-classical style of singing has become so popular that it’s “becoming its own thing.” In fact, the Telegraph is referring to it as the “crossover” genre. To this, we say: what defines a musical genre, anyway?

A 1981 study by Franco Fabbri defines genre as “a set of musical events (real or possible) whose course is governed by a definite set of socially accepted rules.”

More recently, companies like The Echo Nest (which supplies Spotify with data) are mining for these rules with, according to their website, over 35 million songs and over a trillion data points. With user data refining such an enormous machine, will the algorithm become the ultimate genre codifier?

A quick borrowing of my sister’s Spotify account turned up a biography that calls Jenkins “classical,” but her related artists include singers like Sarah Brightman, Andrea Bocelli, and Charlotte Church, all of whom are known for their pop take on classical singing. Then again, the list also included serious classical musician Howard Blake and non-classical songwriter Emmy Rossum.

Crossover as a genre? Maybe.

Peace in Our Time

Hugh Robjohns of Sound on Sound covers the protracted end of the “Loudness Wars” in the magazine’s February issue. Mastering engineer Bob Katz declared an end to the wars at AES in October, but how will we keep the peace?

In the noise of modern society, everyone is clamoring to be heard. Recorded audio in particular has been trying to “out-loud” itself for some time now, which has led to lost dynamic range, over-compression, and just plain bad sound. But with new technology that puts a smarter limit on audio, whether broadcast or streamed, the war may be over. The rub? Overly loud mixes will probably end up sounding “feebler” over the new system. So shut up or sound bad.

Dead Musicians’ Society

The Providence Journal ran a commentary piece about the importance of music education in schools, while an upstate New York music teacher received the first-ever Grammy for music education.

Of note in the PJ commentary is the idea that musicianship is not a 21st century skill. Of course not: it’s a 23rd century skill. Who else is going to teach our cyborg overlords how to play the violin? They’re going to want to be proper gentlemen, after all.

Got a tip? Send us a message at scienceofmusicnyu@gmail.com, or post it to our Facebook page.

Introduction to Resistors

Video

In a continuation of our Introduction to Electronic Components series we present (drum roll please) resistors!

Resistors…resist. It’s what they do. Well, they don’t resist everything; they won’t help you resist the temptations of the dark side, for example. But these little guys are useful components when it comes to regulating the amount of current you want flowing through your circuit.
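To put a number on that “regulating”: the governing relationship is Ohm’s law, V = IR. Here’s a minimal sketch (our own illustration, not from the video, with made-up example values) of the classic use case, sizing a current-limiting resistor for an LED:

```python
# Ohm's law: V = I * R, so R = V / I.
# Example: choose a current-limiting resistor for an LED.
# All values below are illustrative assumptions, not from the video.

supply_voltage = 9.0        # volts, e.g. a 9 V battery
led_forward_voltage = 2.0   # volts dropped across a typical red LED
target_current = 0.020      # amps (20 mA), a safe LED operating current

# The resistor must drop whatever voltage the LED doesn't.
voltage_across_resistor = supply_voltage - led_forward_voltage

# R = V / I
resistance = voltage_across_resistor / target_current
print(f"Use roughly a {resistance:.0f} ohm resistor")  # ~350 ohms
```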

Interested in more? Check out our full playlist here.

A New Project: DIY Electric Slide Guitar

Video

Our compatriots never cease to amaze us.

Student and guitar pro Adam November guests on this episode of the Science of Music to show us all his homemade electric slide guitar! This DIY project was made for an acoustics class at NYU’s music technology program. Watch as Adam shows off his creation and explains how you can make your own guitar from materials easily purchased at your local hardware store.

The Science of Music Gets Its Own YouTube Channel!

Video

Hello Blogosphere!

These are your friendly music-to-science emissaries from NYU MARL announcing a new YouTube channel specifically for the Science of Music!

In this video we outline our ongoing mission: to bridge the world of music with those of engineering, science, and technology. We also give you a preview of what’s to come, so tune in for a weekly demonstration or explanation from New York University’s Music and Audio Research Laboratory!

DIY: Graphite and Paper Mixer

Once you grasp the concepts behind your gear, you can translate that knowledge into making your own, albeit much simpler, versions of that equipment. This video is a short how-to guide and demonstration for using graphite pencils, paper, and wires to make a mixer. With less than $5 worth of materials, you too can make a basic mixer!
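Why does pencil graphite work here? Pencil lead conducts, but resistively, so a drawn trace behaves like a resistor, and a passive mixer is really just several resistors meeting at a common output node. Here’s a minimal sketch of that idea (our own illustration with made-up values, not measurements from the video):

```python
# A passive resistive mixer: each input feeds a resistor, and all
# resistors meet at a common output node. With no load on that node,
# Kirchhoff's current law gives the output voltage below.
# All values are illustrative, not measured graphite resistances.

def passive_mix(input_voltages, resistances):
    """Unloaded output voltage of a resistive summing node (KCL):
    sum((v_i - v_out) / r_i) = 0  =>  v_out = sum(v_i/r_i) / sum(1/r_i)."""
    weighted = sum(v / r for v, r in zip(input_voltages, resistances))
    total_conductance = sum(1 / r for r in resistances)
    return weighted / total_conductance

# Two instantaneous signal samples, mixed through two equal graphite traces:
print(passive_mix([1.0, 0.0], [10_000, 10_000]))  # 0.5 -- the average

# Drawing a longer (higher-resistance) trace on one input turns it down:
print(passive_mix([1.0, 0.0], [30_000, 10_000]))  # 0.25 -- input 1 attenuated
```

The longer the trace you draw, the higher its resistance, which is exactly how a graphite-and-paper mixer can give you per-channel level control.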

Disclaimer: some pencils were harmed in the making of this video.

Credits:

  • Directed by Langdon Crawford
  • Edited by Sarah Streit
  • Theme Music by Tate Gregor
  • Audio loops collected by Nick Dooley
  • Produced with support from The National Science Foundation

Can Speakers be Used as Microphones?

A door once opened can be stepped through in either direction…

Okay, we promise that we’re serious people when we’re not making Doctor Who references (but we are never not making Doctor Who references, so…paradox?). This video shows how a speaker, once removed from its enclosure, can be used as either a speaker or a microphone, thus exhibiting the beauty of transduction! Specifically, it’s a good example of how electromagnetic transduction works in both directions: electrical-to-acoustic and acoustic-to-electrical.
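The reason one gadget can play both roles is that a single number, the voice coil’s force factor Bl (magnetic flux density times length of wire in the gap), governs both directions of transduction. A minimal sketch, using an assumed, hypothetical driver value:

```python
# Electromagnetic transduction in a voice coil works both ways, and the
# same force factor Bl links the two directions:
#   speaker mode:    force F = Bl * i  (current in -> force on the cone)
#   microphone mode: emf   e = Bl * u  (cone velocity in -> voltage out)
# Bl below is an assumed, plausible value for a small driver.

Bl = 5.0  # tesla-meters (force factor of a hypothetical small woofer)

# Speaker mode: 0.1 A of signal current produces a mechanical force.
current = 0.1  # amps
force = Bl * current
print(f"Speaker mode: {force:.2f} N of force on the cone")  # 0.50 N

# Microphone mode: incoming sound moves the cone at 1 cm/s.
velocity = 0.01  # meters/second
emf = Bl * velocity
print(f"Microphone mode: {emf * 1000:.0f} mV induced")  # 50 mV
```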

Credits:

  • Directed by Langdon Crawford
  • Voice: Tyler Mayo
  • Editing: Caitlin Gambill

Hearing and the Ear: An Introduction

We’re going to be frank: it troubles us when musicians don’t take care of their ears, because hearing is super important; it’s the basis of what we do. But, as important as hearing is, how many of us actually know how it works? Physically, mechanically, acoustically?


Let’s talk about how sound enters your ear. We have, of course, the external part of our ears. Without getting into it too deeply, this part of the ear channels sound vibrations into the ear canal. The ear canal, also known as the external auditory canal, leads from the outer ear to the middle ear. Incidentally, the ear canal itself has a resonant bias in the frequency range of 2 kHz to 7 kHz, which means that our ears are attuned to the frequencies of human speech.
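Where does that 2-7 kHz bias come from? The ear canal behaves roughly like a tube open at one end and closed at the other (by the eardrum), and such a tube resonates at a quarter wavelength. A back-of-the-envelope sketch, assuming a typical textbook canal length of about 2.5 cm:

```python
# A tube open at one end (the outside world) and closed at the other
# (the eardrum) resonates when its length is a quarter of the sound's
# wavelength: f = c / (4 * L).
# The canal length below is a typical textbook figure, not a measurement.

speed_of_sound = 343.0  # m/s in air at room temperature
canal_length = 0.025    # meters (~2.5 cm, a typical adult ear canal)

resonant_frequency = speed_of_sound / (4 * canal_length)
print(f"{resonant_frequency:.0f} Hz")  # ~3430 Hz, right in the 2-7 kHz band
```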

View of a normal tympanic membrane (Photo credit: Wikipedia)

As the ear canal channels this air fluctuation, it causes the tympanic membrane (illustrated right) to move. The membrane vibrates with the compression and rarefaction of the sound wave: moving inward with the compression phase, and outward with rarefaction.

Auditory Ossicles in the Middle Ear (Photo credit: Wikimedia Commons)

Diagrammatic longitudinal section of the cochlea (Photo credit: Wikipedia)

This in-and-out motion in turn causes the ossicles (three tiny bones in the middle ear: the malleus, incus, and stapes) to move. These bones act as complex levers that concentrate the force applied across the relatively large surface area of the tympanic membrane to suit the relatively small opening, the oval window, that leads into our inner ears. Specifically, the oval window opens into the cochlea.
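To put rough numbers on that force concentration: commonly cited textbook figures (our addition, not from this post’s sources) give the eardrum an effective area of about 55 mm² against roughly 3.2 mm² for the oval window, with the ossicular lever adding a factor of about 1.3:

```python
import math

# Pressure = force / area, so funneling roughly the same force from a
# large membrane onto a small window multiplies the pressure.
# These are approximate textbook figures, quoted for illustration only.

eardrum_area = 55.0      # mm^2, effective area of the tympanic membrane
oval_window_area = 3.2   # mm^2, area of the oval window
lever_ratio = 1.3        # mechanical advantage of the ossicular chain

pressure_gain = (eardrum_area / oval_window_area) * lever_ratio
gain_db = 20 * math.log10(pressure_gain)
print(f"~{pressure_gain:.0f}x pressure gain (~{gain_db:.0f} dB)")  # ~22x, ~27 dB
```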

The cochlea, an organ that looks kind of like a snail shell, is where the mechanical energy of the sound’s vibration is converted into a neural signal. The cochlea is hollow, filled with fluid and lots of different things that are anatomically fascinating but that we won’t really discuss in this post. One thing that we will talk about, however, is the basilar membrane, which is suspended in the cochlea.

Sinusoidal drive through the oval window (top)... (Photo credit: Wikipedia)

When sound waves enter the cochlea through the oval window, they set the fluid inside resonating, producing standing waves. This process decomposes complex sounds into their simplest, sinusoidal components. These standing waves settle at distinct locations along the basilar membrane, locations determined by the waves’ frequencies. As seen at right, the lower the frequency, the larger the region of the membrane the standing wave takes up, and vice versa. These sinusoidal standing waves cause the basilar membrane to move, and thus cause the hair-like structures on the organ of Corti to vibrate as well.
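This frequency-to-place mapping is regular enough that it has been modeled empirically. One widely used model, added here for illustration (it isn’t discussed in the post itself), is the Greenwood function for the human cochlea:

```python
# Greenwood's function (Greenwood, 1990) maps normalized position x
# along the basilar membrane (0 = apex, 1 = base near the oval window)
# to the characteristic frequency at that spot, for the human cochlea:
#   f(x) = A * (10**(a*x) - k),  with A = 165.4 Hz, a = 2.1, k = 0.88

def greenwood_frequency(x):
    """Characteristic frequency (Hz) at normalized position x in [0, 1]."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

# Low frequencies live near the apex, high frequencies near the base:
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f}: {greenwood_frequency(x):8.0f} Hz")
# runs from roughly 20 Hz at the apex up to ~20,000 Hz at the base
```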

Organ of Corti (Photo credit: Wikipedia)

The organ of Corti contains several layers of hair cells, and the nerve endings on these hair-like structures are where the actual transduction from mechanical energy to nervous impulse takes place. We talk about transduction possibly too much, but this is an important topic! Make a note here, because this is how a sound vibration is translated into a signal the brain can understand. This neural signal is sent to the brainstem and cerebral cortex, where the brain interprets it as sound.

The anatomy of hearing, like the study of the ear, is its own science, and we’ve just barely scratched the surface. Check out the links and some related articles for more!

Frequency Domain and EQ Basics

You see the frequency domain all the time when you use audio equalizers, but how clear are you on what it is, exactly? Learn how to master any EQ or spectral-analysis tool by watching this video on exactly what the frequency domain is, why it’s important in the music/audio field, and how, if you do any mixing whatsoever, you come across it all the time.

Once you’ve got this part down, you may be interested in learning about the actual method used to get a sound representation from the time domain to the frequency domain. If so, check out this link for more: http://zone.ni.com/devzone/cda/ph/p/i….
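And if you’d rather poke at it in code first, here’s a minimal sketch (ours, not from the video) using NumPy’s FFT to carry a signal from the time domain into the frequency domain:

```python
import numpy as np

# Build one second of a time-domain signal: a 440 Hz tone plus a quieter
# 880 Hz overtone, sampled at 8 kHz. (Made-up example values.)
sample_rate = 8000
t = np.arange(sample_rate) / sample_rate
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

# The FFT re-expresses the same signal in the frequency domain.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)

# The two largest peaks land exactly on our two component frequencies,
# which is what an EQ's spectral display is showing you.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks))  # [440.0, 880.0]
```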

Credits:

  • Written and Directed by Travis Kaufman and Nick Dooley
  • Produced with support from The National Science Foundation

How to Use an SPL Meter

This video explains how to use a Sound Pressure Level (SPL) meter. This is an essential tool for measuring the intensity (think amplitude or volume) of a sound. Intensity is different from our perception of loudness, which is why a specialized instrument (the SPL meter) is needed.
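For the curious, the SPL number itself is a logarithmic measure of pressure relative to a reference of 20 micropascals, roughly the threshold of human hearing. A minimal sketch of the conversion (the example pressures are our own illustrative values):

```python
import math

# Sound Pressure Level in decibels: SPL = 20 * log10(p / p0),
# where p0 = 20 micropascals, roughly the threshold of human hearing.
P0 = 20e-6  # pascals (reference pressure)

def pressure_to_spl(pressure_pa):
    """Convert an RMS sound pressure in pascals to dB SPL."""
    return 20 * math.log10(pressure_pa / P0)

print(f"{pressure_to_spl(20e-6):.0f} dB SPL")  # 0 dB: threshold of hearing
print(f"{pressure_to_spl(0.02):.0f} dB SPL")   # ~60 dB: normal conversation
print(f"{pressure_to_spl(2.0):.0f} dB SPL")    # ~100 dB: loud club territory
```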


Credits:

  • Written and Directed by Nick Dooley and Travis Kaufman
  • Produced with support from The National Science Foundation