Vibrations Surround Us: The Science of Music

Baker Tower fills the Hanover air with musical vibrations daily.

Dartmouth’s campus teems with music. The bells in Baker Tower chime every hour, Professor Brison hosts an array of performances for East Wheelock residents, and the Hopkins Center brings in esteemed artists from around the world. Many students get their feet wet with music classes, while others sing or play in ensembles on campus. Everyone else owns an iPod. Performers learn to master the technique necessary to play their instruments, and listeners grow to prefer certain genres. Yet music lovers often overlook the foundation of music: science.

Sounds are governed by principles of physics, from the vibration of strings on a violin to the measurements and dimensions that go into the acoustics of a concert hall or stadium. At the heart of the physics of music is the wave, an energy-carrying disturbance that travels through particles in a given medium. Additionally, the way we hear and process musical sounds requires a complex biological system.

A Brief History

The study of sound goes back thousands of years to ancient cultures, including the Chinese, Japanese, and Egyptian, that invented instruments in order to create music (1). Around 550 BC, Pythagoras of Samos invented the monochord, a soundboard with a single string stretched across a stationary bridge and a movable bridge (2). By shifting the movable bridge, he discovered that halving the string’s length raised the pitch by an octave. He later concluded that small whole-number ratios were the foundation for consonant sounds in strings, as well as in volumes of air in pipes and water in vases. The frequency ratios for a major scale are as shown in Table 1.

Table 1: Frequency ratios are from the first note in the scale to the nth scale degree.

Aristotle also studied strings and their vibrations about 200 years later. He observed a relationship between the string’s vibrations and the air and proposed that each small parcel of air strikes its neighbor (3). He also established that a medium is required for sound to travel. Until Galileo Galilei ushered in a new wave of experimental acoustics around the turn of the seventeenth century, followers of Aristotle did most of the work in this field, including Euclid, who wrote an “Introduction to Harmonics” (1).

In his 1687 Principia, Isaac Newton brought the new mathematics of calculus to the study of wave motion and, after studying fluid motion and the density of air, estimated the speed of sound at about 1,100 ft/s (1). In 1711, English trumpeter John Shore invented the tuning fork, a resonator that produces a single, pure tone when struck. In the 1800s, Christian Doppler studied sounds emitted from a moving source and concluded that waves compress when the source moves toward the listener and expand when it moves away (4). Georg Ohm applied an earlier theorem by Jean Baptiste Joseph Fourier to acoustics, leading to Ohm’s law for sound, which states that tones are composed of combinations of simple tones of different frequencies (1). The list of modern contributors goes on.

Vibrations

Vibrations are small oscillatory disturbances of the particles in a given body, such as water or a string. Regular vibrations have a defined period, T, the amount of time it takes to complete one cycle, while irregular vibrations, like those created by snare drums or a giant wave crashing on the ocean shore, do not (5). Regular, periodic vibrations thus have a definite frequency, ƒ = 1/T, measured in cycles per second.
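
To make the reciprocal relationship concrete, here is a minimal sketch in Python; the example period of 1/440 of a second is an illustrative choice (440 Hz being the common orchestral tuning pitch), not a value from the article:

# A regular vibration's frequency is the reciprocal of its period.
def frequency_from_period(period_s):
    # Convert a period in seconds to a frequency in hertz (cycles per second).
    return 1.0 / period_s

# A vibration that repeats every 1/440 of a second has a frequency of 440 Hz.
print(frequency_from_period(1.0 / 440.0))  # 440.0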

Waves

Figure 1: Tuning fork vibration: three snapshots within a second.

When struck, a tuning fork vibrates at a single, particular frequency. The constant movement of the prongs back and forth causes repeated sound impulses, which are really disturbances in the air (6). The prongs push on nearby air molecules, which in turn push on their neighbors. At any instant, some molecules are bunched together while others are spread apart, a phenomenon known as condensation and rarefaction of air (see Figure 1). This pattern’s propagation is a wave, and in the time of one vibration, the pattern advances one wavelength, λ. Once the frequency is measured, the sound wave’s velocity follows from v = ƒλ (1). The more common, sinusoidal depiction of a given wave comes from an oscilloscope, an apparatus that senses the pressure from a wave and translates it into an electrical signal. Mathematically modeling a wave’s propagation requires a second-order partial differential equation.
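
As a minimal sketch of the v = ƒλ relation, assuming the textbook value of roughly 343 m/s for the speed of sound in room-temperature air (a figure not given in the article):

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def wavelength(frequency_hz):
    # v = f * lambda, so lambda = v / f
    return SPEED_OF_SOUND / frequency_hz

# A 440 Hz tuning fork produces sound waves roughly 0.78 m long.
print(wavelength(440.0))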

Instruments produce a fundamental tone, the most audible pitch, as well as many additional frequencies above it, known as overtones. The fundamental and the overtones together form the series of harmonics that give a sound or pitch a certain quality. A note’s unique quality, or timbre, is based on the relative energies of the harmonics. In other words, each note sounds the way it does because the wave is a complex combination of frequencies.
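
This additive picture is easy to sketch in code. The snippet below sums sine waves at integer multiples of a fundamental; the harmonic amplitudes are illustrative, not measurements of any real instrument:

import numpy as np

def complex_tone(fundamental_hz, harmonic_amplitudes, duration_s=1.0, rate=44100):
    # Sum sine waves at integer multiples of the fundamental frequency.
    t = np.linspace(0.0, duration_s, int(rate * duration_s), endpoint=False)
    wave = np.zeros_like(t)
    for n, amplitude in enumerate(harmonic_amplitudes, start=1):
        wave += amplitude * np.sin(2 * np.pi * n * fundamental_hz * t)
    return wave

# Same fundamental, different harmonic energies: two different timbres.
bright = complex_tone(220.0, [1.0, 0.6, 0.4, 0.3])
mellow = complex_tone(220.0, [1.0, 0.2, 0.05, 0.01])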

Vibrating Strings

Musical instruments create sounds through the physical communication of a primary vibrator, a resonant vibrator, and a sound effuser (7). These components provide the initial vibrations, amplify them, and allow them to escape. String instruments like the viola or cello amplify the sounds made by bowing or plucking the strings. The strings are held tightly around the pegs at one end and the tailpiece at the other. When a string vibrates, the vibrations propagate down to the bridge, which carries them to the soundboard that spans the inside of the wooden body (8). The soundboard amplifies the vibration, and the sound waves emerge through the two f-shaped holes. Marin Mersenne, the seventeenth-century French mathematician known as the “father of acoustics,” devised three laws to calculate the frequency of the fundamental tone produced by a string. Together, they state that the frequency is inversely proportional to the length, proportional to the square root of the tension, and inversely proportional to the square root of the mass per unit length, as shown in Figure 2 (1). As a violin string is pulled more tautly by turning the peg toward the scroll, the pitch increases, and as the musician presses down on the string, its vibrating length decreases and the frequency increases. Cellos and basses have longer strings than violins and violas, which is why these larger instruments can play in a lower register. Consequently, a musician can produce the same pitch on different strings; this creates challenges in determining optimal “fingerings” when studying a piece of music (9).

Figure 2: Mersenne's Laws. T, M, and L represent tension, mass/length, and length of the string, respectively.


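Combining the three laws gives the standard formula ƒ = (1/2L)√(T/μ). A minimal sketch, with illustrative values rather than measurements from a real instrument:

import math

def fundamental_frequency(length_m, tension_n, mass_per_length_kg_m):
    # Mersenne's laws combined: f = (1 / 2L) * sqrt(T / mu)
    return math.sqrt(tension_n / mass_per_length_kg_m) / (2.0 * length_m)

# A 0.33 m string under 50 N of tension, weighing 0.4 g per meter.
print(fundamental_frequency(0.33, 50.0, 0.0004))  # about 536 Hz
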
While string instruments typically have four or six strings, the piano has about 230 (one, two, or three per note). An enormous amount of combined tension, up to 30 tons in a concert grand, compensates for the strings’ great length. Since the fundamental frequencies of the piano range from 27.5 Hz for the lowest A to about 4,186 Hz for the highest C, it would be impractical to make the A0 (lowest note) strings roughly 150 times longer than the C8 (highest note) strings (8). Instead, both the tension and the length vary for each note. The piano is also notable for its equal temperament. Ever since the days of Johann Sebastian Bach (1685-1750), the frequency ratio between adjacent keys has been made equal. This allows pieces to sound pleasing in any key. As a result, the frequencies are not perfectly aligned with the whole-number ratio patterns that characterize “consonance,” but the deviations are small enough that most listeners scarcely notice.
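
A minimal sketch of the equal-tempered scheme, assuming the modern tuning reference of A = 440 Hz (a convention the article does not specify):

def equal_tempered(base_hz, semitones):
    # Each of the twelve semitones in an octave shares the ratio 2**(1/12).
    return base_hz * 2 ** (semitones / 12)

# A perfect fifth is seven semitones; compare with the pure 3:2 ratio.
print(equal_tempered(440.0, 7))  # about 659.26 Hz
print(440.0 * 3 / 2)             # 660.0 Hz, the whole-number-ratio fifth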

Vibrating Air

The vibration of air in woodwind instruments comes in two categories: the edge tones of direct vibrations between the musician and the instrument, as in the flute and piccolo, and the vibration of a reed indirectly causing a sound, as in the clarinet and oboe (4). In both cases, the player supplies the energy to cause a vibration. When the player blows a note, the breath sets the molecules in motion and the sound moves forward within the column. Molecules in a column of air, just as in a string, have a frequency of free vibration and can be excited by matching frequencies. As with string instruments, the standing waves produced have a fundamental tone and many overtones. The pitch produced by many wind and brass instruments depends on the embouchure, the shape of the mouth and tongue when creating a note, as well as the keys pressed, both of which alter the size of the column through which the air travels.
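
A minimal sketch of the textbook air-column approximations (end corrections are ignored, and the 343 m/s sound speed and 0.6 m tube length are illustrative assumptions, not values from the article):

SPEED_OF_SOUND = 343.0  # m/s in room-temperature air

def open_pipe_fundamental(length_m):
    # A pipe open at both ends resonates at f = v / 2L.
    return SPEED_OF_SOUND / (2 * length_m)

def closed_pipe_fundamental(length_m):
    # A pipe closed at one end resonates at f = v / 4L, an octave lower.
    return SPEED_OF_SOUND / (4 * length_m)

print(open_pipe_fundamental(0.6))    # about 286 Hz
print(closed_pipe_fundamental(0.6))  # about 143 Hz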

Brass instruments are set up quite differently. The trombone has a slide that changes the length of the air column: the further out the slide, the lower the note. Trumpets have three valves, each of which adds tubing: the first lowers the pitch by a whole tone, the second by a half tone, and the third by a tone and a half (4). Again, the embouchure allows a trumpeter to create a wide range of notes.
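
A minimal sketch of how valve combinations lower the pitch, assuming ideal equal-tempered semitone ratios (real valve combinations run slightly sharp because tubing lengths do not add perfectly):

VALVE_SEMITONES = {1: 2, 2: 1, 3: 3}  # semitones each valve lowers the pitch

def valved_frequency(open_note_hz, pressed_valves):
    # Each pressed valve adds tubing; the semitone drops combine.
    semitones_down = sum(VALVE_SEMITONES[v] for v in pressed_valves)
    return open_note_hz * 2 ** (-semitones_down / 12)

# Valves 1 and 2 together lower an open 466.16 Hz note by three semitones.
print(valved_frequency(466.16, [1, 2]))  # about 392 Hz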

Lastly, the human voice relies heavily on vibrating air. The vocal anatomy has three key parts: the lungs for power, the vocal cords to vibrate, and the vocal tract to resonate the sounds. To produce different notes, a singer varies the tension in his or her vocal cords.

Basic Acoustics

Many instruments bring together different sounds to produce rich music in an ensemble. All of these waves travel through the space in which they are played, and the acoustical energy of the waves decreases with the square of the distance from their sources. For music created indoors, sound waves either reach a listener directly or after reflecting off other surfaces and losing some energy to them; the nature of the surface determines how much energy is reflected and how much is absorbed. Hard surfaces like marble reflect most of the acoustical energy, while soft surfaces like carpet absorb most of it (4). In addition, the flatness or curvature of a surface affects how it reflects sound. When designing a concert hall, reverberation time, the time it takes for a sound to decay to a millionth of its initial intensity, is a central consideration (4). The sound should be powerful and carry, but should not be reflected so strongly as to cause a muddle of auditory confusion. Symphony Hall in Boston, for example, has a reverberation time of 1.8 seconds. Even outside the concert hall, rooms must take reverberation and echoing into account, whether they are small conference rooms in which many voices may talk at once or large lecture halls designed for a single professor’s voice to carry.
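
Reverberation time can be estimated with Sabine’s classic formula, RT60 ≈ 0.161 V/A, where V is the room volume in cubic meters and A is the total absorption (each surface’s area times its absorption coefficient). The article does not give this formula, and the hall dimensions below are purely illustrative:

def reverberation_time(volume_m3, surfaces):
    # surfaces: iterable of (area_m2, absorption_coefficient) pairs
    total_absorption = sum(area * coeff for area, coeff in surfaces)
    return 0.161 * volume_m3 / total_absorption

# A hypothetical 12,000 m3 hall: hard walls plus an absorbent audience area.
print(reverberation_time(12000.0, [(3000.0, 0.1), (800.0, 0.9)]))  # about 1.9 s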

Human Interface with Music

Our eyes and ears pick up only a limited range of frequencies. We can see only the “visible light” section of the electromagnetic spectrum, and we can hear only frequencies between about 20 Hz and 20,000 Hz. Yet, within this range of audible sound, the brain can produce an enormous array of responses.

The Ear

After their journey through the air, sound waves have to travel through three different media in the regions of our ears before we fully process them: air in the outer ear, solid bone in the middle ear, and the labyrinth of fluid-filled canals of the inner ear (see Figure 3). The initial tube through which sound waves travel is the ear canal, which both collects sound and resonates certain frequencies, an effect that can create an “ocean” sound (1). Sound waves then exert pressure on the very sensitive eardrum, setting it into vibration. The three ossicle bones, the malleus, incus, and stapes, act together as a lever in the middle ear and amplify the pressure from the eardrum roughly 25-fold by the time the vibration passes through the oval window into the inner ear (1). The perilymph-filled inner ear is predominantly composed of the cochlea. The two chambers of the cochlea are separated by the basilar membrane, a thin strip of tissue lined with about 30,000 hair cells, each with many cilia. These hair cells transmit nerve impulses to the brain when they are bent by passing sound waves, converting mechanical wave energy into electrical signals (8).

Figure 3: Basic ear anatomy.

Hermann von Helmholtz explained how we recognize different pitches after sound waves propagate through our ears. He pictured “strings” on the basilar membrane that resonate at many different frequencies: long, low-tension strings at one end and short, high-tension strings at the other (4). When one of these strings picks up its frequency from the perilymph fluid and vibrates, it triggers a hair cell to send a nerve impulse. In the early 20th century, Georg von Békésy observed that a traveling wave moves along the basilar membrane and reaches its maximum amplitude at a particular point, where the hair cells fire and send a message to the brain.

Effects of Music on the Brain

We can hear sounds because the vibrations are processed through a receiver. Consider the clichéd example of a tree falling in a forest: the tree certainly causes vibrations, but sound is associated with how the brain interprets the disturbance that travels through the air (10). Once the vibrations reach the brain, the resulting electrical activity can be measured by electroencephalography (EEG). Schaefer et al. describe multiple EEG studies from the last ten years that found different electrophysiological responses in the brain depending on musical characteristics. These characteristics include “subjective loudness, beat or syncopation, complexity of harmonic structure, melodic events, large interval jumps, novelty, and the level of expectations answered or violated in the harmony, rhythm, timbre, and melody” (11).

Some studies have shown that classical music, particularly the work of Wolfgang Amadeus Mozart, has the right combination of characteristics to improve academic performance, an intellectual enhancement commonly referred to as the “Mozart effect.” Rauscher et al. first noted the effect in 1993: subjects listened to ten minutes of Mozart’s Sonata for Two Pianos in D Major (K. 448), a relaxation tape, or silence, and those who listened to the Mozart performed better on various spatial reasoning tasks (12). While all music and sound activates the parts of the brain associated with emotions, a UCLA neurobiologist who performed MRI scans on subjects found that Mozart’s music also activates parts of the brain that affect motor skills (13). Some studies have disputed the Mozart effect in the context of IQ testing, perhaps because the effect involves only temporary stimulation (13).

Not everyone has the ability to enjoy music, however. People who suffer from congenital amusia are more or less incapable of discerning different tones. This tone-deafness has been linked to the temporal lobe of the brain. Congenital amusia is specific to music and does not affect language processing (14). Those with the condition not only have trouble distinguishing intervals, melodies, and other pitch relations, but also have trouble detecting the natural contour of people’s voices. Its biggest hindrance is the inability to recognize songs and other environmental sounds, not to mention the inability to remember them or sing them back.

Scientists and musicians seem to view the impact of music on emotions differently. Leonard Bernstein put it memorably at a Young People’s Concert with the New York Philharmonic many years ago: “We’re going to listen to music that describes emotions, feelings, like pain, happiness, loneliness, anger, love. I guess most music is like that, and the better it is, the more it will make you feel the emotions the composer felt when he was writing” (15). To scientists, the performing ensemble produces an array of sound waves from its instrumental components, each of which produces one or more pitches with distinct timbres. To artists, the different chords, cadences, and other musical components form patterns that we associate with various emotions. Pieces written in minor keys, for instance, contain minor thirds, whose notes stand in a six-to-five frequency ratio, and these chords often convey sadness. The two views do not form a simple dichotomy; rather, together they fuel an ongoing effort to understand music.

Looking Ahead

While the physical fundamentals of sound have been well established over thousands of years of study, the neurological effects of music continue to puzzle and excite scientists around the world. Physicians have integrated music into medicine through “music therapy” to ease anxiety and other conditions, visual artists create illustrations using sound through the art of cymatics, and engineers work tirelessly to make the creation of music more accessible and more powerful (16, 17).

At the same time, digital technology has revolutionized the way we experience music. The iPod can store thousands of MP3 files and has far more capabilities than the 33-1/3 rpm records used just a few decades ago (4). Electronic instruments, too, are coming ever closer to reproducing the authentic sounds and timbres of traditional instruments, especially synthesizers and digital pianos preloaded with hundreds or thousands of sounds. Digital capabilities will continue to grow. Science has helped pave the way for a multifaceted, exciting generation of music.

References

1.     R. Stephens, A. Bate, Wave Motion and Sound (William Clowes and Sons Ltd, London, 1950).

2.     S. Caleon, R. Subramaniam, Physics Education. 42, 173-179 (2007).

3.     A. de Cheveigné, Pitch: Neural Coding and Perception. 24, 169-233 (2005).

4.     B. Parker, Good Vibrations: The Physics of Music (The Johns Hopkins University Press, Baltimore, 2009).

5.     D. Butler, The Musician’s Guide to Perception and Cognition (Schirmer Books, New York, 1992), pp. 15-31.

6.     Sound Waves and their Sources, Available at http://www.youtube.com/watch?v=cK2-6cgqgYA.

7.     Physics of the Orchestra, Available at http://www.sasymphony.org/education/ypc0607/ypc1_guide.pdf.

8.     J. Jeans, Science & Music (Dover Publications, New York, 1968).

9.     S. Sayegh, Computer Music Journal. 13(3), 76-84 (1989).

10.   D. Levitin, This is Your Brain on Music (First Plume Printing, New York, 2007).

11.   R. Schaefer et al., NeuroImage, in press (Available at http://www.sciencedirect.com/science/article/B6WNP-508PPSJ-1/2/00edfacdbc3a4682b7506ea7874ef2a1).

12.   F. Rauscher, G. Shaw, K. Ky, Nature. 365, 611 (1993).

13.   The Mozart Effect: A Closer Look, Available at http://lrs.ed.uiuc.edu/students/lerch1/edpsy/mozart_effect.html#The%20Mozart%20Effect%20Studies.

14.   J. Ayotte, I. Peretz, K. Hyde, Brain. 125, 238-251 (2002).

15.   Leonard Bernstein - Tchaikovsky 4, Available at http://www.youtube.com/watch?v=AQ3GpUldYvE&feature=related.

16.   H. Jenny, Cymatics: A Study of Wave Phenomena & Vibration (Macromedia Press, USA, 2001).

17.   L. Chlan, Heart & Lung: The Journal of Acute and Critical Care. 27(3), 169-176 (1998).
