DURHAM, N.C. -- The use of 12-tone intervals in the music of many human cultures is rooted in the physics of how our vocal anatomy produces speech, according to researchers at the Duke University Center for Cognitive Neuroscience.
The particular notes used in music sound right to our ears because of the way our vocal apparatus makes the sounds used in all human languages, said Dale Purves, the George Barth Geller Professor for Research in Neurobiology.
It's not something one can hear directly, but when speech sounds are examined with a spectrum analyzer, the relationships between the various frequencies a speaker uses to make vowel sounds correspond neatly to the relationships between the notes of the 12-tone chromatic scale of music, Purves said.
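For readers who want to see the arithmetic, those chromatic intervals can be written as frequency ratios. Here is a minimal Python sketch (illustrative only, not the study's analysis code) of the equal-tempered version of the scale, in which each semitone multiplies frequency by the twelfth root of two:

# Illustrative sketch: the 12 equal-tempered semitones of the chromatic
# scale, each a frequency ratio of 2**(k/12) relative to the starting note.
names = ["unison", "minor 2nd", "major 2nd", "minor 3rd", "major 3rd",
         "perfect 4th", "tritone", "perfect 5th", "minor 6th",
         "major 6th", "minor 7th", "major 7th", "octave"]
for k, name in enumerate(names):
    print(f"{name:12s} ratio = {2 ** (k / 12):.4f}")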
The work appeared online May 24 in the Proceedings of the National Academy of Sciences. (Download at http://www.pnas.org/cgi/reprint/0703140104v1)
Purves and co-authors Deborah Ross and Jonathan Choi tested their idea by recording native English and Mandarin Chinese speakers uttering vowel sounds in both single words and a series of short monologues. They then compared the vocal frequency ratios to the numerical ratios that define notes in music.
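The comparison itself is simple in principle: express each pair of vowel frequencies as a ratio and ask which chromatic interval it falls nearest. Below is a hedged sketch of that step; the function name and the 500/1500 Hz example values are illustrative assumptions, not data from the paper.

import math

def nearest_chromatic_interval(ratio):
    """Nearest equal-tempered semitone count for a frequency ratio,
    plus the deviation in cents (100 cents = one semitone)."""
    cents = 1200 * math.log2(ratio)   # distance from unison in cents
    semitones = round(cents / 100)    # nearest chromatic step
    return semitones, cents - 100 * semitones

# Hypothetical example: formants at 500 Hz and 1500 Hz form a 3:1 ratio,
# an octave plus a perfect fifth (19 semitones), within about 2 cents
# of the equal-tempered interval.
print(nearest_chromatic_interval(1500 / 500))   # -> (19, ~1.96)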
Human vocalization begins with the vocal cords in the larynx (the Adam's apple in the neck), which create a series of resonant power peaks in a stream of air coming up from the lungs. These power peaks are then modified in a spectacular variety of ways by the changing shape of the soft palate, tongue, lips and other parts of the vocal tract. Our vocal anatomy is rather like an organ pipe that can be pinched, stretched and widened on the fly, Purves said. English speakers generate about 50 different speech sounds this way.
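The organ-pipe analogy can be pushed a little further: a uniform tube closed at one end (the glottis) and open at the other (the lips) resonates at odd quarter-wavelength multiples, and plugging in a rough adult vocal-tract length lands near the textbook resonances of a neutral vowel. A sketch under those simplifying assumptions (the 17 cm length is a conventional approximation, not a figure from the study):

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 C
TRACT_LENGTH = 0.17     # m; rough adult vocal-tract length (assumption)

def tube_resonances(length_m, n=3):
    # Closed-open tube: resonances at odd multiples of c / (4 * L).
    return [(2 * k - 1) * SPEED_OF_SOUND / (4 * length_m)
            for k in range(1, n + 1)]

print([round(f) for f in tube_resonances(TRACT_LENGTH)])  # ~[504, 1513, 2522]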
Yet despite the wide variation in individual human anatomy, the speech sounds produced by different speakers and languages produce frequency relationships that consistently correspond to the intervals of the chromatic scale, the researchers found.
'"/>
Contact: Karl Leif Bates
karl.bates@duke.edu
919-681-8054
Duke University
24-May-2007