Natural selection as we speak
The forces of variation and selection that shape human language have become the subject of extensive research. Documentation of sounds and sound patterns, and of their evolution over the past 7000-8000 years, allows linguists to quantify the important role of human perception, articulation and imperfect learning as language is passed from one generation to the next. At this year's AAAS conference in Washington, DC, Juliette Blevins, senior scientist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, presents a new approach to the problem of how genetically unrelated languages across the world often show similar sound patterns, without invoking innate mechanisms specific to grammar. Language families as distant as Native American, Australian Aboriginal, Austronesian and Indo-European show similar patterns of vowel and consonant inventory and distribution, but exceptions to sound patterns regarded as universal show that these similarities are best viewed as the result of convergent evolution.
A new model of sound change shows that evolutionary principles can account for striking phonetic similarities across unrelated languages, as well as for the rarity of certain sounds. German and Russian are not the only languages in the world where sounds like b, d, and g lose their characteristic vocal fold 'buzz' at the end of a word. Dozens of unrelated languages, from Afar on the sands of Ethiopia to Ingush in the northern Caucasus, show similar sound patterns.
Why are these patterns found in unrelated languages? Why do languages favour silent p t k sounds over noisy b d g sounds at the end of a word? And why are these sounds common, while clicks have arisen only once in human history? Blevins provides answers to these and many other phonological puzzles in a symposium on Evolutionary Phonology at the 2005 AAAS Annual Meeting in Washington, DC.
Building on the work of a 16th century Chinese scholar, the famous 19th century Junggrammatiker (Neogrammarians) of Leipzig, and Darwin, of course, Blevins shows that parallel evolution is the primary source of shared sound patterns. As language is naturally transmitted from one generation to the next, human perception and articulation make certain kinds of sound change (like the shift of final b d g to p t k) more frequent than others. At the same time, people are very unlikely to mispronounce or mishear a simple consonant as a click sound, providing few opportunities for clicks to evolve naturally.
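The logic of this argument can be illustrated with a toy iterated-learning simulation. This sketch is not from Blevins's work, and all of the mishearing rates in it are invented for illustration: final devoicing (b d g heard as p t k) is made far more likely than the reverse change, and a plain consonant almost never turns into a click. Over many generations, word-final voiceless stops come to dominate while clicks remain vanishingly rare.

```python
import random

# Illustrative, made-up rates standing in for perception/articulation biases.
P_DEVOICE = 0.05   # final b/d/g misheard as p/t/k (phonetically natural, common)
P_VOICE = 0.005    # final p/t/k misheard as b/d/g (much rarer)
P_CLICK = 1e-6     # plain consonant misheard as a click (vanishingly rare)

VOICED = {"b": "p", "d": "t", "g": "k"}
VOICELESS = {v: k for k, v in VOICED.items()}

def transmit(finals, rng):
    """One generation of imperfect learning over word-final consonants."""
    out = []
    for c in finals:
        if c in VOICED and rng.random() < P_DEVOICE:
            c = VOICED[c]          # final devoicing: b d g -> p t k
        elif c in VOICELESS and rng.random() < P_VOICE:
            c = VOICELESS[c]       # the reverse change, much less likely
        if c != "!" and rng.random() < P_CLICK:
            c = "!"                # a click almost never arises this way
        out.append(c)
    return out

rng = random.Random(42)
# A toy lexicon: 600 word-final consonants, half voiced, half voiceless.
lexicon = ["b", "d", "g", "p", "t", "k"] * 100
for generation in range(200):
    lexicon = transmit(lexicon, rng)

voiced = sum(c in VOICED for c in lexicon)
voiceless = sum(c in VOICELESS for c in lexicon)
clicks = lexicon.count("!")
print(voiced, voiceless, clicks)
```

Because the change from voiced to voiceless is ten times more likely than the reverse, the population drifts toward word-final p t k regardless of its starting state, while clicks essentially never gain a foothold: the pattern emerges from asymmetric transmission alone, with no innate rule required.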
The implications of this work go far beyond our understanding of vowels, consonants, buzzes and clicks. By showing how universal tendencies in sound structure emerge from phonetically motivated sound change, Evolutionary Phonology undermines a central tenet of modern Chomskyan linguistics: that Universal Grammar, an innate human cognitive capacity, plays a dominant role in shaping grammars. Blevins argues that humans learn sound patterns through exposure to hundreds of thousands of examples in the first years of life. Where universal tendencies exist, they are emergent properties of language as a self-organizing system.
Other participants in the AAAS symposium on Evolutionary Phonology are Prof. Terrence Deacon (University of California, Berkeley), Prof. Janet Pierrehumbert (Northwestern University), and Prof. Andrew Wedel (University of Arizona, Tucson).
Source: Eurekalert & others
Last reviewed by John M. Grohol, Psy.D., on 21 Feb 2009
Published on PsychCentral.com. All rights reserved.