Pitch and harmony detection in the auditory midbrain
It was not recognized until 1998 that the unique neural map of acoustic frequency in the auditory midbrain (inferior colliculus) is a functional adaptation for pitch extraction from complex tones.
The vocalization sounds of many mammalian species and the vowels of human speech consist of a series of harmonics (e.g. 900 Hz, 1200 Hz, and 1500 Hz). The auditory brain unifies these harmonics into a single percept with one pitch (equivalent to 300 Hz in this example). This spectral synthesis supports the localization and identification of individual sound sources in natural, noisy environments.
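The arithmetic behind this percept can be sketched numerically: for harmonics at integer frequencies, the perceived pitch corresponds to their greatest common divisor. A minimal sketch using the example values from the text (the function name is my own, for illustration):

```python
from functools import reduce
from math import gcd

def pitch_from_harmonics(harmonics_hz):
    """Estimate the perceived pitch of a harmonic complex as the
    greatest common divisor of its integer component frequencies."""
    return reduce(gcd, harmonics_hz)

# The example from the text: harmonics at 900, 1200, and 1500 Hz
print(pitch_from_harmonics([900, 1200, 1500]))  # 300 (Hz)
```

Real neural pitch extraction is of course temporal and approximate rather than exact integer arithmetic; this only illustrates the relation between the harmonic series and its fundamental.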
The auditory midbrain is adapted to this task by its anatomy of stacked neuronal layers. Each layer processes sound signals from a specific bandwidth of the acoustical spectrum, and the separation into bands is optimal for the neuronal combination of harmonics that is needed for pitch extraction.
Because the auditory midbrain is hardwired for the processing of low-order harmonics in vocalization sounds, it is also hardwired for the processing of harmonic spectral components in music. The mechanism that provides pitch detection in speech does the same job in music. Our brain prefers harmonic tones and harmonic tone combinations because it can extract more information from them.
A testable account of a plausible mechanism underlying the major-minor perception is outlined here.
The bells of Zeng from 433 B.C. as an early example of the preference for the frequency ratios 5:4 and 6:5: Tuning system.
The gamelan pelog scale of Central Java as an example of a non-harmonic musical scale: Interval distribution.
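The two ratios named above can be made concrete by converting them to cents, the standard logarithmic unit for comparing musical intervals across tuning systems. A small sketch assuming only the standard cents formula (1200 times the base-2 logarithm of the ratio); the specific pelog interval measurements are not reproduced here:

```python
import math

def ratio_to_cents(ratio):
    """Convert a frequency ratio to cents (1200 cents = one octave)."""
    return 1200 * math.log2(ratio)

# The two ratios preferred in the tuning of the bells of Zeng:
print(round(ratio_to_cents(5 / 4), 1))  # 386.3 (just major third)
print(round(ratio_to_cents(6 / 5), 1))  # 315.6 (just minor third)
```

Expressed this way, the just major and minor thirds differ clearly from the equal-tempered thirds of 400 and 300 cents, which is what makes measured bell or gamelan tunings comparable to harmonic ratios at all.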