Ianni Mitropoulos, a 17-year-old male from Melbourne, asks on January 1, 2007: If a sound is played to a person over a period of time, is there a way to work out (or to estimate) the perceived loudness of it at any point in time (assuming we know the frequency response of the person's ear)? For example, if there were two pure tones of similar frequency played together, although the amplitudes of each of them might be constant, a person would hear the loudness fluctuating periodically.
Several of the issues raised by your questioner can be answered with material from my Handbook for Acoustic Ecology CD-ROM.
For instance, take the example of a mistuned octave (or other mistuned harmonic): the resulting fluctuations are called secondary beats because they involve no amplitude modulation, but rather a periodic phase modulation. It is thought that this effect contributes to the richness of timbre with multiple instruments. The effect is illustrated on: www.sfu.ca/sonic-studio/handbook/Beats.html
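The distinction between the two kinds of beats can be made concrete with a short numerical sketch (Python/NumPy; the specific frequencies, window size, and the "RMS swing" measure are illustrative choices, not from the original answer). Two tones a few Hz apart produce ordinary amplitude beats, so a short-time RMS envelope rises and falls; a tone plus a slightly mistuned octave keeps a nearly constant level while the waveform shape drifts:

```python
import numpy as np

SR = 8000  # sample rate in Hz; modest, since we only analyse the envelope
t = np.arange(0, 1.0, 1.0 / SR)

def rms_envelope(x, win=400):
    """Short-time RMS in non-overlapping windows of `win` samples (50 ms here)."""
    n = len(x) // win
    return np.sqrt(np.mean(x[: n * win].reshape(n, win) ** 2, axis=1))

def swing(env):
    """Relative depth of envelope fluctuation: (max - min) / mean."""
    return (env.max() - env.min()) / env.mean()

# Primary beats: two tones 4 Hz apart -> the sum's amplitude
# rises and falls at the 4 Hz difference frequency.
primary = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 444 * t)

# Secondary beats: 220 Hz plus a mistuned octave (444 Hz instead of 440).
# The waveform shape changes periodically (phase modulation),
# but the short-time level stays essentially constant.
secondary = np.sin(2 * np.pi * 220 * t) + np.sin(2 * np.pi * 444 * t)

print(f"primary beats   RMS swing: {swing(rms_envelope(primary)):.2f}")   # large
print(f"secondary beats RMS swing: {swing(rms_envelope(secondary)):.2f}") # near zero
```

The near-zero swing for the mistuned octave is the point of the term "secondary" beats: the ear still hears a periodic fluctuation even though no amplitude modulation is present.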
There's a subtler and more important issue raised in the rest of the question - the relation of beats to dissonance. In the past, consonance and dissonance were thought to be solely the result of beats between partials. The current explanation involves the concept of the critical bandwidth (cb), which describes the resolving power for simultaneous frequencies along the basilar membrane, with maximum dissonance occurring at a quarter cb and maximum consonance at a full cb. The width of the cb itself changes with frequency range, but in the mid range it is a little under a musical minor third. So, as two tones pull apart in frequency there's a progression from beats to roughness to smooth consonance. Consult the diagrams and examples on: www.sfu.ca/sonic-studio/handbook/Critical_Band.html
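For readers who want numbers, the critical bandwidth can be approximated with the published Zwicker & Terhardt (1980) formula; the sketch below (my addition, not part of the original answer) uses it to locate the quarter-cb "maximum roughness" point and the full-cb return to smoothness described above:

```python
import math

def critical_bandwidth(f_hz):
    """Zwicker & Terhardt (1980) approximation of critical
    bandwidth (Hz) around centre frequency f_hz:
    cb = 25 + 75 * (1 + 1.4 * (f/1000)^2)^0.69"""
    return 25.0 + 75.0 * (1.0 + 1.4 * (f_hz / 1000.0) ** 2) ** 0.69

for f in (250, 500, 1000, 2000, 4000):
    cb = critical_bandwidth(f)
    rough = f + cb / 4    # roughly maximum dissonance (quarter cb)
    smooth = f + cb       # roughly where smooth consonance returns (full cb)
    semitones = 12 * math.log2(smooth / f)
    print(f"{f:5d} Hz: cb = {cb:6.1f} Hz, max roughness near {rough:6.1f} Hz, "
          f"consonant beyond ~{semitones:.1f} semitones")
```

At 1000 Hz the formula gives a cb of about 160 Hz, i.e. roughly 2.6 semitones - consistent with "a little under a musical minor third" in the mid range.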
Using this data, experimenters have built up a theory of musical consonance for various intervals and even with as few as 6 harmonics have reproduced the relative consonance/dissonance ratings of the intervals used in Western music. See: www.sfu.ca/sonic-studio/handbook/Consonance.html
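A well-known computational version of this theory is Sethares' parameterisation of the Plomp-Levelt roughness curve (constants from his book "Tuning, Timbre, Spectrum, Scale"). The sketch below sums pairwise roughness over two 6-harmonic tones; the interval ratios, base frequency, and 0.88^k amplitude rolloff are illustrative choices on my part:

```python
import itertools
import math

def pair_dissonance(f1, f2, a1=1.0, a2=1.0):
    """Plomp-Levelt roughness of two partials (Sethares' constants)."""
    f_lo, f_hi = min(f1, f2), max(f1, f2)
    s = 0.24 / (0.0207 * f_lo + 18.96)  # scales the curve to the critical band
    x = s * (f_hi - f_lo)
    return a1 * a2 * (math.exp(-3.5 * x) - math.exp(-5.75 * x))

def interval_dissonance(ratio, f0=261.6, n_harmonics=6):
    """Summed roughness of two harmonic tones a given frequency ratio apart."""
    partials = [(k * f0, 0.88 ** (k - 1)) for k in range(1, n_harmonics + 1)]
    partials += [(k * f0 * ratio, 0.88 ** (k - 1)) for k in range(1, n_harmonics + 1)]
    return sum(pair_dissonance(fa, fb, aa, ab)
               for (fa, aa), (fb, ab) in itertools.combinations(partials, 2))

intervals = {"unison": 1.0, "minor 2nd": 16 / 15, "major 3rd": 5 / 4,
             "fourth": 4 / 3, "tritone": 45 / 32, "fifth": 3 / 2, "octave": 2.0}
for name, ratio in intervals.items():
    print(f"{name:10s} {interval_dissonance(ratio):.3f}")
```

Even this simple sum recovers the familiar ordering: the fifth and octave come out smoother than the tritone, and the minor second roughest of all, matching the consonance ratings of Western intervals mentioned above.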
I see two main uses of this psychoacoustic data for music composition: (1) tuning is related to timbre. Note that the example quoted in the previous paragraph applied to a timbre made up of harmonics as in Western music. Therefore, music made with more inharmonic timbres, e.g. gamelan, would naturally result in different tuning system(s). With computer music techniques, precise relationships between pitches can be reflected in their spectra, elaborating the continuum between pitch and spectrum. (2) timbral design benefits from micro-level frequency control, including mistuning, phasing, spectral shaping and so on. The theory sketched above is just a start, and where it leaves off, the sensitive ear has to take over - as it always has!