Understanding Natural Sound
Jan 13, 2012 2:32 PM, By Bob McCarthy
Tips for achieving the best acoustics.
One of the most enduring goals in sound system performance is to create a “natural” sound. In the case of sound reinforcement systems, this can be achieved with one simple step: mute the speakers. Within milliseconds we will be reminded that we don’t actually want natural sound. The voice coming from the stage sounds distant and hard to understand. We are suddenly aware that everyone around us is coughing. Without the support of a sound system, the human speakers must talk in an unnatural way in hopes of projecting to the rear. There is nothing natural about actors whispering loudly, but it beats not hearing the lines.
The goal of a sound reinforcement or cinema playback system is very rarely to sound natural. It is, instead, to sound bigger, closer, and easier to understand than natural. In short, our business is creating magic sound, not natural sound. Our challenge is to maintain the illusion of natural sound so people are not distracted by our manipulations. To succeed, we must not let anything slip that alerts the listener to the man behind the curtain. The ear’s ability to detect even very slight changes in the audio stream is wired into our brains and those of every motile species on Earth. Ask a deer hunter how quickly prey can locate you the moment you make a sound.
This article will discuss some of the key issues involved in the creation of magic “natural” sound and how to avoid having our tricks discovered.
Transmitting Natural Sound
Let’s first spend a moment examining the behavior of natural sound to better know what we are working toward. Natural sound emanates from a source whose placement is definable, such as a voice or a musical instrument. The sound then propagates from this source in a very predictable way that affects its loudness, frequency response, and timing. The point of origin is important because our visual link to the source provides the listener with expectations of what is natural for the given distance. When we see someone speaking at 30 meters, we don’t expect it to sound as if they are right next to us. We know this difference intuitively, but we may not have thought through the acoustical physics in play here.
The inverse-square law, which states that the sound level will drop 6dB for every doubling of distance, governs the level component. This law is broken more often than a highway speed limit. In fact, unless you spend a lot of time skydiving or in anechoic chambers, you have probably never heard this law fully adhered to. Why? Because any reflected energy is added to the direct sound, like a sonic recycling program, and reused. This causes the level reduction to be less than the 6dB called for by law, and not evenly over frequency. The reflections favor the low-frequency range over the highs because at low frequencies the sources are more omnidirectional and the surfaces are more reflective. The opposite is true for the HF range. Even the direct sound has reductions greater than the inverse-square law predicts, because air is a lossy medium in the VHF range. As direct and/or reflected path lengths get longer, there is more and more VHF-range loss compared to all other ranges.
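The arithmetic behind the 6dB figure is simple enough to sketch. This is a free-field (anechoic) calculation only, so per the paragraph above, real rooms will show less loss than this, especially at low frequencies:

```python
import math

def inverse_square_loss_db(d_ref: float, d: float) -> float:
    """Level drop (dB) of the direct sound going from distance d_ref
    to distance d, assuming free-field propagation from a point source.
    20*log10 of the distance ratio: each doubling costs ~6.02 dB."""
    return 20.0 * math.log10(d / d_ref)

print(f"{inverse_square_loss_db(1, 2):.2f} dB")   # one doubling: ~6.02 dB
print(f"{inverse_square_loss_db(1, 32):.2f} dB")  # five doublings: ~30.10 dB
```

Reflections and air absorption bend the curve in opposite directions, which is why the measured rolloff in a real venue never matches this textbook number across the whole spectrum.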
The low-end rise will even occur outdoors, where at the very least we have a reflecting ground plane. The tilt becomes more dramatic for indoor spaces. The longer the distance and the bigger and more reverberant the space, the more we can expect to hear the frequency response tilt: lows boosted and highs rolled off compared to the midrange. Does this mean a large-volume room such as an opera house will make a voice on stage sound as if the highs are rolled off and the bass boosted in the back row? Absolutely yes, when compared with standing a few feet in front of the singer. Both are natural sound.
Receiving Natural Sound
Our aural localization system can identify the origin of a sound source in both the horizontal and vertical planes. The mechanisms, however, are totally different.

The horizontal system is a dual-channel comparator that monitors relative level and arrival time between our two ears (hence the term binaural hearing). For example, a single sound source ahead and to our left will be both louder at and arrive earlier to our left ear. The two horizontal mechanisms, level and time, validate each other in the brain, and the source is conclusively localized. With a single natural source, it is difficult to conjure up a scenario where these two mechanisms are in conflict. At the very least this would require a reflection, which is, in effect, a second source. As we will see shortly, we can create localization conflicts with multiple speakers.

Our vertical-plane localization is two independent single channels. Each ear maps out the vertical plane separately by the learned reflection signature of our outer ear structure, the pinna. Sound coming from above us is reflected differently into the ear canal than sound from below, and so on. This mechanism is far less sensitive than the horizontal system. When multiple arrivals occur, such as direct sound and a reflection, the conflict is resolved primarily by loudness, with arrival time being a much smaller factor than in the horizontal plane.
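To put a number on the arrival-time cues the horizontal comparator works with, here is a rough sketch. The simple path-difference model and the assumed ear spacing are illustrative values, not figures from this article:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature
EAR_SPACING = 0.17      # m, a typical distance between the ears (assumed)

def interaural_time_difference(azimuth_deg: float) -> float:
    """Approximate arrival-time difference (seconds) between the two ears
    for a distant source at the given horizontal angle (0 deg = straight
    ahead). Simple straight-line path-difference model; a real head adds
    diffraction, so measured values run somewhat higher off to the sides."""
    return EAR_SPACING * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

# A source 30 degrees off to one side arrives ~0.25 ms earlier at the near ear:
print(f"{interaural_time_difference(30) * 1000:.2f} ms")
```

Differences on the order of a fraction of a millisecond are enough for the brain to localize a source, which is exactly why the horizontal system is so hard to fool and why multiple loudspeakers can put the level and time cues into conflict.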