Practical Realities of Phase Interference


Understanding this “accounting system for sound waves” and what it means for your system

It’s a term that everyone involved in audio has heard: phase interference. But what is it? What does it do? Does it really matter? Let’s have a practical look at this important audio/acoustic subject.

Phase is an accounting system for waves. Waves are cyclic disturbances in a medium (such as water or air) and are the means by which energy flows from one place to another. Loudspeakers produce energy waves that propagate through the atmosphere and produce the sensation of sound when they modulate the position of the eardrum in various life forms. No, there is no sound in the forest if a tree falls and nothing is there to hear it!

Phase can be used to track the progression of a wave through its cycle. It provides a point of reference for something that is changing, such as “the phase of the moon” or “Pat’s just going through a phase.”

When two electrical or acoustic waves are mixed together, they will combine to form a resultant wave (the one that we hear). The combination of the two waves (superposition) will produce more sound if they are synchronous — like two people working together on a bicycle-built-for-two. This “in-phase” condition can allow multiple energy sources to work together to form a combined source that is more powerful than the individual ones that make it up.

The individual sources can be just about anything — horses pulling a wagon, turbines driven by a water source, cylinders in an engine, people building pyramids — you get the idea. If everything “pushes and pulls” together, a lot more work can be done. The “pushing” and “pulling” is a repetitive action that happens in cycles. The more cycles occurring in a given span of time, the higher the frequency. If the cyclic event is disturbing the air, the higher the frequency, the higher the pitch.

PUSHING & PULLING
If the sources are not synchronous, they can work against each other. In fact, one source can completely negate another if it is pushing while the other is pulling, or pulling while the other is pushing. If the second source is producing this opposite effect because it is cycling in the opposite direction, then the sources have opposite polarity. Phase cancellation occurs when one source leads or lags the other by one-half cycle as they cycle at the same frequency.
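To make the arithmetic concrete, here is a minimal Python sketch (not from the original article) that sums two equal-level sine tones three ways: synchronous, polarity-reversed, and delayed by half a cycle. The sample rate, tone frequency, and the helper name level_db are illustrative assumptions.

```python
# Sketch: superposition of two equal-level tones under different
# phase/polarity relationships. Values are illustrative assumptions.
import numpy as np

fs = 48_000                              # sample rate in Hz (assumed)
f = 1_000                                # tone frequency in Hz (assumed)
t = np.arange(fs) / fs                   # one second of time samples

a = np.sin(2 * np.pi * f * t)            # source A
b_inphase = np.sin(2 * np.pi * f * t)    # source B, synchronous with A
b_polarity = -b_inphase                  # source B with reversed polarity
b_halfcycle = np.sin(2 * np.pi * f * (t - 1 / (2 * f)))  # B lagging by half a cycle

def level_db(x):
    """RMS level of the combined signal relative to one source alone, in dB."""
    ref = np.sqrt(np.mean(a ** 2))
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) / ref + 1e-12)

print(f"in phase:          {level_db(a + b_inphase):+7.1f} dB")   # about +6 dB
print(f"reversed polarity: {level_db(a + b_polarity):+7.1f} dB")  # deep cancellation
print(f"half-cycle lag:    {level_db(a + b_halfcycle):+7.1f} dB") # likewise cancelled
```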

Figure 1 illustrates various phase/polarity relationships between balls spinning on a string. The position-versus-time of a loudspeaker cone can be described in the same way.

For loudspeakers, motors and batteries, polarity reversals are usually the result of reversing the wires to one of the devices. Polarity reversals and phase shifts are two different animals that can produce the same effect in some conditions.

Batteries produce current that flows in one direction only (DC), so polarity (plus to plus, minus to minus) is important but phase is a non-issue.

Alternating current cycles in both directions (regardless of what the arrows on the speaker wire say), and multiple sources should be both in-polarity (same direction) and in-phase (synchronous in time).

Two loudspeakers can radiate the same frequency, but they cannot occupy the same physical space. Since they are at two distinct locations, it is possible to select a point of observation that is equidistant from the two, or closer to one than the other (Figures 2-4). This is the heart of the phase interactions that we experience in sound systems. We’ll keep it simple at first, but I promise to complicate it later.

Let’s start with two identical transducers that are radiating a single frequency waveform. These waves will be phase coherent (in-phase) at the point of observation that is exactly equidistant from the two.

If the point of observation is kept at the same distance but moved to a different angle, the resultant time-distance offset means that one of the signals will lead (or lag) the other.

It doesn’t matter which — the point is that the sources are out-of-sync (not another “boy band”) with each other by some portion of a cycle. This phase shift can produce a complete cancellation of two equal-level signals if it makes the sources oppose each other.
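As a rough worked example (the numbers are assumptions, not the article’s), the phase offset produced by a path-length difference is simply that difference expressed as a fraction of a wavelength: 360 degrees times the path difference divided by the wavelength, where the wavelength is the speed of sound divided by the frequency.

```python
# Sketch: phase offset created by a path-length difference between two sources.
# The 0.5 m offset and the example frequencies are assumed values.
c = 343.0        # speed of sound in m/s (approximate, room temperature)
delta_d = 0.5    # extra path length to the farther source, in metres

for f in (100, 343, 686, 1000):                 # example frequencies in Hz
    wavelength = c / f
    phase_deg = 360.0 * delta_d / wavelength    # offset in degrees
    print(f"{f:5d} Hz: {delta_d / wavelength:.2f} wavelengths "
          f"({phase_deg % 360:.0f} degrees of offset)")

# At 343 Hz the 0.5 m difference is half a wavelength (180 degrees), so two
# equal-level signals oppose each other and cancel at that point of observation.
```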

We now have two conditions — the phase-coherent case where the waves reinforce each other, and the phase-opposite case where the radiated waves cancel each other. Between these two extremes there are many intermediate states.

In loudspeaker and microphone arrays, the time-distance offsets can be produced by the discrete physical locations of the individual elements, and the various phase states that exist around the devices produce the radiation balloons that describe the direction that energy is flowing from the array.

If their levels are mathematically summed without consideration of their relative phase, we have a power summation that describes the magnitude of the radiated energy but with no information as to where it is going.

If their levels are summed with consideration of the relative phase between the devices, then a complex summation results that also includes information about where the energy is going.
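Here is a small sketch of the distinction, assuming two equal-level sources and a handful of example phase offsets (the values are illustrative, not from the article): the power sum always reports the same level, while the complex sum depends on the relative phase.

```python
# Sketch: power summation vs. complex (phase-aware) summation of two
# equal-level sources. The phase offsets chosen are arbitrary examples.
import cmath
import math

def db(x):
    return 20 * math.log10(max(x, 1e-6))   # floor avoids log(0) at full cancellation

level = 1.0  # each source normalised so that one source alone reads 0 dB

for phase_deg in (0, 90, 120, 180):
    # complex summation: add the two signals as phasors, keeping relative phase
    complex_sum = abs(level + level * cmath.exp(1j * math.radians(phase_deg)))
    # power summation: add energies, discarding all phase information
    power_sum = math.sqrt(level ** 2 + level ** 2)
    print(f"{phase_deg:3d} deg   complex: {db(complex_sum):+7.1f} dB   "
          f"power: {db(power_sum):+5.1f} dB")

# The power sum says +3 dB regardless of phase; the complex sum ranges from
# +6 dB (in phase) down to a deep null (opposite phase).
```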

And unless the wavelength is very large compared to the physical separation of the elements, the presence of an in-phase condition at one point of observation assures the presence of an out-of-phase condition at a different point of observation.

It’s simply not physically possible for an observer to remain the same distance from each device as they circle the array. This can be a blessing or a curse. The effect occurs whether we want it to or not, so audio people must learn to use it to their benefit.

LOOKING AT RADIATION
If all of the sound sources were omnidirectional and could occupy the same physical space, the balloon plots would be a rather boring spherical shape that indicates that the sound energy is flowing out in all directions. This monotonic, isotropic “point” source would not be very useful in a sound system, unless we glue our listeners all over the walls of a spherical room! Not likely, so let’s not consider it further.

The fact that the radiation balloons are NOT spherical is a dramatic illustration of the effects of phase interaction. First, the individual loudspeaker elements use phase interference to create their individual directivity patterns. Second, these “pattern-controlled” loudspeaker elements can be positioned relative to each other in such a way as to use phase interaction to reinforce the sound energy to the listener position and reduce it elsewhere.

For the main loudspeaker array in an auditorium, this means that we want them to push together in the direction where we want the sound level to be maximum (the audience), and oppose each other in the direction where we want it to be minimum (the stage).
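One common way to get this behavior (an assumption about a typical technique, not the article’s specific design) is a gradient-style pair: two elements spaced a quarter wavelength apart at the design frequency, with the element nearer the stage delayed by the spacing’s travel time and polarity-inverted. A rough numeric sketch:

```python
# Sketch: two-element "gradient" pair that reinforces toward the audience and
# cancels toward the stage. Spacing, delay and frequency are assumed values.
import numpy as np

c = 343.0                  # speed of sound, m/s
d = 0.5                    # element spacing in metres (assumed)
f = c / (4 * d)            # design frequency where d is a quarter wavelength (~172 Hz)
k = 2 * np.pi * f / c      # wavenumber

positions = np.array([d, 0.0])    # front element at x = d, rear element at x = 0
gains = np.array([1.0, -1.0])     # rear element polarity-inverted
delays = np.array([0.0, d / c])   # rear element delayed by the spacing's travel time

for theta_deg in (0, 90, 180):
    theta = np.radians(theta_deg)
    # far-field phasor sum over the two elements in the direction theta
    phasors = gains * np.exp(1j * (k * positions * np.cos(theta) - 2 * np.pi * f * delays))
    mag = abs(phasors.sum())
    print(f"{theta_deg:3d} deg (0 = audience, 180 = stage): "
          f"{20 * np.log10(max(mag, 1e-6)):+7.1f} dB re one element")

# Expected: roughly +6 dB toward the audience and a deep null toward the stage.
```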

From its name, phase interference sounds like a bad thing. Not necessarily! Sometimes less is better. Phase interference can be used to do useful things like reduce the off-axis sensitivity of a microphone. This is the principle by which all directional microphones operate. A stage full of omni microphones could be a problem!

In a similar way, phase interference can be used to reduce the “spill” onto the stage from an overhead array. This not only increases acoustic gain, but it also makes the system sound better by reducing the amount of sound energy “recycled” through the system as it re-enters the microphones. Conversely, phase interaction can also cause sound to “pile up” on the stage, which unfortunately is a more common occurrence than cancellation.

FOR YOUR CONSIDERATION
Phase interference exists in all audio systems. Failure to consider the complex summation of multiple devices can lead to poor system performance. Some things to think about:

Sound systems are broad band. This means that many different frequencies are radiated and that it will not be possible for all of the frequencies to be summed or canceled at a specific location by the phase interaction between the electroacoustic elements. At a given point of observation, some frequencies will likely sum coherently and others will not. This can produce uneven sound coverage over an audience area — not the absence of sound.

The local effect (at one listener seat) will be comb filtering (Figure 5). The frequency response is not represented by a straight, horizontal line, but by a series of peaks and nulls. This produces tonal coloration of the sound — you still hear it, but it doesn’t sound like either of the individual elements used stand-alone.
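To get a feel for where the peaks and nulls land, here is a small sketch assuming two equal-level arrivals separated by an example 1 ms time offset (the delay and the spot frequencies are illustrative assumptions):

```python
# Sketch: comb filtering from a signal summed with a delayed copy of itself.
# The 1 ms offset and the spot frequencies are assumed example values.
import numpy as np

tau = 1.0e-3                                                      # arrival-time offset, seconds
freqs = np.array([250.0, 500.0, 750.0, 1000.0, 1500.0, 2000.0])   # Hz

# magnitude of 1 + e^(-j 2 pi f tau) for two equal-level arrivals
mag = np.abs(1 + np.exp(-2j * np.pi * freqs * tau))
for f, m in zip(freqs, mag):
    print(f"{f:6.0f} Hz: {20 * np.log10(max(m, 1e-6)):+7.1f} dB")

# Peaks (+6 dB) fall at multiples of 1/tau (1 kHz, 2 kHz, ...); nulls fall at
# odd multiples of 1/(2*tau) (500 Hz, 1.5 kHz, ...). Shorter delays push the
# first null up in frequency; longer delays pack the comb more tightly.
```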

This may not matter for a distributed ceiling speaker system (in fact, it’s desirable). But for a recording studio control room or a good home theater system it’s a disaster.

Acoustic feedback can occur when an “in phase” condition between two array elements at some frequency exists at a microphone. Note that equalization (usually in the form of notch filters) removes the offending frequency at every angle around the array, even at angles where its level was fine. A better solution is to modify the radiation pattern of the array (or the pickup pattern of the mic) by rearranging the individual elements. This can be done by turning off one of the elements or by moving them in relation to each other.

Poor imaging is another result of phase interference. The human auditory system uses arrival time differences between the two ears to determine where sound is coming from. When the summed sound from two sources reaches both ears at essentially the same time and level (no audible comb filtering), the brain images the sound source directly in front of you. This is how stereo playback systems produce that “phantom” center channel in the middle of your computer monitor.

Phase interference between multiple sound sources will produce comb filters that can confuse this image location system, resulting in an inability of the listener to discern a distinct location for a loudspeaker in an auditorium. Sound is heard, but it’s just “out there” somewhere and the mix turns to mush at the listener position.

So let’s sum up some of the artifacts of phase interference:
1. “Hot” and “cold” spots in the audience area
2. Tonal coloration
3. Poor speech intelligibility
4. Lack of music clarity
5. Poor gain-before-feedback
6. Poor imaging

In short, these are the problems that plague most sound systems, and unfortunately, none of them can be “cured” by signal processing. The old idea of “we’ll fix it with the equalizer” is delusional. It boils down to where you put the loudspeakers and where you put the people. Phase interaction can be a powerful friend and a formidable enemy. It’s up to you to decide which.

This article was reprinted with permission from Live Sound International magazine, July 2003 issue. http://www.livesoundint.com