Understanding phase relationships is an important aspect of getting the best results for your recordings. For this article, I thought I would address something that is crucial to getting great recordings but is often overlooked. Many of you may have heard of phasing, but haven’t fully understood what it means or how to use it to your advantage (and no… it has nothing to do with Star Trek).
Phasing has to do with the positioning of multiple mics on a single source, and the sonic result of the two blended sounds. These two sounds may be recorded onto separate tracks or summed to one to save track space, but the bottom line is that you need to account for potential phase problems every time you do this. While you can double-mic anything, like a guitar amp or a piano, drums are the most commonly multi-miked instrument, so I am going to use a drum set for my main examples. Put on your thinking caps, and let's learn a few things.
You may be surprised that miking a drum set requires planning. You can't just throw mics up and expect to get the best sounds you have ever heard. Great drum tracks come from a combination of proper mic choices, good mic placement, decent preamps, and, of course, good playing. You will find that different mics sound better in different positions, and this is especially important when two mics are involved, such as a snare miked on the top and bottom heads. Physics is an inherent part of music, so let me first explain what phasing actually is. If you haven't already read my article on soundwaves, I recommend you do that before going on with this one, especially if you don't know much about how sound works.
Every instrument creates soundwaves across the frequencies in its range, and each frequency has a specific wavelength. A waveform rises and falls, with a peak and a dip in each complete cycle: the peak occurs at 90 degrees and the dip at 270 degrees, with a complete cycle lasting 360 degrees. Now, when you use multiple microphones to record the same source, those frequencies usually arrive at the two microphones at different times, each at a different point in its cycle. The whole waveform is still accurately represented, but the microphone farther from the source records a delayed waveform. With our example of miking a snare drum, the top mic picks up the direct waveform from the surface of the drumhead, but the bottom mic receives a delayed signal because the soundwaves have to travel through both drumheads, forcing them to react. So you can see how the second mic may pick up the signal at a completely different point in the cycle than the first mic.
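To put some numbers on that, here is a quick Python sketch (the function name and values are my own, purely for illustration) that converts the extra path length to the farther mic into a phase offset at a given frequency:

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second, at roughly room temperature

def phase_offset_degrees(extra_path_m: float, frequency_hz: float) -> float:
    """Phase offset (degrees) introduced by an extra path length
    at a given frequency, wrapped to one full cycle."""
    wavelength = SPEED_OF_SOUND / frequency_hz
    return (extra_path_m / wavelength) * 360.0 % 360.0

# Example: the second mic is 0.15 m (about 6 inches) farther from the source.
# Near 1,143 Hz that extra path is half a wavelength, the worst case.
print(round(phase_offset_degrees(0.15, 1143.0)))  # ≈ 180
```

Notice that the same 6-inch path difference gives a completely different offset at other frequencies, which is why the problem is not a single cancellation but a pattern across the whole spectrum.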
Let me expand on this concept. A musical source creates a whole spectrum of frequencies when played, and because each frequency has a specific wavelength, its waveform will be at a certain point in its cycle when it reaches each microphone and is converted to electrical energy. Microphone 1, closest to the source, might capture a given frequency having completed only half of its cycle, whereas microphone 2 (a few inches farther from the source) picks it up having completed one full cycle. When the two signals are played back together, that frequency is canceled out completely, because one is at its dip while the other is at its peak at the exact same moment. That frequency would then be 180 degrees out of phase between the two mics.
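If you want to see that cancellation directly, here is a tiny sketch in plain Python (no audio libraries, sample rate and frequency chosen arbitrarily) that sums a tone with a copy shifted 180 degrees:

```python
import math

def sample_sine(frequency_hz: float, phase_deg: float, t: float) -> float:
    """One sample of a unit sine wave at time t seconds."""
    return math.sin(2 * math.pi * frequency_hz * t + math.radians(phase_deg))

# Sum a 1 kHz tone with a copy half a cycle (180 degrees) behind,
# at a handful of sample times: the sum is zero every time.
times = [n / 48000 for n in range(16)]
summed = [sample_sine(1000, 0, t) + sample_sine(1000, 180, t) for t in times]
print(all(abs(s) < 1e-9 for s in summed))  # True
```

The two waves carry identical energy, yet their sum is silence: that is complete phase cancellation at that one frequency.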
That example is just one frequency from the sound source, so imagine what is going on with all the other frequencies. It is this “phase cancellation” across multiple frequencies at the same time that causes a comb-filtering effect, where a series of missing frequencies creates a very unnatural and possibly unpleasant sound. Of course, some frequencies are not cancelled out but amplified, because the two identical waveforms arrive at the same point in their cycles and are summed together. In other words, both are at either a dip or a peak at the same time, thereby doubling the power of that frequency. Most frequencies, however, are neither completely amplified nor completely cancelled, but sit at varying positions within their cycles relative to each other. There are varying degrees of phase cancellation and amplification going on here, some more noticeable than others. Just remember that anytime two identical waveforms are not lined up perfectly, there is a certain amount of phase shift going on.
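You can actually predict where those comb-filter notches land from the delay alone. A small sketch (my own helper, not part of any recording software) that lists the fully-cancelled frequencies for a given delay:

```python
def comb_notch_frequencies(delay_s: float, max_hz: float = 20000.0) -> list:
    """Frequencies fully cancelled when a signal is summed with a copy
    delayed by delay_s seconds: the odd multiples of 1 / (2 * delay)."""
    fundamental = 1.0 / (2.0 * delay_s)
    notches = []
    k = 1
    while k * fundamental <= max_hz:
        notches.append(k * fundamental)
        k += 2  # only odd multiples land half a cycle out of phase
    return notches

# A 1 ms delay (about 34 cm of extra path) notches 500 Hz, 1.5 kHz, 2.5 kHz...
print(comb_notch_frequencies(0.001)[:3])  # [500.0, 1500.0, 2500.0]
```

The evenly spaced notches are what give the effect its “comb” name, and the shorter the delay, the higher (and more audible) the first notch sits.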
This now brings us to the 3-1 rule of thumb, the double-microphone placement rule followed by engineers all over the world. It states that the two microphones should be at least three times as far from each other as the first one is from the source. So if one microphone is 3 inches from the head of the snare, the second mic should be at least 9 inches from the first. This is a fairly simple rule, and easy to apply.
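Why does that ratio work? A common explanation (a back-of-the-envelope sketch, not from the rule itself) is the inverse-square law: at triple the distance, the distant mic's pickup of the source drops by roughly 9.5 dB, usually quiet enough that any comb filtering it causes is mild.

```python
import math

def level_drop_db(distance_ratio: float) -> float:
    """Level drop (dB) of a point source when the listening distance
    is multiplied by distance_ratio (inverse-square law)."""
    return 20.0 * math.log10(distance_ratio)

# Tripling the distance drops the level by roughly 9.5 dB, which keeps
# the 'wrong' mic's pickup too quiet to comb-filter badly.
print(round(level_drop_db(3.0), 1))  # 9.5
```

This also hints at when the rule can be relaxed: if the bleed into the second mic is already very quiet for other reasons, the exact spacing matters less.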
Having shown you how phase cancellation occurs, you should be able to see how moving one of the microphones can have a drastic impact on the sound of the summed signals. The best thing to do is set up your mics according to the 3-1 rule, then have the drummer play while you listen to each signal separately. Adjust each one to taste, using whatever compression or EQ you like, and then turn them both on. Repeatedly flip the phase button on either mic preamp and listen for which summed signal sounds better. The out-of-phase signal will sound unnatural, thin, and lacking low frequencies, while the in-phase signal will regain all its low frequencies and sound more like the natural instrument.
Phasing problems can also occur with a single microphone when a reflection off a nearby surface gets back to the same mic. Boundary mics can be placed on nearby reflective walls to prevent these phasing problems, since they lie flat and don't allow rear reflections to reach them. Of course, the time the reflection takes to get back to the mic, and its volume, will determine the amount of phase cancellation. I say volume because phase cancellation doesn't really occur if the two signals are not closely matched in volume. If one is much louder than the other, the softer one will be drowned out and won't make an audible impact on the louder one. Many of these other reflections will have lost their energy by the time they reach the boundary mic.
Outboard processing and EQ can also cause phase problems because both delay the signal even further. I am not saying you shouldn’t use these tools, as they are designed to help you make the signal sound better, but at least be aware of this and possibly compensate by adjusting the position of a mic or adding a short delay to one of the signals (under 10ms should do).
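As a rough illustration of that compensation idea, here is a toy sketch (made-up sample values, and a simplification of what a real DAW's track-delay or nudge feature does) that time-aligns two captures by dropping samples from the start of the later one before summing:

```python
def align_and_sum(near: list, far: list, delay_samples: int) -> list:
    """Crudely time-align two captures of the same source by dropping
    delay_samples from the start of the later (farther) one, then sum."""
    shifted = far[delay_samples:]
    n = min(len(near), len(shifted))
    return [a + b for a, b in zip(near[:n], shifted[:n])]

# Toy example: 'far' is 'near' delayed by 2 samples. Aligning before
# summing doubles the signal instead of smearing it.
near = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0]
far = [0.0, 0.0] + near[:-2]
print(align_and_sum(near, far, 2))  # [0.0, 2.0, 0.0, -2.0]
```

In practice you would find the delay by ear or by eyeballing the two waveforms, then nudge one track by that amount; the principle is the same.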
So these are the basics of understanding phase. Like I mentioned before, I used the example of a drum kit (specifically a snare being double-miked), but phasing applies to any single source that is miked with more than one microphone with the intention of ultimately blending those sounds together. I would like to think that you all have a deeper respect for this phenomenon and will take the time to experiment within your own projects. I guarantee you will be surprised and hopefully better off for what you have learned. Happy Recording!
About Ken Lanyon (Slider)
© 2000, Ken Lanyon, All rights reserved.
(You are allowed to copy and use this essay for your own non-professional use. You are prohibited from distributing copies to others for a fee or for no-charge. You may not publish or quote this essay without obtaining the written permission of the author.)