The Pocket, the Groove, and the Zone are all terms for what happens when a mix engineer gets it right. Each of the musical elements is in balance with the others, and the triad of rhythm, melody, and harmony has formed the perfect relationship. All is right with the world. Whether through force of personality or sheer luck, the Zone occasionally smiles on us, albeit not often enough. But is there a way to analyze and then codify these elements into a repeatable process? In other words, is there a way to make the art more of a science? Maybe.

The first step toward a better mix is to confirm the stage inputs are optimized. Is the guitar amp mic just off the grille, pointed upward at a forty-five-degree angle to capture good guitar tone while rejecting floor bounce from nearby instruments? Are the null points on the vocal mics pointed at the horns of the floor monitors to maximize gain-before-feedback? Are the drum mics positioned to minimize interference with each other? Is the direct box on the keys capable of handling the saturated pad signals the keyboard generates? The best mixes always start at the front of the audio chain.

Second, check the gain staging for each input. Gain is not volume; it is more akin to leveling the signals for the console. Just as a jetway in an airport terminal raises and lowers to match different aircraft to the terminal floor height, so the gain control matches every input to the level the console wants to see. Gain should be set so that the maximum input extends to just below the clipping point. Typical levels will then reside around the zero mark on the meter with the fader set to zero. This method gives the console the greatest range of usable signal above the noise floor and below clipping.
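The arithmetic behind this is simple enough to sketch. The clip point, headroom margin, and nominal meter reference below are illustrative assumptions, not any particular console's specification; check your desk's documentation for its actual internal headroom.

```python
CLIP_POINT_DBFS = 0.0   # digital full scale: the clipping point
HEADROOM_DB = 6.0       # keep the loudest peaks this far below clip (assumed margin)

def gain_trim_db(measured_peak_dbfs: float) -> float:
    """Gain change that places the loudest measured peak just below clip."""
    target_peak = CLIP_POINT_DBFS - HEADROOM_DB
    return target_peak - measured_peak_dbfs

# Example: a vocal whose loudest peak only reaches -20 dBFS needs +14 dB of gain.
print(gain_trim_db(-20.0))  # 14.0
```

With the trim set this way, normal material sits around the meter's zero reference and the fader has room to work in both directions.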

Third, use board EQ for tonal shaping and to fit the pieces of the mix puzzle together. When a “gaggle” of vocalists presents a wall of sound, break it up by limiting the range each section occupies. For instance, sopranos have their lowest note at A220, so high-pass everything below 220Hz on their channels. Use a judicious cut around 400Hz for male vocals and about 530Hz for females to clear up the “muddiness” that occurs when multiple voices sing together.
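Those cut points come from where the voices actually live. A quick equal-temperament helper (A4 = 440 Hz, the standard reference) shows why a 220Hz high-pass is safe for the sopranos: their lowest note, A3, sits at exactly 220 Hz, so nothing below that frequency is part of their sung fundamental.

```python
A4_HZ = 440.0  # standard concert pitch reference

def note_freq(midi_note: int) -> float:
    """Frequency of a MIDI note number in twelve-tone equal temperament."""
    return A4_HZ * 2 ** ((midi_note - 69) / 12)

print(round(note_freq(57), 1))  # A3 -> 220.0 Hz
print(round(note_freq(69), 1))  # A4 -> 440.0 Hz
```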

Next, look to dynamics control to serve as a third hand. Since digital consoles have compression, limiting, and gating on every channel, it is tempting to “lock down” the mix across the board; unfortunately, this action takes all the energy out of the mix. A better approach is to use as little compression as possible and only touch signals when they go out of bounds. For vocals or lead guitar, set the threshold high enough that only peaks are affected, the attack around 25ms, the release around half a second, and the ratio at 4:1. Avoid gates on any signals other than toms and noisy instrument amps.
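As a sketch of what those settings do, here is a toy peak compressor implementing the control law described above (high threshold, 4:1 ratio, ~25 ms attack, ~500 ms release). Sample-by-sample Python is far too slow for real use; the threshold and sample rate are illustrative assumptions.

```python
import math

def compress(samples, sample_rate=48_000, threshold_db=-10.0, ratio=4.0,
             attack_ms=25.0, release_ms=500.0):
    """Apply simple peak compression; only levels above threshold are touched."""
    atk = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env = 0.0
    out = []
    for x in samples:
        level = abs(x)
        coeff = atk if level > env else rel   # fast-ish rise, slow fall
        env = coeff * env + (1.0 - coeff) * level
        env_db = 20.0 * math.log10(max(env, 1e-12))
        over = env_db - threshold_db
        # Below threshold: unity gain. Above: reduce the overshoot by 1 - 1/ratio.
        gain_db = -over * (1.0 - 1.0 / ratio) if over > 0 else 0.0
        out.append(x * 10 ** (gain_db / 20.0))
    return out
```

A steady full-scale input (0 dBFS) overshoots the -10 dB threshold by 10 dB; at 4:1 the compressor removes 7.5 dB of that, so the output settles near 0.42 of full scale, while quiet material passes untouched.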

Finally, mentally walk through the mix, confirming that each component seen on stage translates to a demonstrable part of the mix. If the keys can be seen playing but not heard, don’t turn up the keys; turn down whatever is in the way of the keys being heard. Let the hi-hat reside in a narrow band above the acoustic guitar, not in the same region, where there would be conflict. Scoop out a place for the bass guitar to shine between the thud (65Hz) and snap (3kHz) of the kick drum by reducing the response of channel one from 125Hz to 1kHz, reserving that area for the bass guitar. Naturally, the bass will extend below and above that range, but it is highlighted there.
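The carving idea can be sketched as giving each instrument its own spectral slot and flagging collisions. The band assignments below are illustrative, taken from the figures in the text (kick thud around 65Hz, kick snap around 3kHz, bass highlighted from 125Hz to 1kHz).

```python
# Illustrative spectral slots, in Hz, per the figures in the text.
slots = {
    "kick (thud)": (50, 100),
    "bass guitar": (125, 1_000),
    "kick (snap)": (2_500, 3_500),
}

def overlaps(a, b):
    """True when two (low_hz, high_hz) bands share any frequencies."""
    return a[0] < b[1] and b[0] < a[1]

clashes = [(m, n) for m in slots for n in slots
           if m < n and overlaps(slots[m], slots[n])]
print(clashes)  # [] -- the carved slots leave the bass its own space
```

An empty clash list means each element has room to be heard; any pair that appears is a candidate for the kind of cut described above.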

The perfect mix will always be beyond us; but, like light speed, we can get very close.
