Equalization - Sound Mixing.

May 25, 2004 - by Franck ERNOULD
Translated by Dominique Zbiegiel.

In this second part of our article "The Equalization", we will talk about mixing: how our ears react to combinations of sounds, the various representations and spectrum analysis, and whether it is better to add or remove frequencies... What a task! It is well known that our ear is far from an ideal sensor. This article gives us the opportunity to lift the veil on a curious phenomenon called the "masking effect". The laboratory example consists of mixing a number of signals with rather close frequencies. As soon as one of these signals "exceeds" the others, it masks them more or less: in other words, the other sounds are perceived as much weaker than they actually are, or even not at all. This phenomenon is of course exploited by all the bit-rate reduction (lossy) audio systems. In practice, it is worth remembering that our ear has nothing in common with a linear measuring device such as a spectrum analyzer...
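To hear the phenomenon for yourself, here is a minimal sketch; Python with numpy and scipy, and all the frequencies and levels, are assumptions of convenience, not values from the article. It writes a WAV file in which an 1100 Hz tone, 24 dB below its 1000 Hz neighbour, is plainly visible on any analyzer yet barely audible.

import numpy as np
from scipy.io import wavfile

# Hypothetical masking demo: a strong 1000 Hz "masker" next to
# an 1100 Hz tone 24 dB weaker.
fs = 44100
t = np.arange(2 * fs) / fs
masker = np.sin(2 * np.pi * 1000 * t)
masked = 10 ** (-24 / 20) * np.sin(2 * np.pi * 1100 * t)  # -24 dB
mix = 0.5 * (masker + masked)
wavfile.write("masking_demo.wav", fs, (mix * 32767).astype(np.int16))
# Play the weak tone alone, then the mix: alone it is clearly
# audible; in the mix it nearly disappears behind the masker.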
Spectrum analysis
    A spectrum analyzer is a display unit with a bank of filters centered (in a "third-octave" model) on 25, 31.5, 40, 50, 63, 80, 100, 125, 160, 200... Hz, up to 20 kHz. It measures the level in each of these frequency bands, then displays them as luminous columns.
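    The same idea is easy to sketch in software. The following minimal example (Python with numpy, an assumption of convenience) groups FFT bins into third-octave bands whose centers are spaced by a factor of 2^(1/3) around the 1 kHz reference, rather than using analog filter banks:

import numpy as np

def third_octave_levels(x, fs):
    """Rough third-octave band levels (dB) of a mono signal x,
    computed from an FFT instead of a bank of analog filters."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2                 # power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    # Centers spaced by 2**(1/3) around 1 kHz: ~25 Hz .. 20 kHz
    centers = 1000.0 * 2.0 ** (np.arange(-16, 14) / 3.0)
    levels = []
    for fc in centers:
        lo, hi = fc * 2 ** (-1 / 6), fc * 2 ** (1 / 6)     # band edges
        band = spectrum[(freqs >= lo) & (freqs < hi)]
        power = band.sum() if band.size else 1e-12
        levels.append(10 * np.log10(power))
    return centers, np.array(levels)

# Sanity check: a 100 Hz tone should peak in the 100 Hz band.
fs = 44100
t = np.arange(fs) / fs
centers, levels = third_octave_levels(np.sin(2 * np.pi * 100 * t), fs)
print(centers[np.argmax(levels)])   # ~99.2 Hz, the nominal "100 Hz" band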

    This is certainly no "miracle device" for making killer mixes; however, it is an invaluable help. MCI (a mixing desk manufacturer) integrated a spectrum analyzer into its JH-600 models: the first twenty plasma level meters could be driven by a dedicated circuit, which enabled them to display the spectrum of the master signal.

    Why do we use this measuring device? For one thing, its way of depicting a sound is universally employed. For another, a glance at this graphic representation is enough to detect a dip in the midrange or an excess in the low registers - in short, the defects that can affect the mixing process.

    Here's a very simplified representation of the mixing process: at the beginning, there are only the bass drum, snare and cymbals, and we have no problem hearing everything. When the bass and toms are added, conflicts begin to appear between the bass drum and the lower register of the bass, as well as between the toms and the high register of the bass. The more elements are added, the more they tend to mask each other. Equalization will thus be used to "carve out" a frequency range for each sound, so that it can emerge and be perceived by the ear.
    When two instruments play the same part (thus, the same notes), it is necessary to work on the harmonics to emphasize the characteristic partials of, say, an electric piano compared to a guitar. It is sometimes necessary to carry out corrections that appear odd in isolation, but work very well in a given situation.
    For this reason, sounds can appear ridiculously small when heard in solo, but fit perfectly into their place within the entire mix. A good example of this is the female voices on Pink Floyd's "Dark Side of the Moon", often recorded with two microphones, one of them out of phase, and mixed in mono, or the saxophone solo in "Money". One can imagine the result of such techniques when heard in solo; yet alongside the sixteen other tracks, they work admirably.

    Conversely, imagine the mess that results from mixing a half-dozen sounds by correcting each one in solo mode, adding treble here and lows there... think in terms of the whole mix!


    The direct take
      The extensive use of such corrections is typical of multitrack music recordings. In classical music, or in the good old days of direct takes of acoustic instruments (jazz or movie soundtrack music in the Fifties), the engineer had only two tracks at his disposal, and relatively few microphones: the balance thus had to be achieved acoustically, before the recording, and there was literally no mixing.

      The microphones were placed far enough from the orchestra, and, in big bands, the instrumentalists moved around (barefoot, to avoid any noise) so that their solos stood "in front of" the orchestra, then went back to their seat in the row. After all, the tone of acoustic instruments is the result of hundreds of years of evolution: their makers judge them at full scale, from a distance, in real rooms, together with other instruments.

      A classical orchestra of 120 musicians, installed in a good acoustic environment, can thus be captured with two microphones placed five or ten meters away, with nothing being drowned out. Acoustic balance is taken into account by the composer at the moment of orchestration, by the conductor as he gives his indications, and even in the seating arrangement of the musicians. It is no accident that brass and percussion are all at the back, whereas the less powerful strings are at the front!

      More generally speaking, this situation is familiar when mixing recorded tracks of instruments whose tones were never designed to sound together. Let's not even mention synthesizers! The arrangement is obviously an essential factor: mixing a badly arranged piece is a real drag, as any sound engineer will tell you. It is thus common to make drastic cuts, or even to remove this or that instrument entirely at this or that moment... you can't always keep everything!

      It is useless to expect a Fender piano to be heard simultaneously with a Fender guitar playing a biting tone, or two guitars with similar sounds playing together - unless you position them one on the left, one on the right ("Money", on "Dark Side of the Moon")... too bad for those who still listen in mono.

      On the other hand, if the two instruments alternate, the effect can be marvelous. Listen to the beginning of "Breathe" (around 1'20)...


      "Positive EQ"
        In a lot of home studio work, an equalizer is only used to add frequencies. Few people think of exploring the sector marked "-", which is a mistake! When confronted with the problem of making a voice stand out over a layer of synths, your first reflexes are to push up the level of the voice, compress it, or add high mids.

        Why not instead remove some substance from the synth layer, precisely in the area where it conflicts with the voice? After all, before they were built into mixing desks, equalizers were passive: they could only remove signal, and were considered "cleaners" - an approach recently taken up again by Manley in its VoiceBox. The emergence of "positive EQ", i.e. adding gain at a particular frequency using a band-pass filter, was an important turning point in sound recording, and even more so in mixing. This technique helped make the glory days of Disco music in the Seventies.
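        To make the subtractive approach concrete, here is a minimal sketch of a peaking ("bell") filter applied as a cut, following the widely used RBJ "Audio EQ Cookbook" formulas. Python with numpy and scipy, and all the frequencies and settings, are illustrative assumptions, not values from the article.

import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0, gain_db, q):
    """Apply a peaking (bell) EQ at f0 Hz; negative gain_db cuts.
    Coefficients follow the RBJ 'Audio EQ Cookbook'."""
    a_gain = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_gain, -2 * np.cos(w0), 1 - alpha * a_gain])
    a = np.array([1 + alpha / a_gain, -2 * np.cos(w0), 1 - alpha / a_gain])
    return lfilter(b / a[0], a / a[0], x)

# Hypothetical example: dip the synth pad 4 dB around 3 kHz, where
# it fights the voice, instead of boosting the voice there.
fs = 44100
t = np.arange(fs) / fs
pad = 0.3 * np.sin(2 * np.pi * 3000 * t)      # stand-in for a synth layer
pad_cut = peaking_eq(pad, fs, f0=3000, gain_db=-4.0, q=1.4)
print(np.abs(pad_cut[2000:]).max() / np.abs(pad[2000:]).max())  # ~0.63, i.e. -4 dB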

        It should be pointed out that the problem was not a simple one: a particularly rich rhythm section (drums, percussion, bass, acoustic AND electric piano, often two electric guitars) had to be reconciled with symphonic orchestrations full of strings, brass, choirs, etc.

        To sort out this sonic tangle - and, as far as possible, to emphasize the main voices - positive EQ was used a great deal to bring out the most characteristic part of an instrument, which makes it possible to still hear it once it falls back into the ranks of the mix. The trouble is that the method is unworkable with sparser music, which requires that each element have its own existence, and thus a spectrum that sounds natural (even if it is just an illusion!).

        However, if a "peak" at a certain frequency can give a little character to a sound, it is sometimes rather a discreet attenuation at the right frequency that widens a sound or even cleans it up. For example, a female voice with very high notes often comes with an excess in the medium register (around 2 kHz), which the ear can also interpret as a lack of high and low registers. The complex relations between compression and equalization (temporal masking, multiband compression) deserve a discussion of their own, with examples to back them up.


        The Parametric
          Generally speaking, the use of parametric equalizers follows some empirical rules, especially if you need to preserve a natural feel or a spectral balance in the sound. Whatever the style of music being treated, this balance is essential to minimize listening fatigue. From this point of view, the adjustment of the bandwidth (Q) of a parametric equalizer proves to be critical, but in a different way depending on whether you are attenuating or boosting the frequencies concerned.

          Any attenuation over a broad frequency band (low Q value), for example, has to be done lightly, because a "hole" opened in the range of harmonics is immediately perceived by the ear as unnatural. On the other hand, by tightening the band it is possible to "surgically" attenuate an invading frequency without arousing the suspicion of sensitive eardrums. For boosting, it is exactly the opposite: strong amplification of a broad, well-chosen portion of the spectrum is better accepted than the same boost on a very tight band, where once again the ear objects. This is easily explained by looking at the domain of "real" sounds (the alarm clock in the morning, for example...).
          Boosting a broad band of the spectrum is roughly equivalent to attenuating the low and high registers, which is what happens naturally as you move away from the source. On the other hand, a single harmonic jutting out above the rest, as produced by a narrow parametric boost, has no real-world equivalent, except perhaps an ear pressed against a pipe, a badly built stringed instrument, or simply the acoustics of an unsuitable room...
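          This rule of thumb - cut narrow, boost wide - is easy to check numerically. The following minimal sketch (Python with numpy and scipy assumed, settings purely illustrative) compares the magnitude response of a broad boost and a narrow cut built from the same RBJ peaking-filter formulas:

import numpy as np
from scipy.signal import freqz

def peaking_coeffs(fs, f0, gain_db, q):
    """RBJ 'Audio EQ Cookbook' peaking-filter coefficients."""
    a_gain = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_gain, -2 * np.cos(w0), 1 - alpha * a_gain])
    a = np.array([1 + alpha / a_gain, -2 * np.cos(w0), 1 - alpha / a_gain])
    return b / a[0], a / a[0]

fs = 44100
probe = np.array([1000, 1500, 2000, 3000, 4000])        # Hz
for label, gain_db, q in [("broad +6 dB boost", 6.0, 0.7),
                          ("narrow -6 dB cut ", -6.0, 8.0)]:
    b, a = peaking_coeffs(fs, f0=2000, gain_db=gain_db, q=q)
    w, h = freqz(b, a, worN=2 * np.pi * probe / fs)     # rad/sample
    print(label, np.round(20 * np.log10(np.abs(h)), 1))
# The broad boost spreads over the neighbouring frequencies; the
# narrow cut barely touches anything outside 2 kHz.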


          Anecdote
            A small real-life anecdote illustrating the sometimes important artistic role of equalization: at the beginning of a recording session in a studio, the musician plays the engineer a tape of the home studio version of a piece.

            What is immediately apparent is an enormous bass sound that constitutes the pillar of the piece and determines its overall atmosphere. After hooking up the various sound modules and individually adjusting all the sounds before multitrack recording - surprise! The bass is now just a shrill sound, whose tone is closer to an oboe than to a Moog! After some perplexity and cable checking, a discussion finally reveals that the "magic" bass sound of the home studio recording was obtained through the equalizer of a digital console, with totally extreme settings that are impossible to duplicate on a traditional analog console!

            Not all equalizers are interchangeable, and any piece of equipment pushed to its limits leaves its unique "print" on the sound... for better or worse! Thus, when possible, it is better to record a sound together with its equalization whenever that EQ is decisive for the character of the piece. Conversely, it is obviously better not to cut heavily into the spectrum of a sound while recording if you are not yet sure of the desired final sound structure...

