NATALIE VAN SISTINE - VOICE ACTOR, AUDIO ENGINEER, WRITER

A Beginner's Guide to EQ and Compression

10/5/2018

Recently, I've noticed that a lot of new and amateur voice actors are using more audio editing tools to improve the quality of their recordings. While there are many fantastic tools that can give you an edge in delivering great audio to your clients, there is still some confusion and even outright misinformation being spread. How to use equalization and compression is the topic I see confused most often.

While the article below is not exhaustive, I'd like to help clarify why engineers use these tools and how. It's meant for beginners and non-engineers without a lot of prior audio editing experience who want to learn a ground-up approach.
Frequency

     Before discussing equalization or compression, it's essential to understand what frequency is. Frequency is a fancy physics term that helps us describe what sound is and how it works.

Sound moves in waves, and frequency describes how many of those waves occur each second - which is what determines the pitch you perceive. Low-pitched sounds are made of fewer, longer waves each second, while high-pitched sounds pack in hundreds or even thousands of waves. A sound with a frequency of 100Hz creates 100 waves per second. (Hz stands for "Hertz," the unit of measurement for the number of waves per second.)

You may know some specific frequencies offhand. For example, if you've ever heard an orchestra tuning their instruments to the same note, that note (A4) creates exactly 440 waves per second (440Hz). If you were to sing that same note, your voice would also create a sound wave moving at 440Hz.

Most of the time, the sounds you hear are actually made up of multiple frequencies stacked on top of each other. Depending on how an instrument - or even a person's body - is shaped, a 440Hz sound will resonate through those shapes in unique ways, generating extra sounds at different frequencies along the way. This is why every instrument that produces a 440Hz note won't sound the same: they resonate differently.
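
If you like seeing things concretely, here is a minimal sketch of that idea in code (Python with NumPy, used purely for illustration) - a 440Hz tone is just a wave that repeats 440 times every second:

```python
# A minimal sketch (Python + NumPy, assumed available) of what frequency means:
# a 440Hz tone is a wave that repeats 440 times every second.
import numpy as np

sample_rate = 44100                       # samples (measurements) per second
t = np.arange(sample_rate) / sample_rate  # one second of time points
a4 = np.sin(2 * np.pi * 440 * t)          # 440 waves per second = the note A4

# Stacking frequencies: adding a quieter 880Hz wave (one octave higher)
# changes the character of the tone, much like resonance does.
richer = a4 + 0.3 * np.sin(2 * np.pi * 880 * t)
```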
Equalization

     Equalization, which we tend to call EQ, is all about managing frequencies. Specifically, EQ has to do with increasing or decreasing the volume of only some frequencies.

For example, if you want to hear more of the lower, warmer frequencies in your voice, you would use EQ to increase the volume of just those lower frequencies. Or if you realize your "S"s sound really sharp and harsh in recordings, you might use EQ to isolate the frequency where your "S" sounds appear and pull it down in volume so it sits at the same level as everything else.

Most EQ'ing is done using graphical plugins, which are included with your audio editing software. Even Audacity includes a perfectly functional EQ tool! I'll use that as the example below, since everyone has access to it.
[Image: Audacity's EQ window - a flat 0dB line across a graph spanning 20-20,000Hz]
Notice that this EQ plugin looks a bit like a graph. (You might also hear this type of plugin called a "graphical EQ," because it gives you a visual interface to work with instead of a bunch of knobs.) On the left side you should see volume (or sound intensity) units (plus or minus dB), and along the bottom you'll see the range of frequencies the human ear can detect (usually 20-20,000Hz). There should be a straight line running through the center of the graph that lines up with 0dB. This represents your audio as it currently exists, with no adjustments made to it.

Let's take the example from earlier of wanting to emphasize the low tones of a recording while cutting out some sharp "S" sounds. The example below is somewhat exaggerated for demonstration purposes, but it represents the overall idea of how this would be accomplished with EQ. Notice that there is a curve along the 200-400Hz range that increases the volume of just those frequencies, followed by a really dramatic notch at about 6,000Hz to get rid of the hiss from the "S"s. In Audacity, this just involves clicking points on the line and dragging them into place. Depending on the software you use, the process might be a little different, and you may need to research how to accomplish the same effect.
[Image: an EQ curve with a boost around 200-400Hz and a sharp notch near 6,000Hz]
Knowing which frequencies to increase and which to decrease, and how much of an adjustment to make, will always depend on your voice, your microphone, and your recording space. Just copying someone else's settings is not always going to help you sound better. How to find those frequencies is out of scope for this article, but it typically involves playing with your own EQ settings and figuring out what sounds best to you.
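
For the curious, the same boost-and-notch idea can be sketched in code. This is only an illustration (Python with SciPy; the 300Hz boost center, +4dB gain, and 6,000Hz notch are hypothetical values standing in for whatever your own voice needs):

```python
# A rough sketch of EQ in code: boost the lows around 200-400Hz, then
# notch out a harsh "S" frequency near 6kHz. Values here are examples only.
import numpy as np
from scipy import signal

fs = 44100                  # sample rate in Hz
x = np.random.randn(fs)     # stand-in for one second of recorded audio

def peaking_boost(f0, gain_db, q, fs):
    """Peaking EQ biquad (Audio EQ Cookbook) that boosts around f0."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

b_boost, a_boost = peaking_boost(300, 4.0, 1.0, fs)   # warm up the lows
b_notch, a_notch = signal.iirnotch(6000, Q=8, fs=fs)  # narrow cut at 6kHz

y = signal.lfilter(b_boost, a_boost, x)  # apply the low boost
y = signal.lfilter(b_notch, a_notch, y)  # then tame the "S" hiss
```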
Compression

Compression is primarily used for volume management. When someone listens to a lot of audio, especially over a long period of time, they typically want a consistent level of sound. Your listener doesn't want to be turning the volume up and down constantly to accommodate the highs and lows of your audio. Compression is one of many ways to deal with this problem, but it's one of the most common.

Compression works by constantly monitoring your audio signal as it plays. Whenever that signal hits a specific volume level, the compressor automatically starts to pull the volume down, giving you a more even and consistent level overall. Volume, or more precisely the intensity of sound, is measured in a unit called decibels (dB). In digital audio, levels are measured relative to 0dB, so you should normally see only negative numbers. Anything that reaches 0dB or higher is peaking and can no longer be measured accurately by your recording device. Best practice is for audio to never exceed -0.5dB, though some will argue for -2 or -3dB depending on the application. It's common for raw spoken dialogue to fall around -15 to -10dB on average. Everything above that average is headroom to accommodate more dynamic spikes in performance.
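
To make the decibel numbers concrete, here's a tiny sketch (Python with NumPy, with made-up sample values) of how a peak level is calculated, where a raw sample of 1.0 represents 0dB full scale:

```python
# Peak level in dB: 20 * log10 of the largest absolute sample value,
# where samples run from -1.0 to 1.0 and 1.0 is the 0dB clipping point.
import numpy as np

def peak_db(samples):
    return 20 * np.log10(np.max(np.abs(samples)))

quiet = np.array([0.05, -0.03, 0.02])
loud = np.array([0.9, -0.95, 0.7])
print(peak_db(quiet))  # about -26dB: plenty of headroom
print(peak_db(loud))   # about -0.4dB: dangerously close to clipping
```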
     Compressors take a little more practice for beginners to get used to. Part of this is because different people prefer different settings - there's not a clear right or wrong, and you have to try some different things to find out what you prefer the most.

For example, the Threshold is the specific decibel level (such as -10dB) at which the compressor starts to work. If your threshold is set too high, you won't hear much of a difference except on the loudest peaks. If it's too low, your audio will start to sound muffled and squished together, and your louder moments will never have a chance to sound different from your performance at a normal volume. Your ear will pick up how weird that sounds, and you'll want to adjust the setting accordingly. Finding the right place to set your threshold is the most important part.
[Image: Audacity's compressor settings, including Threshold, Noise Floor, and Ratio sliders]
The Ratio is how much the volume (or intensity of the sound) is decreased once the compressor starts working. When your signal crosses the threshold, everything above the threshold is reduced according to the ratio you set. For example, a ratio of 2:1 means that for every 2dB your original signal goes past the threshold, only 1dB comes through - the overage is cut in half. With a ratio of 3:1, anything above the threshold is reduced to a third. The higher the ratio, the more reduction is applied. With that in mind, if you have a lot of extreme volume spikes, you might want a higher ratio (a higher number on the left side) to keep them under control.

    As a note, a very high ratio - something like 100:1 - is called a limiter because the volume past the threshold is reduced so much that it will barely exceed the decibel value of the threshold. 

     Tip: If you are just getting started and learning how to use a compressor, it's a good idea to use smaller ratios to start. Ratios between 2:1 and 4:1 are probably best!
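
If the math helps, here is the threshold-and-ratio arithmetic as a small sketch (plain Python; the -10dB threshold and 3:1 ratio are just example settings). Notice how a 100:1 ratio pins the output near the threshold, which is exactly the limiter behavior described above:

```python
# How much a compressor turns a level down, given a threshold and ratio.
def compressed_level(level_db, threshold_db=-10.0, ratio=3.0):
    if level_db <= threshold_db:
        return level_db                    # below threshold: left alone
    overage = level_db - threshold_db      # how far past the threshold
    return threshold_db + overage / ratio  # overage shrunk to 1/ratio

print(compressed_level(-4.0))               # -8.0: 6dB over becomes 2dB over
print(compressed_level(-4.0, ratio=100.0))  # -9.94: effectively limited
```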

Attack and Release times control how quickly the compressor reacts. The attack time is how quickly the compressor clamps down once a signal passes the threshold, and the release time is how quickly it lets go once the signal drops back below it. Setting these poorly - particularly a release time that is too short - can cause what some people describe as a "pumping" sound, which is very unnatural.
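
Under the hood, a compressor typically tracks a smoothed "envelope" of your signal, and the attack and release times set how fast that envelope rises and falls. Here's a minimal sketch of the idea (Python with NumPy; the 5ms attack and 100ms release are example values):

```python
import numpy as np

def envelope(samples, fs, attack_s=0.005, release_s=0.100):
    atk = np.exp(-1.0 / (attack_s * fs))   # smoothing while the level rises
    rel = np.exp(-1.0 / (release_s * fs))  # smoothing while the level falls
    env, out = 0.0, []
    for s in np.abs(samples):
        coef = atk if s > env else rel     # react quickly, let go gently
        env = coef * env + (1 - coef) * s
        out.append(env)
    return np.array(out)                   # the level the compressor "sees"
```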

Finally, the Gain lets you increase the volume of the original signal before it hits the compressor. This is commonly used when your raw audio is quiet and you want to bring it up so that it lines up with the threshold. While you can always increase the volume of your raw audio separately, this simply gives you the option to do it as part of your compression process.
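
In code terms, gain is just multiplication (a sketch with made-up numbers):

```python
import numpy as np

gain_db = 6.0
samples = np.array([0.1, -0.2, 0.3])      # quiet raw audio
boosted = samples * 10 ** (gain_db / 20)  # +6dB is roughly double the amplitude
```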

A basic compressor does not rely on frequency to tell it when to kick in - just the loudness of your audio signal. (Though there are some that do use frequency, called multiband compressors!) Because of that, it's pretty easy to use the same compression settings as someone else and have them work for you with only minor adjustments. There's no set level for specific voices or applications - it's usually something you figure out over time. I tend to place my threshold at the volume level where most of my regular speaking voice comes in, so that it only starts pulling my volume down if I talk louder than my average volume. That said, it's still a good idea to try a few different settings until you find what sounds best for you and the sound you want to create.
Signal Chain

Now that you understand how compression and equalization work, there is a final question about the signal chain, which refers to the order in which you use these plugins. While there are always exceptions, in almost every case you will use equalization before compression. More specifically, I always recommend applying a compressor AFTER you've used equalization.

Remember that equalization involves adding to or removing volume from specific frequencies in a signal. That will always impact the overall volume of your signal - the very thing your compressor relies on to do its job. After equalizing your audio, you have a brand new signal that should be run through a compressor to accommodate those changes.

It is absolutely possible to use a compressor before you EQ so that you have a nice baseline to start from. However, once you use EQ, you will have a different audio signal, and you will most likely want to add a second compressor AFTER the EQ to compensate for your changes.
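
Expressed as code, the recommended chain is simply function composition (a sketch; eq and compress are placeholders for whatever plugins or processing you actually use):

```python
def eq(audio):        # placeholder: boost/cut your chosen frequencies
    return audio

def compress(audio):  # placeholder: even out the volume
    return audio

raw_audio = [0.1, 0.5, -0.3]         # stand-in for your recording
processed = compress(eq(raw_audio))  # EQ first, then the compressor
```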
 Go Forth and Create Good Audio!

    
Congratulations on making it this far! Hopefully this article has demystified a little more of the audio production process for you and made these tools more accessible to your workflow. Don't be afraid to use them - just use them knowledgeably!
Comments
Joshua H
12/3/2018 10:51:36 pm

"Hopefully this article has demystified a little more of the audio production process for you...", yes, yes it has, Natalie. Thank you for this visually interesting read!
Your speech is plain. The segments and their titles made it easy to pay attention and follow everywhere you took me: These are frequencies. Equalization is how to manage each frequency. Compression is how to manage the volume of all frequencies. In most cases, do equalization first then compress. Be it video or article, your teaching prowess is strong!
The content is so well organized and clear to read, I just can't move on without pointing out typos. Natalie, you can totally erase this section once the errors are fixed.
Under heading "Equalization"; third paragraph: Even Audacity includes a perfectly [functional] EQ tool!
Under heading "Compression"; first paragraph: Compression is one of many ways to deal with this problem [; it's also] the most common.
Under heading "Compression"; last paragraph: I tend to place mine at the volume level where most of my regular speaking voice comes in, that way it only starts pulling my volume down if I start [talking] louder than the average volume.
Under heading "Signal Chain"; last paragraph: You will [most likely] want to add a second compressor AFTER the EQ to compensate for your changes.
I was left with a few questions, especially about compression. First, why wasn't the "noise floor" slider defined? I reasoned if "threshold" is the specific decibel level when the compressor starts to work, then noise floor must be the specific decibel level when the compressor stops working. Still, considering the audience, explaining everything would be best.
Another question involves music recording. Do compression and equalization for music tracks work the same as for voice? Understandably, the examples provided were for voice acting. I haven't seen any articles concerning music on your site; show me where if I've missed them. This would be a natural continuation in this brief series for beginners on these subjects.

Natalie Van Sistine
12/4/2018 07:14:11 am

Thank you for such a thorough and detailed comment! I really appreciate the suggestions (which I have implemented! You caught my dyslexic word-switching habit, haha), though I don't have the ability to edit them out of your comment myself.

To answer your questions though!

- Not every compressor has a "noise floor" function; that is a feature unique to the Audacity compressor (and potentially a few others). Most plugins tend to have a base set of features and then occasionally provide a few supplemental extras. A noise floor typically means that anything below that dB level is either reduced in volume or completely silenced. It is typically used to mute background noise between desired sounds in a recording. I'm assuming it functions the same here, but it's typical to explore and research a plugin's specifications if it includes functionality beyond expectation. Because this tutorial is about compressors very generically, I made the call not to go into detail about the Audacity-specific features.

- EQ and compression absolutely do work the same with music! Plugins can be used for any audio application, and music is never really considered separate. I don't have any tutorials on music because my training was in post-production and broadcast applications. I can work with music okay in a pinch, but I can't speak from any place of authority, so I've tried to keep my tutorials more directed as a result. That's not to say you can't apply this tutorial to music production, but I can't speak as directly to a musician's or music mixer's needs.

Let me know if that helps you at all and thank you again for your feedback!!

Ven S
7/23/2021 06:23:05 pm

Great explanation- Thank you! Clear, without becoming too technical. Searched through a lot of sites before I found this.

Just one other comment- You say that EQ should always come before compression, which makes sense, but please note that others such as the Berklee School of Music hold different views. See https://online.berklee.edu/takenote/eq-before-or-after-compression/



