Before discussing equalization or compression, it's essential to understand what frequency is. Frequency is a fancy physics term that helps us describe what sound is and how it works.
Sound moves in waves, and frequency describes how many of those waves occur each second - that's what you perceive as pitch. Low pitched sounds are made of relatively few, longer waves each second, while high pitched sounds pack in hundreds or even thousands of shorter ones. To describe the number of waves per second a pitch requires, you use the word frequency. A sound with a frequency of 100Hz means that the sound creates 100 waves per second. (Hz stands for "Hertz," the technical unit of measurement for the number of waves per second.)
You may know some specific frequencies offhand. For example, if you've ever heard an orchestra tuning their instruments to the same note, that note (A4) creates exactly 440 waves per second (440Hz). If you were to sing that same note, your voice would also create a sound wave that vibrated at 440Hz.
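To make this concrete, here's a minimal sketch in Python using NumPy (the sample rate and helper name here are my own choices, not from any particular tool) that generates that same A4 note as a pure 440Hz sine wave:

```python
import numpy as np

SAMPLE_RATE = 48000  # samples per second, a common recording rate

def sine_wave(frequency_hz, duration_s, sample_rate=SAMPLE_RATE):
    """Generate a pure tone: a single sine wave at one frequency."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    return np.sin(2 * np.pi * frequency_hz * t)

# A4, the orchestra tuning note: 440 waves per second.
a440 = sine_wave(440, duration_s=1.0)

# One full wave crosses zero twice, so a 440Hz tone should cross
# zero roughly 880 times in one second.
crossings = np.sum(np.diff(np.sign(a440)) != 0)
```

Counting the zero crossings is just a sanity check that the wave really does repeat 440 times per second.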
Most of the time, the sounds you hear are actually made up of multiple frequencies stacked on top of each other. Depending on how an instrument (or even a person's body) is shaped, a 440Hz sound will resonate through those shapes in unique ways, generating some extra sound at other frequencies (often called overtones or harmonics) along the way. This is why two instruments producing the same 440Hz note won't sound the same - they resonate differently.
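Stacked frequencies can be sketched the same way. The helper below (hypothetical name, NumPy assumed) builds two tones with the same 440Hz fundamental but different harmonic balances - same pitch, two different timbres:

```python
import numpy as np

SAMPLE_RATE = 48000

def tone_with_harmonics(fundamental_hz, harmonic_gains, duration_s=1.0):
    """Stack a fundamental and its harmonics (integer multiples)
    at the given gains, simulating an instrument's resonances."""
    t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
    signal = np.zeros_like(t)
    for n, gain in enumerate(harmonic_gains, start=1):
        signal += gain * np.sin(2 * np.pi * fundamental_hz * n * t)
    return signal

# Two "instruments" playing the same 440Hz note with different
# harmonic balances: both are heard as A4, but they sound different.
bright = tone_with_harmonics(440, [1.0, 0.8, 0.6, 0.4])
mellow = tone_with_harmonics(440, [1.0, 0.2, 0.05, 0.0])
```

Both signals share the same strongest frequency (440Hz), so they have the same pitch; only the balance of the extra frequencies changes.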
Equalization, which we tend to call EQ, is all about managing frequencies. Specifically, EQ has to do with increasing or decreasing the volume of only some frequencies.
For example, if you want to hear more of the lower and warmer frequencies in your voice, you would use EQ to increase the volume of just those lower frequencies. Or if you realize your "S"s sound really sharp and harsh in recordings, you might use EQ to isolate what frequency your "S" sounds appear at, and pull that frequency down in volume so it is at the same level as everything else.
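As a rough illustration of "turning some frequencies up or down," here's a toy sketch in Python (NumPy assumed). Real EQ plugins use smooth filters rather than hard FFT edits, and the function name and band choices are mine, but the idea is the same:

```python
import numpy as np

def eq_band_gain(signal, sample_rate, low_hz, high_hz, gain_db):
    """Toy EQ: change the volume of one frequency band by gain_db.

    Real EQs use smooth filters; this hard FFT edit is only meant
    to show the concept of adjusting some frequencies and not others.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / sample_rate)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    spectrum[band] *= 10 ** (gain_db / 20)  # convert dB to a linear gain
    return np.fft.irfft(spectrum, n=len(signal))

# Harsh "S" sounds often sit in the 5-8kHz range; pull that band
# down 6dB while leaving the low frequencies untouched.
sr = 48000
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 6000 * t)
de_essed = eq_band_gain(voice, sr, 5000, 8000, gain_db=-6.0)
```

The 6000Hz component comes out roughly half as loud (a 6dB cut), while the 200Hz component is untouched.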
Most EQ'ing is done using graphical plugins, which are included with your audio editing software. Even Audacity includes a perfectly functional EQ tool! I'll use that as an example below, since everyone should have access to it.
Notice that this EQ plugin looks a bit like a graph. (You might also hear this type of plugin called a "Graphical EQ," because we have a visual interface to work with instead of a bunch of knobs.) On the left side you should see volume (or sound intensity) units (plus or minus dB), and along the bottom you'll see the range of frequencies the human ear can detect (usually 20-20,000Hz). There should be a straight line running through the center of the graph that lines up with 0dB. This represents your audio as it currently exists, with no adjustments made to it.
Knowing which frequencies to increase and which to decrease, and how much of an adjustment to make, will always depend on your voice, your microphone, and your recording space. Just copying someone else's settings is not always going to actually help you sound better. How to find out what those frequencies are is out of scope for this article, but it typically involves playing with your own EQ settings and figuring out what sounds best to you.
Compression is primarily used for volume management. Whenever someone listens to a lot of audio, especially over a longer period of time, they are typically going to enjoy a consistent level of sound. Your listener doesn't want to be turning the volume up and down constantly to accommodate the highs and lows of your audio. Compression is one of many ways to deal with this problem, but it's one of the most common.
Compression works by constantly monitoring your audio signal as it plays. Whenever that signal hits a specific volume level (called the threshold), it automatically starts to pull down the volume of that signal, giving you a more even and consistent volume overall. Volume, or specifically the intensity of sound, is measured in a unit called decibels (dB). In digital audio, levels are measured relative to full scale, so you should never see anything over 0dB: anything that reaches 0dB is peaking and no longer accurately measurable by your recording device. Best practice for audio is for it to never exceed -0.5dB, though some will argue for -2 or -3dB depending on the application. It's common for raw spoken dialogue to fall around -15 to -10dB on average. Everything above that average is headroom to accommodate more dynamic spikes in performance.
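Here's a quick sketch of how a decibel reading is calculated from a digital signal (NumPy assumed; the helper name is mine):

```python
import numpy as np

def peak_dbfs(signal):
    """Peak level in dB relative to digital full scale.

    0dB is the loudest value the format can store, so a properly
    recorded signal always measures as a negative number.
    """
    peak = np.max(np.abs(signal))
    return 20 * np.log10(peak) if peak > 0 else -np.inf

full_scale = np.array([0.0, 1.0, -1.0, 0.5])  # touches the maximum: 0dB
half_scale = full_scale * 0.5                 # half the amplitude: about -6dB
```

Halving a signal's amplitude drops its peak by about 6dB, which is why decibel numbers shrink so quickly as audio gets quieter.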
The Ratio is how much the volume (or intensity of the sound) is decreased once the compressor starts working. When your signal crosses the threshold, everything above the threshold is reduced based on the ratio you set. For example, a ratio of 2:1 means that for every 2dB your signal rises above the threshold, only 1dB of it comes through - the excess over the threshold is cut in half. At 3:1, the excess is reduced to a third. The higher the ratio, the more reduction is applied. With that in mind, if you have a lot of extreme high volume spikes, you might want a higher ratio (a higher number on the left side of the ratio) to keep those under control.
As a note, a very high ratio - something like 100:1 - is called a limiter because the volume past the threshold is reduced so much that it will barely exceed the decibel value of the threshold.
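The ratio math can be written out directly. This is a sketch of the static compressor curve (the function name is mine), including the limiter-like behavior of a very high ratio:

```python
def compress_level(input_db, threshold_db, ratio):
    """Static compressor curve: below the threshold the signal passes
    through untouched; above it, the amount over the threshold is
    divided by the ratio."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

# Threshold at -15dB, ratio 2:1: a -5dB input is 10dB over the
# threshold, so the excess is halved and the output lands at -10dB.
print(compress_level(-5, threshold_db=-15, ratio=2))   # -10.0

# A 100:1 ratio acts like a limiter: even a 0dB input only comes
# out a tiny fraction of a dB above the -15dB threshold.
print(compress_level(0, threshold_db=-15, ratio=100))
```

Signals below the threshold pass through unchanged; only the excess above it is turned down.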
Attack and Release times control how quickly the compressor reacts: the attack time is how quickly it starts reducing volume after the signal passes the threshold, and the release time is how quickly it lets go once the signal drops back below it. Having a long attack time or a short release time can cause what some people describe as a "pumping" sound that feels very unnatural.
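One common way a compressor tracks level with separate attack and release speeds is a one-pole envelope follower. This is only a sketch (the coefficient formula is a standard simplification, and all names here are mine):

```python
import math

def envelope(samples, sample_rate, attack_ms, release_ms):
    """Track the signal's level with separate attack and release speeds.

    A short attack reacts quickly when the level rises; a long release
    lets the tracked level fall back down slowly. A compressor uses an
    envelope like this to decide how much gain reduction to apply.
    """
    attack_coeff = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release_coeff = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    level = 0.0
    out = []
    for x in samples:
        target = abs(x)
        # Rising level -> use the (fast) attack; falling -> (slow) release.
        coeff = attack_coeff if target > level else release_coeff
        level = coeff * level + (1.0 - coeff) * target
        out.append(level)
    return out

# A short loud burst: the envelope jumps up quickly (fast attack) and
# then decays gradually (slow release) instead of dropping to zero.
burst = [1.0] * 100 + [0.0] * 1000
env = envelope(burst, sample_rate=48000, attack_ms=1.0, release_ms=100.0)
```

If the release were too short, the envelope would chase every tiny dip in the signal, which is part of what creates that "pumping" effect.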
Finally, the Gain lets you increase the volume of the original signal before it hits the compressor. This is commonly used when your raw audio is quiet and you want to bring its volume up so that it lines up with the threshold. While you can always increase the volume of your raw audio separately, this simply gives you the option to do it as part of your compression process.
A basic compressor does not rely on frequency to tell it when to kick in - just the loudness of your audio signal. (Though there are some that do use frequency, called multiband compressors!) Because of that, it's pretty easy to use the same compression settings as someone else and have them work for you with only minor adjustments. There's no set level for specific voices or applications - it's usually something you figure out over time. I tend to place my threshold at the volume level where most of my regular speaking voice comes in; that way the compressor only starts pulling my volume down if I talk louder than average. That being said, it's still a good idea to try a few different settings until you find what sounds best for you and the sound you want to create.
Now that you understand how compression and equalization work, there is a final question about the signal chain, which refers to the order in which you use these plugins. While there are always exceptions, in almost every case you will want to use equalization before compression - in other words, apply your compressor AFTER you've equalized.
Remember that equalization involves adding to or removing volume from specific frequencies in a signal. This will always impact the overall volume of your signal, which is what your compressor relies on to work in the first place. After equalizing your audio, you have a brand new signal that should be run through a compressor again to accommodate those changes.
It is absolutely possible to use a compressor before you EQ so that you have a nice baseline to start from. However, once you use EQ, you will have a different audio signal. You will most likely want to add a second compressor AFTER the EQ to compensate for your changes.
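A quick numeric sketch of why (NumPy assumed, all names mine): boosting one band with a toy FFT-based EQ raises the signal's overall peak level, which is exactly the quantity a compressor reacts to.

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
# A stand-in "voice": some low warmth plus some midrange content.
signal = 0.4 * np.sin(2 * np.pi * 200 * t) + 0.3 * np.sin(2 * np.pi * 2000 * t)

def boost_band(x, sample_rate, low_hz, high_hz, gain_db):
    """Toy EQ boost: raise one frequency band by gain_db (FFT-based sketch)."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / sample_rate)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    spectrum[band] *= 10 ** (gain_db / 20)
    return np.fft.irfft(spectrum, n=len(x))

before_db = 20 * np.log10(np.max(np.abs(signal)))
after = boost_band(signal, sr, 100, 500, gain_db=6.0)  # warm up the lows
after_db = 20 * np.log10(np.max(np.abs(after)))
# after_db is higher than before_db: the EQ'd signal now crosses a
# compressor threshold that the original signal stayed under.
```

A compressor tuned to the original signal's levels would behave differently on the EQ'd version, which is why re-compressing after EQ makes sense.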
Go Forth and Create Good Audio!
Congratulations for making it this far! Hopefully this article has demystified a little more of the audio production process for you and made these tools more accessible to your workflow. Don't be afraid to use them; just use them knowledgeably!