
LOUDNESS WAR IN DANCE MUSIC

For the past six to seven decades, a phantom loudness war has been fought amongst the strongest and mightiest mastering engineers. Its most infamous flashpoint came when Metallica’s mastering engineer pushed everything up a notch and made the audio louder, at the cost of its dynamics. Listeners, in turn, enjoyed the record, because at the time what this mastering engineer did was not yet seen as the heinous crime it is today.

He exploited psychoacoustics, which in turn made people like that album more than any other album that reigned during the era. And seriously, I am not kidding or exaggerating. This is what the war has long been about: killing the dynamics, saturating the signal to the point where every sound is “warmer” than ever, and intentionally clipping it just to make it louder.

It is stupid because this kind of music was never recorded that way in the first place; the way those instruments sounded at the time of recording, or while being played live, was completely different from how they sound after this style of mixing and mastering is applied. Some might say the practice is good and benefits the general audience that listens to today’s loud music, but that is not how every engineer looks at it.

It has done far more damage than one might think. But keeping my personal opinions aside, let’s get into why the loudness war is bad for the future of the music industry, and why streaming services tried to end it by implementing loudness normalization algorithms in their services. So, let us start.

Why did the loudness war start in the first place?

The loudness war started because mastering engineers began exploiting a basic finding of psychoacoustics: if something is louder, it is perceived as better. They reasoned that if they took the loudness of a client’s track up a notch, it would attract more artists to get their tracks mastered by them and hence gain more traction. The most infamous flashpoint was Metallica’s album Death Magnetic, by which time every mastering engineer was putting more effort into squashing the sound, killing the dynamics, and at times intentionally clipping and saturating the signal so that the audio sounds louder than it conventionally would.

After that, the mastering engineers went to war with one another. Obviously not with guns, arms, and ammunition, but with the loudness they could achieve on their clients’ tracks. They kept their methods secret, guarding how they made a track sound far louder than it conventionally would, just to hold a monopoly in the business. But in the process, they forgot the whole essence of music, which was never about loudness; now it is about nothing else.

Even after so many years, we are still at it, fighting one another on the assumption that a heavily pushed master will win more fans and more traction. But in the process, every track has been getting louder and louder, and by now there is simply no headroom left to increase the loudness any further.


Why is mastering a loud track bad for music?

To start with, music was never supposed to be about loudness. It was certainly technical from the beginning, in terms of the mixing and mastering process, but somewhere along the way the primary focus of mastering shifted from “making the track compatible with the medium” to “making it as loud as we possibly can”.

To understand why taking part in the loudness war is such a heinous crime, we first need to understand the process, which involves certain steps to make a track sound louder.

Compression:

We all know why we use audio compressors in the first place. The main utility of a compressor is to make audio more consistent in amplitude. Another is to tame the transients (the initial burst of energy in a sound, occurring within roughly the first 10 to 20 ms); by taming either the transient or the body, a compressor can act like a transient shaper. Lately, though, compressors are also being used simply to increase gain.

For instance, if I want to increase the RMS value of an audio signal, I can do it with a compressor by compressing the element quite aggressively and then pushing up the makeup gain, killing the dynamics entirely to make it sound loud.
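To make that concrete, here is a minimal sketch in Python (assuming NumPy; a simplified static-gain compressor, not any particular plugin) showing how aggressive compression plus makeup gain inflates the RMS level:

```python
import numpy as np

def rms_db(x: np.ndarray) -> float:
    """RMS level in dBFS."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

def compress(x: np.ndarray, threshold_db: float = -30.0,
             ratio: float = 8.0, makeup_db: float = 12.0) -> np.ndarray:
    """Static, instant-attack compressor: content above the threshold is
    scaled down by `ratio`, then the whole signal gets makeup gain."""
    level_db = 20 * np.log10(np.abs(x) + 1e-12)          # per-sample level
    over = np.maximum(level_db - threshold_db, 0.0)      # dB above threshold
    gain_db = -over * (1 - 1 / ratio) + makeup_db        # reduction + makeup
    return x * 10 ** (gain_db / 20)

# A decaying "drum hit": strong transient, quiet tail.
sr = 44_100
t = np.linspace(0, 1, sr, endpoint=False)
hit = np.sin(2 * np.pi * 110 * t) * np.exp(-6 * t)

print(f"dry RMS:        {rms_db(hit):6.1f} dBFS")
print(f"compressed RMS: {rms_db(compress(hit)):6.1f} dBFS")  # several dB hotter
```

The transient is shaved down, the tail is boosted, and the average level climbs while the dynamic contrast between the two disappears.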

Saturation:

With saturation, more is less. That has long been the working notion for those of us who are producers and mixing engineers. Saturation is indeed useful for increasing the harmonic content of an audio signal, and hence its RMS (root-mean-square) value. But using it purely to make things louder deteriorates the production: we keep adding saturation until the sound becomes heavier and thicker and is extremely harsh across the frequency range. Saturation in moderation is good, but not on every element, because that was never the purpose of the sound that was recorded or synthesized.
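As a rough illustration, here is a sketch using a tanh waveshaper as a stand-in for any saturator plugin: driving it harder adds harmonics and pushes the RMS up even though the peak stays the same.

```python
import numpy as np

def rms_db(x: np.ndarray) -> float:
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

def saturate(x: np.ndarray, drive: float) -> np.ndarray:
    """tanh waveshaper; peak re-normalized so only the density changes."""
    y = np.tanh(drive * x)
    return y / np.max(np.abs(y))

sr = 44_100
t = np.linspace(0, 1, sr, endpoint=False)
sine = 0.9 * np.sin(2 * np.pi * 100 * t)   # pure 100 Hz tone

for drive in (1.0, 3.0, 10.0):
    print(f"drive {drive:4.1f}: RMS = {rms_db(saturate(sine, drive)):5.1f} dBFS")
# Higher drive squares the wave toward odd harmonics (300, 500, 700 Hz ...)
# and pushes the RMS toward 0 dBFS: louder, but harsher.
```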

Bus compression:

As if insert and parallel compression weren’t already enough, another stage of compression gets added during mixing. Although the idea of this compression is just to glue things together, engineers have been using it not only to glue the elements but also to squeeze out another 1 or 2 dB of loudness, as the sketch below shows.
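A minimal sketch, with purely illustrative settings, of how gentle 2:1 compression with a little makeup gain on a summed bus buys that extra level:

```python
import numpy as np

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

def bus_compress(x, threshold_db=-12.0, ratio=2.0, makeup_db=2.0):
    """Gentle static 2:1 'glue' compression on the summed mix."""
    level_db = 20 * np.log10(np.abs(x) + 1e-12)
    over = np.maximum(level_db - threshold_db, 0.0)
    return x * 10 ** ((-over * (1 - 1 / ratio) + makeup_db) / 20)

# Two toy stems summed to a mix bus.
sr = 44_100
t = np.linspace(0, 2, 2 * sr, endpoint=False)
drums = np.sin(2 * np.pi * 60 * t) * np.sin(2 * np.pi * 2 * t) ** 8  # pulsing
bass = 0.4 * np.sin(2 * np.pi * 55 * t)
mix = drums + bass

print(f"mix RMS:   {rms_db(mix):5.1f} dBFS")
print(f"glued RMS: {rms_db(bus_compress(mix)):5.1f} dBFS")  # roughly 1-2 dB hotter
```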

Maximizers:

Maximizers are used to enhance the loudness of a track. This is said to be the final stage of mastering, and it decides the final level of your track. Because a lot of people have a tough time reaching industry-standard mastering levels, they keep pushing more gain into the maximizer to the point where, without realizing it, they start adding distortion to the overall signal. At that point a music producer has no option left but to rely entirely on a mastering engineer for loudness, and in the process many artists, producers, and even audio engineers get frustrated.
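Here is a deliberately crude maximizer sketch (real maximizers use lookahead limiting rather than hard clipping): pushing more gain into a fixed ceiling is transparent at first, then starts flattening samples, which is exactly the distortion described above.

```python
import numpy as np

def maximize(x: np.ndarray, gain_db: float, ceiling: float = 0.98) -> np.ndarray:
    """Boost the input, then hard-limit it at a fixed ceiling."""
    boosted = x * 10 ** (gain_db / 20)
    return np.clip(boosted, -ceiling, ceiling)   # brickwall, no lookahead

sr = 44_100
t = np.linspace(0, 1, sr, endpoint=False)
x = 0.5 * np.sin(2 * np.pi * 220 * t)

for gain in (3, 6, 12):
    y = maximize(x, gain)
    flattened = np.mean(np.abs(y) >= 0.98) * 100
    print(f"+{gain:2d} dB in: {flattened:5.1f}% of samples flattened at the ceiling")
```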

Clipping:

A lot of engineers intentionally clip the audio signal in the digital domain just to attain loudness. Virtual Riot has shown in one of his videos that he intentionally pushed his track into the red, partly because he couldn’t otherwise achieve that kind of loudness and partly because he liked the sound of it. In either case, precautions should be taken: if you plan to play such tracks on a loudspeaker setup driven by digital mixers, chances are you could damage the entire speaker system.
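A small sketch of what intentional digital clipping does to the signal: hard-clip a sine and the flattened tops show up as added odd harmonics in the spectrum.

```python
import numpy as np

sr = 44_100
t = np.arange(sr) / sr
clean = np.sin(2 * np.pi * 100 * t)
clipped = np.clip(1.5 * clean, -1.0, 1.0)        # driven ~3.5 dB into the red

# Inspect the spectrum: clipping a pure tone manufactures odd harmonics.
spectrum = np.abs(np.fft.rfft(clipped)) / (sr / 2)
freqs = np.fft.rfftfreq(sr, 1 / sr)
for f in (100, 300, 500, 700):                   # fundamental + odd harmonics
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f:3d} Hz: {20 * np.log10(spectrum[idx] + 1e-12):6.1f} dB")
```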


"When we do all the above things on a track to make the track louder, the dynamic range (which is the difference between the loudest part and softest part of your audio) is crushed. The noise floor increases exponentially, even if you haven’t used any of the recorded samples or elements, the signals which are digitally produced also have some inherent noise that we can never get rid of, which is called the “Quantization Noise.”

 

There are simply too many complications that may or may not occur depending upon the signal you’re mastering, and so, for me, when we kill the dynamics and over-compress the signal, we kill the main essence and purpose of the music. And it wasn’t just me: plenty of listeners were getting frustrated as they moved between tracks from different artists whose loudness was either far too high or far too low. The sudden blast of extreme loudness when jumping to the next track forced users to constantly monitor the level, fingers hovering over the volume rocker.


To explain this, let’s use a hypothetical situation in which you are the end-user, listening on your phone to a track from the country genre.

 

Now, we all know that country music is not that loud, and it carries far less harmonic distortion than most dance music, hard rock, or metal. The moment you press the next button in your phone’s app to listen to the next song, you suddenly hear an explosion of sound from your earphones. Irritated, you immediately and desperately reach for the volume rocker to turn down the next track, which happens to be metal or hard rock.

The thing about streaming services was that they wanted the end-user to be completely free of this chore. They wanted the user not to worry about the levelling of the audio, so that once they start a playlist, say in a gym or while driving on a road trip, they don’t have to constantly worry about the loudness. That is why loudness normalization was brought into the streaming music industry in the first place.


What is Loudness Normalization?

The streaming services wanted to get rid of the varying loudness of different tracks, so they needed a way to ensure that the loudness of one track would not differ from the loudness of the next. The metering system that can measure perceived loudness is LUFS (Loudness Units relative to Full Scale). The underlying algorithm was standardized by the ITU as BS.1770 and adopted by the EBU (the European Broadcasting Union) in its R128 recommendation, and it was first used by broadcasters and their regulators to keep programme loudness consistent on air. It is an audio metering tool that measures the actual perceived loudness of the overall track.
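For readers who want to meter this themselves, here is one way to measure integrated loudness in Python, using the third-party pyloudnorm package (an implementation of the BS.1770 algorithm); the sine and its level here are just placeholders for a real mix loaded from disk.

```python
import numpy as np
import pyloudnorm as pyln   # pip install pyloudnorm

sr = 48_000
t = np.arange(5 * sr) / sr
track = 0.3 * np.sin(2 * np.pi * 440 * t)   # stand-in for a real mix

meter = pyln.Meter(sr)                      # BS.1770 K-weighted meter
print(f"integrated loudness: {meter.integrated_loudness(track):.1f} LUFS")
```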

One of the first streaming services to incorporate loudness normalization was Apple Music, which plays back every track on its service at around -16 LUFS (its Sound Check feature). Mastering engineers therefore no longer control playback loudness the way they used to. This was done deliberately, with consent and suggestions sought from various well-renowned mastering engineers and software developers. Sooner than we could have imagined, the approach was adopted by most of the other streaming platforms, such as Deezer, Spotify, YouTube, Tidal, and Pandora. It was done solely to make loudness uniform, but involuntarily it also ended the longest war in the history of the audio industry: the Loudness War.


Now, mastering engineers don’t have any control over the playback loudness of any track. They can still master a track as loud as they possibly can, but it will simply be turned down to the same loudness as a track that is not as loud.
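The arithmetic behind this is simple: the service measures each track’s integrated loudness and applies whatever gain is needed to hit its target, so a hotter master just receives a bigger cut. A sketch with illustrative numbers (Spotify documents a -14 LUFS default target; Apple Music sits around -16 LUFS):

```python
# Hypothetical target and measured values for two masters of the same song.
TARGET_LUFS = -14.0   # e.g. Spotify's documented default

for name, measured in (("dynamic master", -14.0), ("crushed master", -7.0)):
    gain_db = TARGET_LUFS - measured         # playback gain the service applies
    print(f"{name}: {measured:6.1f} LUFS -> playback gain {gain_db:+.1f} dB")
# dynamic master: played back as-is; crushed master: turned down 7 dB,
# leaving only its lost dynamics audible.
```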

 

I was very intrigued by the idea of loudness normalization because most of my clients don’t have a problem with my mixes; it’s the loudness of the master that they comment on most often, and I do get negative reviews about it all the time. So I really wanted to test and compare how an unbearably loud track sounds under loudness normalization, versus a track with plenty of dynamic range, not heavily crushed by compressors and nowhere near as loud.

Trust me when I say this: the dynamic track sounded a lot better. This alone is reason enough for loudness normalization to be adopted on every platform. The original essence of the music comes through. And hence, it would be fair to say that the loudness war has finally ended.


Coming back to why it was necessary to know about the loudness war: because now you can master your tracks for dynamics rather than sheer loudness. The sooner you realize this, the better your tracks will fare on these streaming services. Without worrying about how loud the track will play back, you can focus more on the mixing side of it. I hope you have enjoyed reading this blog. I will see you in another one; thanks for reading this one.


Know more about Lost Stories Academy - India's Premier DJ, Music Production School