Publisher Profile

The Appropriate Pre-master

By:

[Publisher’s note: This article is presented by Dagogo Senior Reviewer David Blumenstein.]

Always Learning

Having been a self-taught mastering engineer for the last 22 years (among other things), I’ve had the chance to learn some simple things independently which are not necessarily the same as what you would receive with formal training.

Much of this learning has been shared in my books “Desktop Mastering” and “Beyond Mastering” published by Hal Leonard, but those came out at the beginning of the last decade. There are some topics in the books that I’d like to revise for a couple of reasons – one, I’ve learned more since then, and two, technology has moved forward – a lot.

While writing those books, I was very much a dogmatic believer in the Nyquist Theorem. My understanding at the time led me to believe that high-resolution audio beyond 44.1k was unnecessary (though 24-bit pre-masters were still required). This changed significantly a few years back; I detail my high-resolution epiphany in an essay I posted on Medium.

Today, I’d like to share one of the most important aspects of the music production chain – the appropriate pre-master. This is the (typically) stereo digital file that results from audio tracks being mixed and ready for mastering. This is also something I’ve learned much more about since I wrote the books.

The Stages of Music Production

There are generally four stages of production: Tracking, Mixing, Mastering and Distribution. Interestingly enough, these closely reflect the stages of baking a pie.

Tracking is like getting the ingredients together, cutting up the fruit. The fresher and cleaner the source material is, the better the pie will be.

Mixing is like, well, mixing. This is where all the components are blended together, placed in the pan and flavored with various spices. It is important to realize that the freshly mixed and prepared pie is not yet ready to eat – it still needs baking.

Mastering is the baking phase. One common issue with pre-masters the mastering engineer receives from mixing engineers is that they are the equivalent of half-baked pies. This is the result when excessive compression, limiting, and high levels on the pre-master make the mastering job more about restoration than enhancement. If the mixed file sounds like it is ready to go on the radio, it is probably not in an appropriate pre-mastered state.

The last stage, Distribution, is like the hot pie on the windowsill, drawing the audience and fans from far and wide.

Dynamic Range

Let’s begin with understanding the concept of dynamic range. In the standard definition, dynamic range in a system is the range of available amplitudes (recording or playback) from just above the noise floor to just below clipping. The levels that one sends to the recording medium should be above the noise floor and (especially with digital audio) below clipping.
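The textbook version of that range can be put in numbers. Here is a rough sketch of my own (not from the article) of the familiar "about 6 dB per bit" rule for an ideal linear PCM system, ignoring the small dither correction:

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of ideal linear PCM:
    roughly 6.02 dB per bit, i.e. 20 * log10(2**bits)."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3 dB for CD-style 16-bit
print(round(dynamic_range_db(24), 1))  # 144.5 dB for 24-bit
```

Real converters and media fall short of these ideals, but the numbers show why digital formats comfortably out-range vinyl and cassette.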

The high noise floor of vinyl records (and cassettes), compared with the much lower noise floor of digital audio, leaves those formats with poor dynamic range compared to CDs, for instance.

I have refined the definition of dynamic range for my purposes. I understand it to be “the softest sounds you can hear while the loudest sounds are happening.”

For example, consider adding reverb to an impactful sound: with poor dynamic range, the impact and the reverb play back at the same level for the length of the reverb’s decay, and then the sound just stops.

With good dynamic range, on the other hand, we hear the initial impact clearly and then the reverb tail fading off into the distance. The soft sounds in the mix and in the music stay soft while the loud sounds can stay loud. If the softest sounds are boosted to match the level of the loudest sounds we have a crushed mix with no dynamic range. This is one of the byproducts of the loudness wars – poor dynamic range. Fortunately, I think those times are now behind us.

The Recent History of Recording

The last fifty years have seen a steady evolution of recording technology and techniques. Pioneers design and learn to use bleeding-edge technologies (defined as all blade and no handle), forge a hilt for the blade, and hand it to their students and apprentices. This then becomes received wisdom, not to be messed with.

This set up some challenging situations, both for the students who became proficient and for the teachers who didn’t learn new methods as the times changed. Rules that apply to caterpillars do not apply to butterflies. When things change, our understanding of how to use those things must change as well.

For instance, a prominent engineer (who had mixed for decades using outboard gear and magnetic tape very successfully) shifted to mixing in the box using digital audio workstation software and faced some challenges from the change.

This engineer had learned their chops in the days of magnetic recording tape. The best results on that medium were achieved by sending high audio signal levels to the tape, using it as a form of compression (which was appropriate for the medium). Overdriving the magnetic tape pushed the average levels up and away from the tape-hiss noise floor that became apparent when printing softer levels.

Then digital recording arrived on the studio scene, with early systems that used 16-bit recording. With 16-bit audio, it is good practice to use as much of the amplitude headroom as you can (short of clipping) to avoid low-level quantization distortion.

Quantization distortion is the digital counterpart of tape hiss and the noise floor in analog recording – the softest sounds break up into a soft buzzing. (This is one of the purposes of dither: adding noise at the least significant bit to reduce audible distortion.)
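A small numpy experiment of my own (an illustration, not part of the article's workflow) makes the dither point concrete: a sine at a quarter of a 16-bit step rounds away to pure silence, while TPDF dither lets the tone survive, buried in the added noise but recoverable on average:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 48000
t = np.arange(n) / 48000.0
lsb = 1.0 / 32768.0                                 # one 16-bit quantization step
tone = 0.25 * lsb * np.sin(2 * np.pi * 1000 * t)    # a quarter-LSB sine: below half a step

def quantize_16bit(x, dither=False):
    s = x / lsb
    if dither:
        # TPDF dither: the sum of two uniform noises, +/- 1 LSB peak
        s = s + rng.uniform(-0.5, 0.5, len(s)) + rng.uniform(-0.5, 0.5, len(s))
    return np.round(s) * lsb

plain = quantize_16bit(tone)
dithered = quantize_16bit(tone, dither=True)

# Without dither, every sample of the sub-LSB tone rounds to zero:
print(np.max(np.abs(plain)))   # 0.0

# With dither, correlating against the sine recovers the tone (in LSBs):
recovered = 2.0 * np.mean(dithered * np.sin(2 * np.pi * 1000 * t)) / lsb
print(recovered)               # close to the original 0.25 LSB amplitude
```

The undithered path simply deletes the quiet signal; the dithered path trades it for a low, signal-independent noise floor, which is the better deal for our ears.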

Interestingly, the same workaround for tape and hiss also worked for 16-bit audio and quantization distortion – more level. The main difference with digital audio is that we really do not want to clip the digital domain.

The solution to overcome the problems of both of these recording mediums was to push as much audio level as possible for the best results.

Enter 24-bit audio

Now we fast forward to today’s audio workstations, with 24-bit digital audio recording.

24-bit digital audio allows 16.7 million different levels of amplitude that can be recorded for each sample. This is a significant difference from the 65,536 levels available on 16-bit systems. Remembering the evolution of graphics cards can help with this understanding; 16-bit cards had far fewer colors available compared to 24-bit (millions of colors) cards.

Here’s an exercise I use to help internalize the difference between 16 and 24 bits. If you draw a line between Seattle and Austin and mark it off in 16-bit increments, you’ll get a mark every 128 feet. That same line marked in 24-bit increments has six inches between each mark. That is a much finer resolution at which to place any given sample value.

Looked at a different way, a 24-bit audio file has to be turned down 48dB to match the resolution of a 16-bit audio file.
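The arithmetic behind those figures is quick to check with a back-of-the-envelope sketch:

```python
import math

steps_16 = 2 ** 16    # 65,536 amplitude values in a 16-bit sample
steps_24 = 2 ** 24    # 16,777,216 amplitude values in a 24-bit sample
print(steps_16, steps_24)

# The 8 extra bits are 256x more steps; in decibel terms:
extra_db = 20 * math.log10(steps_24 / steps_16)
print(round(extra_db, 1))   # 48.2 -- the "turn down 48dB" figure
```

The 256:1 ratio of steps is also why the Seattle-to-Austin marks shrink from 128 feet to six inches: 128 feet divided by 256 is exactly half a foot.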

Back to the engineer who was struggling to get good results in the box using techniques learned from magnetic tape and 16-bit recording. The levels of their pre-masters were way too hot, and the masters returned by other mastering engineers were sub-optimal. They called me to see if I could help, and I was able to explain what I saw missing in their understanding of the process.

The problem was dynamic range. In mastering, it is easy to make loud things louder, but not so easy to make soft things softer at the same time.

Over the years I’ve found that a softer pre-master allows for much better dynamic range (a 24-bit file has 48dB more headroom to work with before running into the quantization-distortion issues of 16-bit files). Keeping the softest sounds soft and expanding the louder sounds results in the best dynamic range.

The Appropriate Pre-master

These days, I request 24-bit pre-masters with peak levels at -9dB. Previous technologies would be very unforgiving of these levels and result in a pretty noisy end product. It is important to realize that these resolutions and levels are not really for human hearing but to provide enough steps to maintain the ratio between each successive sample and present an optimum level to the gain structure of the mastering chain.
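A quick way to check whether a mix actually peaks at that requested level is to measure its peak in dB relative to digital full scale (dBFS, where a full-scale sample is 1.0). A minimal numpy sketch of my own, assuming the audio is already loaded as a float array:

```python
import numpy as np

def peak_dbfs(samples: np.ndarray) -> float:
    """Peak level in dB relative to full scale (0 dBFS = 1.0)."""
    peak = float(np.max(np.abs(samples)))
    return 20 * np.log10(peak) if peak > 0 else float("-inf")

# Example: a sine scaled so its peak sits at the requested -9dB target.
t = np.linspace(0, 1, 48000, endpoint=False)
x = 10 ** (-9 / 20) * np.sin(2 * np.pi * 440 * t)
print(round(peak_dbfs(x), 1))   # -9.0
```

A pre-master reading well above this (peaks near 0 dBFS) is the "half-baked pie" case; one reading near -9 leaves the mastering chain the headroom it wants.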

Since mixing engineers come from a broad variety of backgrounds, from self-taught to rising through the ranks at the most prestigious studios, the quality of their pre-masters is all over the place as well. Compilation and tribute discs really drive this home: the pre-masters arrive in all states, and the mastering process then makes them fit together as a single release.

The starting point for my mastering chain’s gain structure is set by a script I run on each incoming track, so that everything starts at the right level: -9dB peaks and a 96k sampling rate.

My first stage as files come in (after checking each file for clipping or other problems that need to be dealt with before we even start) is to send them through an iZotope RX batch process that first shifts the bit depth to 32-bit float, which ensures accurate ratios between samples no matter what level they end up at.

Regardless of the sample rate and levels of the files I receive, the file is converted to 96k and normalized to -9dB peaks. The bit depth increase allows for overall levels to change significantly from whatever state the files came in with without changing the amplitude ratio between samples.
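As a rough sketch of what that normalization step does (my own numpy illustration, not iZotope’s actual process; the resample to 96k is omitted because it needs a dedicated resampler), note that a single gain change preserves the ratio between samples exactly:

```python
import numpy as np

TARGET_PEAK_DBFS = -9.0

def prepare_premaster(samples: np.ndarray) -> np.ndarray:
    """Promote to 32-bit float and normalize peaks to -9 dBFS.
    One overall gain change preserves inter-sample ratios."""
    x = samples.astype(np.float32)
    peak = float(np.max(np.abs(x)))
    if peak == 0.0:
        return x                          # silence: nothing to normalize
    target = 10 ** (TARGET_PEAK_DBFS / 20)    # ~0.3548 linear
    return x * np.float32(target / peak)

# A hot incoming mix with peaks near full scale...
loud = np.array([0.0, 0.5, -0.99, 0.25], dtype=np.float64)
out = prepare_premaster(loud)
# ...lands at the -9 dBFS target:
print(round(float(np.max(np.abs(out))), 4))   # 0.3548
```

The float promotion matters: scaling integer samples up or down would round each one independently, while float samples keep their relative values through any reasonable gain change.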

If a pre-master comes in very loud, then the quietest sounds start out much louder than good dynamic range would desire. However, if we start with the soft sounds as soft as we’d like to eventually end up with, the loud sounds can be enhanced and expanded to very high levels while leaving the soft sounds alone (besides applying appropriate noise reduction processes, which also increases dynamic range).

Getting these concepts across to the engineer helped them provide much softer pre-masters which allowed for a much louder master. In general, the softer the pre-master, the louder the result!

Another way to think about mastering is that it is akin to alchemy: turning lead (the pre-master) into gold (the master). When taking lead to an alchemist – make sure you bring pure lead, and not fake gold.


One Response to The Appropriate Pre-master

  1. Bill Benoit says:

    As I’m often mixing and “mastering” stuff myself (lol, I’m not a mastering engineer), it’s been difficult to NOT try to do both at the same time. While this can lead to many problems/bad habits, it has also helped me learn a few things, most notably the -9 target for the main mix buss (without anything on it). I’ve found that the mixes that sound best after I’ve applied compression, limiting, etc. end up around there when I turn off all that processing. Recently, I mixed a song that was going on a compilation; I put nothing on the main buss and it just ended up about -9 average-ish. I’m looking forward to hearing it after someone with real mastering experience masters it properly. This article gave insight into aspects I hadn’t thought about, and it was great to hear some verification of what I think I’m hearing. Thanks!
