Alex Mills is a Brisbane-based musician we recently recorded at SAE. I am using this recording to fine-tune advanced mixing techniques, including a 5.1 mix and, finally, some analogue mastering...
With a nice back-catalogue and a history of recording in studios around the world, it was a pleasure to work with Alex. He brought only himself and his guitar, along with some pre-arranged MIDI piano, drums, and bass. He has a beautiful voice, and is more than competent playing guitar to a click. I was excited to mix someone of this calibre, so I took the stems away from the day, and got to work producing the song.
My predominant aim for this mix was to finally produce a radio-quality song. Alex suggested a reference track, but as I was not working with him on this project, I aimed not to create his vision of the song, but mine. As such I wanted to employ advanced production techniques, pay particular attention to the vocal, practice MIDI drum composition, and finally master it to create a polished track more aligned with the pop-folk genre. Throughout this process I also aimed to apply my newly discovered realisation that less is more with respect to processing.
Consequently, I went straight for the vocal and got to work cleaning it up, this time to a higher standard than I have previously. This involved a lot of clip gain, particularly to help tame sibilance. I found this worked much better than the lazy way of just throwing a de-esser on it. Vassée (2016) comments that de-essers sometimes pick up the treble of non-sibilant words, pulling down some of the "glitter" of the vocal. He also suggested a great new technique that I couldn't quite get to work in this situation, but I hope to keep practicing with it in future. It involves duplicating the vocal track and nudging the copy roughly 50ms earlier in the session, then placing a de-esser on the main vocal track, keyed to the copy. "Every time a 's' hits on the copied signal it will activate the De-esser on the main signal 50ms before the actual 's' begins" (Vassée, 2016). Consequently, the de-esser reaches peak gain reduction before the 's' and removes the sibilance much more transparently. In this situation it detracted from Alex's performance a little more than I liked, so instead I took a trick from Guy Grey and copied one good 's' sound over every location where it was too harsh.
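To make sure I understood why the lookahead trick works, I sketched it in code. This is only an illustration of the idea, not any plugin's actual algorithm: the "sibilance detector" here is just the absolute level of the time-shifted copy, whereas a real de-esser would band-pass the key signal around 5-8kHz first and smooth the gain. All names and parameters are mine, for illustration.

```python
def lookahead_deess(vocal, sample_rate, lookahead_ms=50, threshold=0.5, ratio=4.0):
    """Sketch of the lookahead de-essing trick: key the gain reduction
    from a copy of the vocal read `lookahead_ms` ahead, so the de-esser
    is already pulling down before the 's' actually arrives.
    Simplification: the key is raw absolute level, with no band-pass
    or attack/release smoothing."""
    shift = int(sample_rate * lookahead_ms / 1000)
    out = []
    for n, x in enumerate(vocal):
        # key signal: the same vocal, `shift` samples in the future
        key = abs(vocal[n + shift]) if n + shift < len(vocal) else 0.0
        if key > threshold:
            # compress the key level above threshold by `ratio`,
            # and apply the resulting gain to the main signal
            gain = (threshold + (key - threshold) / ratio) / key
        else:
            gain = 1.0
        out.append(x * gain)
    return out
```

Running a quiet passage followed by a loud "s"-like burst through this shows the point: the gain drops 50ms before the burst starts, so the reduction is already in place when the sibilance hits.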
I also tried a new approach with EQ: rather than carving out the sound I wanted, I let the tracks remain as natural as possible, placing only a little sweetening EQ on both vocal tracks. I would consider this more of a mastering technique, but using it in the mixing stage led to a much more open, natural-sounding result, particularly when placed after the compressor. It's interesting to see how my mixing has changed now that I understand the mastering process.
Once the lead vocal was treated, I applied a similar strategy to the chorus and backing vocal tracks. One particular technique I found to work well was placing a panning plugin on both chorus tracks and setting them to modulate differently. I hoped this would have the effect of "spotlights in the sky at a movie premiere". Initially this was a little too obvious, so I reduced the effect. Interestingly though, once run through a mastering limiter it seemed to glue into the mix very well, so at that later stage I reintroduced the panning.
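The "spotlights" idea boils down to equal-power panning with the pan position swept by an LFO, and two tracks given different LFO rates or phases so they never move in lockstep. Here's a minimal sketch of one such panner (my own toy implementation, not any plugin's code):

```python
import math

def modulated_pan(mono, sample_rate, lfo_hz, phase=0.0):
    """Equal-power pan of a mono signal with the pan position swept by
    a sine LFO. Returns (left, right) sample lists. The pan angle
    sweeps between 0 (hard left) and pi/2 (hard right), so
    left**2 + right**2 always equals the input power."""
    left, right = [], []
    for n, s in enumerate(mono):
        lfo = math.sin(2 * math.pi * lfo_hz * n / sample_rate + phase)
        theta = (lfo + 1) * math.pi / 4  # map [-1, 1] -> [0, pi/2]
        left.append(s * math.cos(theta))
        right.append(s * math.sin(theta))
    return left, right
```

Running the two chorus tracks through this with, say, `lfo_hz=0.3` and `lfo_hz=0.45` (or the same rate with `phase=math.pi/2`) gives the independently sweeping movement described above, while the equal-power law keeps the perceived level constant as each voice moves.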
For the second verse I had two backing vocals panned hard to either side, and I noticed a moment when the "k" is pronounced in "black" that made it suddenly obvious they were even there. Instead of hiding this, I decided to highlight it, adding a cheeky easter egg into the song, and giving it some production value. To achieve this, I automated a delay to open up for just that consonant, set to quick (yet differing) delay times on either side, providing a little spark in the "room" that I thought really worked for the moment.
I attempted to add MIDI drums to the track, but after much playing around it sounded very low quality. To fix this I need to find some better drum samples that sound natural enough to fit this style of song. Had COVID not reared its head, I would have loved to spend more time at SAE using the Native Instruments Komplete package.
After much more mixing, experimenting, and research, I had my final mix and it was time to do some upmixing.
5.1 Surround
Finally I had a chance to practice some mixing in surround - something I had been looking forward to for months. I had been reading a lot about the history of this process, beginning with Walt Disney's Fantasia in multichannel sound in the 1940s. This saw the first consumer experience of sound enveloping an audience. The animated film toured the States with an orchestra and dedicated tech crew who mixed the musicians live, sending up to 8 discrete channels to loudspeakers surrounding the audience (Robjohns, 2001).
Since then, the industry has progressed towards ever more effective ways of delivering multiple discrete channels of audio. One of the key players, whose technology is still relevant and still developing today, is Dolby. Ray Dolby was an audio visionary who aimed to reduce the noise floor of 35mm film; A Clockwork Orange was the first film to use this technology (Dolby, 2020). His work then led to an improvement of the encoding and decoding matrices from quadrophonic sound, making them compatible with mono, stereo, and surround playback. Instead of four speakers in a square pattern, though, he found a better result from positioning L, C, and R speakers across the front and sending a rear channel to numerous surround speakers located behind the audience. Since then, Dolby has been a pioneer of the industry, providing larger and more accurate surround arrays, right up to present-day Dolby Atmos, which adds speakers in the ceiling.
Besides Dolby, Ambisonics was the other forerunner of surround technology, though until recently it has not been a commercial success. Unlike other multichannel surround formats, its transmission channels don't carry speaker signals. Instead, they contain a representation of the sound field called B-format, which is then decoded to the listener's speaker setup. This allows producers to think in terms of source directions rather than loudspeaker positions, and also offers the listener a considerable degree of flexibility in the layout and number of speakers used for playback. Ambisonics is known for its full-sphere format, including height information in the Z-axis (Gerzon, 1973).
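The "source directions rather than loudspeaker positions" idea is easiest to see in the first-order B-format encoding equations: W is the omni component (conventionally scaled by 1/√2 in the traditional weighting), and X, Y, Z are figure-8 components along the front, left, and up axes. A tiny per-sample encoder, as a sketch:

```python
import math

def encode_bformat(sample, azimuth_deg, elevation_deg):
    """Encode one mono sample to first-order Ambisonic B-format
    (W, X, Y, Z) using the traditional channel weighting:
      W = S / sqrt(2)               (omni)
      X = S * cos(az) * cos(el)     (front-back figure-8)
      Y = S * sin(az) * cos(el)     (left-right figure-8)
      Z = S * sin(el)               (up-down figure-8)
    Azimuth 0 = front, 90 = left; elevation 0 = horizon, 90 = up."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample / math.sqrt(2)
    x = sample * math.cos(az) * math.cos(el)
    y = sample * math.sin(az) * math.cos(el)
    z = sample * math.sin(el)
    return w, x, y, z
```

A source panned straight ahead lands entirely in W and X; rotate it to 90° azimuth and the energy moves into Y; raise it to 90° elevation and it moves into Z. The decoder's job is then to map these four channels onto whatever speaker layout the listener actually has.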
A slight side-track, but learning about Ambisonics also uncovered some new surround recording techniques for me. First I learnt of the famed Soundfield microphone, a tetrahedral array of capsules capturing sound from all axes with a near-flat frequency response, even off-axis. It was created in the 1970s by the creators of Ambisonics and still stacks up against current technology; so much so that RØDE released their own version of the mic as recently as 2018 (SOS, 2018).
Perhaps most interesting, though, was their creation of a surround recording technique, building upon Alan Blumlein's discoveries, which included height information. Dubbed the Nimbus-Halliday array, it features two figure-8 mics and an omnidirectional one, mounted coincidentally (Robjohns, 2001).
The issue with the Blumlein pair is that rear information is reproduced on the opposite side to where it originated; the Nimbus-Halliday array mitigates this issue.
But, I digress...
Surround is commonly thought of in terms of 5.1, but in actuality its capabilities are so much greater (particularly for Ambisonics setups). Even home setups are capable of 10.1 or more, especially with the new Dolby Atmos home theatre systems. Consumer access to these technologies was made possible by formats such as DTS, SDDS, MPEG, and Dolby Digital. The last of these is by far the most utilised, having rapidly taken over the market for numerous reasons, not least of which is its extremely compact way of storing up to six audio channels - although some professionals argue that Dolby Digital is not transparent (Demtschyna, 2010).
With a greater understanding of the surround world, I then researched how to advance my 5.1 mixing techniques. Here are some of the key take-aways:
Real vs Phantom Centre
As you by now know, sending an identical signal to both speakers in a stereo setup provides a central source for that sound: the phantom centre. With 5.1, however, there is a speaker dedicated to the centre, and the two techniques definitely do not provide the same outcome; in reality they sound quite different. For film mixing, the centre channel is used for dialogue, allowing it to easily cut through the rest of the mix, and many music engineers apply the same strategy to the lead vocal. However, it is often noted that instruments such as bass sound better coming from the phantom centre - not just because of the sound itself, but also because the bass is then not battling the vocal for space (Robjohns, 2002).
Surrounds
This is debatably the most commonly argued feature of surround. Some suggest the rears should be used solely for reverb and effects, providing the reflections necessary to imitate a real room. Others find it more enveloping to place instruments there, especially elements of percussion, or a slight divergence of an instrument from the front channels. The crucial point seems to be not putting anything in the rears obvious enough to distract listeners or make them want to turn around. Once again, it's a balancing act. As a general rule of thumb, running the surrounds 6dB lower is considered ideal, though again, this is subject to experimentation (Robjohns, 2002).
LFE
Bass management is the surround sound technology that directs low frequency content, irrespective of channel, only to loudspeakers capable of handling it. Although this is useful, it should most definitely not be relied upon: always create an LFE channel (Waves, 2017). The .1 in 5.1 represents information sent to a subwoofer/LFE. Originally, Dolby set their crossover at 120Hz, which is still fairly typical, though it can be as low as 80Hz (Robjohns, 2002). 120Hz worked for film applications at the time, but Dolby did not anticipate dance and pop music using these technologies. It is also important to note that, given the trend for home surround setups, the LFE should be used sparingly: these systems are typically not set up by a professional and consequently often won't be calibrated correctly.
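Conceptually, the crossover that feeds the sub is just a low-pass filter at 80-120Hz. As a toy illustration (real bass management uses much steeper filters, typically 4th-order Linkwitz-Riley, and this simple one-pole is only a sketch of the idea):

```python
import math

def lfe_lowpass(signal, sample_rate, crossover_hz=120.0):
    """One-pole low-pass as a rough sketch of an LFE crossover feed.
    Content below `crossover_hz` passes largely intact; higher
    frequencies are progressively attenuated. Real bass management
    uses steeper (e.g. 24dB/octave Linkwitz-Riley) filters."""
    # one-pole feedback coefficient derived from the crossover frequency
    a = math.exp(-2 * math.pi * crossover_hz / sample_rate)
    out, y = [], 0.0
    for x in signal:
        y = (1 - a) * x + a * y
        out.append(y)
    return out
```

Feeding this a very low-frequency signal (here, DC) shows it passing through at full level, while a signal near Nyquist is knocked down to almost nothing - which is exactly the split a bass-management stage performs before the sub.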
Compression & EQ
Interestingly, because mixing in 5.1 separates the instruments/channels more, less compression and EQ is required. There are fewer elements battling for space in the same area of the same frequency range, and nearly all the research I found suggests this is the first thing a mixing engineer notices when moving to surround.
Panning
Waves (2017) highly recommends not sending the same signal to more than one channel at the same level, as this results in comb filtering. As a solution, they suggest offsetting the levels, or delaying one copy of the signal by 12-48ms. Another mixing tip is to split stereo sounds into mono tracks and then mix them discretely (Robjohns, 2002).
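The comb-filtering warning is easy to quantify: when a signal sums acoustically with an equal-level delayed copy of itself, complete cancellations occur wherever the delay equals an odd number of half-periods, i.e. at f = (2k + 1) / (2 × delay). A small helper (my own, just to make the arithmetic concrete):

```python
def comb_null_frequencies(delay_ms, max_freq_hz=20000):
    """Notch frequencies produced when a signal sums with an
    equal-level copy of itself delayed by `delay_ms`. Nulls fall
    where the delay is an odd number of half-periods:
        f = (2k + 1) / (2 * delay)."""
    delay_s = delay_ms / 1000.0
    nulls = []
    k = 0
    while True:
        f = (2 * k + 1) / (2 * delay_s)
        if f > max_freq_hz:
            break
        nulls.append(f)
        k += 1
    return nulls
```

A 1ms offset (roughly what you get from slightly mismatched speaker paths) puts the first null at 500Hz with notches every 1kHz after that - right through the midrange, which is why it sounds so hollow. Push the delay out to 12ms or more, as Waves suggest, and the nulls crowd down into a dense low-frequency comb that the ear reads as a discrete echo or ambience rather than tonal colouration.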
Ultimately, I've learnt that there is no hard and fast rule; it's a matter of opinion, most often decided by the style of song or the desired end result. It really is a world of experimentation that is far from hitting its peak, particularly with technology progressing and the VR world on the rise. Finally, it was time to put all this into practice with Alex's song.
I chose to put Alex's vocal in the centre channel for clarity, as he was the star of the song. It felt a little dry to my ears, though, so I decided to add a little reverb into that channel as well. This also brought the guitars into it a touch, which helped, at least to my ears, to glue the "front" together a little more. The guitars I panned hard left and right, mostly at the front, but I found a little divergence at the back was more enveloping in nature. I approached the keys similarly, though slightly more towards the back, which really helped make me feel like there were musicians all around me.
I found that backing off the compressors was possible, creating a seemingly more dynamic sound in surround. Unfortunately, due to COVID I didn't get as much time experimenting in the studio as I'd hoped, with only one session, about a third of which was spent just getting the speakers talking to the desk. It also took me a while to become comfortable setting up the session with routing and bouncing. I would really like to sink my teeth into this much more, taking time to separate elements from the song and incorporate them creatively throughout. For example, I would love to separate the lead and backing vocals, keeping the mix relatively narrow at first and then introducing these accompaniments on the sides to really open up the song for the chorus. Separating the vocal sections that I had originally automated to pan in my stereo mix would also be fun to play with, maybe introducing some effects or delays creatively to make a really unique surround pop song. It would be even more incredible once MIDI drums (or real ones, for that matter) are recorded and added in for the final polish, as there would be so many options for imaging.
Mastering
Possibly my greatest learning recently has been the skill of mastering. For so long I have produced music that just sounds 'flat', and never understood why. To be fair, it's probably because I'm yet to become a great mixer, but I have learnt that mastering was the missing link. And I don't mean that as in "mastering will save my project"; I mean the skills that I learnt for mastering, I am now applying back to my mixing process, stepping the quality up another level. A great take-away for me was not just learning to master, but finally getting some studio time and hearing the crazy difference high-quality analogue gear can make. The most amazing thing about this process for me was that, after two years of wondering if my ears were improving, I have finally recognised the distance I have come. The mastering process taught me how to listen for limiters, which in turn has improved my practice with compressors. Similarly, it tuned my ears to "the musicality" of a signal, making me mix differently now, bringing out the beautiful elements of a song more before it's complete. I can tell the difference between EQ curves on, for example, the Dangerous BAX EQ unit and a shelf EQ in the DAW. My ears are working, and they're getting better, which is really rad to recognise.
Since I wanted Alex's song to sound radio quality on completion, I made the trek to SAE one day to use their delicious equipment. Using the patch bay, I tried various combinations of series and parallel mastering chains, using the Manley Stereo Compressor/Limiter, FCS P3S Stereo Compressor, and a Dangerous BAX EQ. I recorded six different versions, using one unit at a time initially, and then different combinations. Listening back to them (I would love to upload them to demonstrate the difference, but not without Alex's permission), I noticed that the FCS sounded warmer compared to the Manley, which was much brighter - but both were extremely smooth sounding. I would like to get back into the studio and play with the FCS more though, as it has a reputation for being extremely versatile (similar to the EL Distressor). However, without that luxury, and for this purpose, I preferred the Manley.
Before sending it through this hardware, I first applied a sweetening EQ in the DAW, bringing up around the 1K, 5K, and 9K marks, and taking out some of the mud around 300Hz. There was also a resonance (I'm assuming from the room it was recorded in) at around 3.5K that I surgically reduced with a high-Q cut of about 6dB. Then I sent this initial "master" into the hardware racks. I made sure to enable stereo mode on the Manley, so that when the compressor/limiter hits any out-of-phase information it would not affect the whole mix; this is similar to M/S mastering theory. I also ran it through the EQ, boosting the shelf at 18K, which just seemed to sprinkle magic dust onto the song.
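That surgical 3.5K cut is a standard peaking (bell) filter, and the maths behind it is well documented: the RBJ Audio EQ Cookbook biquad. Here's a sketch of those coefficient formulas plus a small magnitude-response helper, just to show that a high Q really does keep the -6dB cut narrow (this is the cookbook maths, not my DAW's actual implementation):

```python
import math

def peaking_eq_coeffs(sample_rate, freq_hz, q, gain_db):
    """RBJ Audio EQ Cookbook peaking-EQ biquad coefficients,
    returned as (b0, b1, b2, a1, a2) normalised so a0 = 1.
    Negative gain_db gives a cut; higher q gives a narrower bell."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * freq_hz / sample_rate
    alpha = math.sin(w0) / (2 * q)
    b0 = 1 + alpha * A
    b1 = -2 * math.cos(w0)
    b2 = 1 - alpha * A
    a0 = 1 + alpha / A
    a1 = -2 * math.cos(w0)
    a2 = 1 - alpha / A
    return b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0

def biquad_gain_at(coeffs, sample_rate, freq_hz):
    """Magnitude response |H(e^{jw})| of the biquad at freq_hz."""
    b0, b1, b2, a1, a2 = coeffs
    w = 2 * math.pi * freq_hz / sample_rate
    cr, ci = math.cos(-w), math.sin(-w)          # e^{-jw}
    c2r, c2i = math.cos(-2 * w), math.sin(-2 * w)  # e^{-2jw}
    num = complex(b0 + b1 * cr + b2 * c2r, b1 * ci + b2 * c2i)
    den = complex(1 + a1 * cr + a2 * c2r, a1 * ci + a2 * c2i)
    return abs(num / den)
```

With `peaking_eq_coeffs(48000, 3500, 8.0, -6.0)` the response at 3.5kHz sits exactly at -6dB (a gain of about 0.5), while a couple of octaves away it is essentially back to unity, which is the whole point of using a high Q for a surgical cut.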
Once I was happy with the result, I played the analogue master next to my master in the DAW, and I couldn't believe the difference. It just sounded like one cohesive song, that all sat together, sparkling, with a polish that I'd never distinguished before. Again, would love to share, but will wait for Alex's permission.
Overall, I was relatively happy with the result, but there's always more to do. The first thing, upon listening back, is fixing the buzz coming from what I can only assume is the Royer 121 mic. Next I would like to spend more time on the sibilance until it sounds completely transparent. As previously mentioned, I would also like to add some real drums, or at least find some better MIDI samples to work with. Once this is done I would like to "produce" the song some more, adding flair and a little more of me into it; this may include some pads or electronic percussive elements. Finally, I would like more time in the studio to master it again, playing around further with the settings on the analogue gear.
In the meantime however, I sent Alex my mastered version, asking for some feedback: "Hey Tim, That sounds great man. Would love to work on this with you after all this covid stuff cools down."
Stoked to have gotten this as a response! Now that I have proved to myself I can create a song of acceptable quality (subjectively), I look forward to keeping in touch with him, and revisiting the song: this time doing it for Alex, with his reference in mind.
Stay Tuned
-TA
REFERENCES
Demtschyna, M. (2010). Sound on DVD. Retrieved from http://www.michaeldvd.com.au/Articles/AudioBasics/AudioBasics.asp
Dolby. (2020). Dolby History: 50 years of innovation. Retrieved from https://www.dolby.com/us/en/about/history.html
Gerzon, M. A. (1973). Periphony: With-height sound reproduction. Journal of the audio engineering society, 21(1), 2-10.
Robjohns, H. (2001). Surround Sound Explained: Part 1. Retrieved from https://www.soundonsound.com/techniques/surround-sound-explained-part-1
Robjohns, H. (2001). Surround Sound Explained: Part 2. Retrieved from https://www.soundonsound.com/techniques/surround-sound-explained-part-2
Robjohns, H. (2001). Surround Sound Explained: Part 3. Retrieved from https://www.soundonsound.com/techniques/surround-sound-explained-part-3
Robjohns, H. (2001). Surround Sound Explained: Part 4. Retrieved from https://www.soundonsound.com/techniques/surround-sound-explained-part-4
Robjohns, H. (2001). Surround Sound Explained: Part 5. Retrieved from https://www.soundonsound.com/techniques/surround-sound-explained-part-5
Robjohns, H. (2002). Surround Sound Explained: Part 6. Retrieved from https://www.soundonsound.com/techniques/surround-sound-explained-part-6
Robjohns, H. (2002). Surround Sound Explained: Part 7. Retrieved from https://www.soundonsound.com/techniques/surround-sound-explained-part-7
Robjohns, H. (2002). Surround Sound Explained: Part 8. Retrieved from https://www.soundonsound.com/techniques/surround-sound-explained-part-8
Robjohns, H. (2002). Surround Sound Explained: Part 9. Retrieved from https://www.soundonsound.com/techniques/surround-sound-explained-part-9
SOS. (2018). Rode launch their first SoundField mic. Retrieved from https://www.soundonsound.com/news/rode-launch-their-first-soundfield-mic
Vassée, A. (2016). How to beat vocal sibilance. Retrieved from https://www.pro-tools-expert.com/studio-one//5-tips-to-help-combat-vocal-sibilance
Waves. (2017). Mixing in surround. Retrieved from https://www.waves.com/mixing-in-surround-do-and-dont