Tim Allison

Mixing live performances

Updated: Apr 21, 2020

I've been volunteering at 4ZZZ, Brisbane's local radio station, mixing and mastering live performances in post for broadcast. Follow my approach as I tackle headline shows from DZ Deathrays and Angie McMahon, including one critical lesson...


DZ Deathrays



Hailing from Brisbane themselves, DZ Deathrays recently finished up their world tour with a show here at The Tivoli. Playing to a sold-out room, the five musicians (don't get me started: there are three in the band, but I mixed five artists; I don't understand how either, research ongoing) gave it one last energetic nudge. After investigating one of the songs from their setlist here, I feel ready to tackle the 80-minute recording.


I was provided with the 24 channels taken pre-fade from the Tivoli's console, and a stereo recording of the crowd from a Zoom H6. I followed my usual procedure: cleaning the session up, routing and phase-aligning, then gain staging, subtractive EQ for spill, and light compression where necessary. This time I really wanted to let the mix breathe and allow the music to speak for itself.
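Gain staging here just means setting each channel's level into a sensible range before any processing. As a rough illustration (a NumPy sketch, not anything from my actual session; the -18 dBFS target is an assumed, common headroom figure), a channel can be scaled so its peak sits at a chosen level:

```python
import numpy as np

def gain_stage(channel: np.ndarray, target_dbfs: float = -18.0) -> np.ndarray:
    """Scale a channel so its peak sits at target_dbfs.
    -18 dBFS is an assumed headroom target, not the session's."""
    peak = np.max(np.abs(channel))
    if peak == 0:
        return channel  # silent channel, nothing to do
    return channel * (10 ** (target_dbfs / 20) / peak)

# A 1 kHz test tone recorded hot at around -6 dBFS, restaged to -18 dBFS:
sr = 48000
tone = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(sr) / sr)
staged = gain_stage(tone, target_dbfs=-18.0)
```

In practice this is done by ear and by meter per channel; the point is only that everything enters the plugins at a consistent, conservative level.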


One query I had, though, was whether I should tune the lead vocals. There are a number of reasons a vocal might not sit where the artist would like, but taking it upon myself to decide seemed inappropriate, whilst ignoring it seemed unprofessional. I put the question to an online audio engineering group, and the responses were polarised, both strongly for and strongly against:

  • "Nobody likes to hear something wrong when they're driving on the freeway. That's why these tools exist, fix it."

  • "Disagree, people expect studio recordings to sound perfect. Leave the humanity in it."


In the end the consensus leaned towards using it, but only as a very light touch-up. The complication came when I tried correcting the lead vox: due to an excessive amount of spill from the cymbals, wherever the voice was corrected the spill would waver with it, creating an awful effect, for this purpose at least. I decided to leave it off.


There was still a lot of vocal editing required after my first cleanup: the performance is sporadic in nature, fluctuating in level and clarity, with lots of sibilance and plosives. Every moment there seems to be something else to correct; such is the nature of live rock performances though. Another round of gain staging helped tame a lot of them and bring the vocal to a more consistent level. I could definitely spend more time on this process, but as I am on a time limit I must split my time accordingly and balance the trade-off between perfection and completeness, a skill I'm finally coming to terms with.
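That second round of gain staging is essentially clip-gain riding: nudging each stretch of the vocal toward a consistent level. A crude automated version looks like this (a NumPy sketch; the window size, target level, and gain cap are my own illustrative picks, and real clip-gain edits are done by eye on the waveform, not in fixed windows):

```python
import numpy as np

def ride_clip_gain(vocal: np.ndarray, sr: int, win_s: float = 0.4,
                   target_dbfs: float = -20.0, max_db: float = 6.0) -> np.ndarray:
    """Nudge each window's RMS toward a target level, capped at
    +/- max_db so quiet spill isn't dragged up too far."""
    out = vocal.copy()
    win = int(sr * win_s)
    target = 10 ** (target_dbfs / 20)
    for start in range(0, len(out), win):
        seg = out[start:start + win]
        rms = np.sqrt(np.mean(seg ** 2))
        if rms < 1e-9:  # skip silence
            continue
        gain_db = np.clip(20 * np.log10(target / rms), -max_db, max_db)
        out[start:start + win] = seg * 10 ** (gain_db / 20)
    return out

# A vocal that jumps from loud to quiet gets its two halves pulled closer:
sr = 48000
t = np.arange(sr) / sr
vocal = np.concatenate([0.8 * np.sin(2 * np.pi * 220 * t),
                        0.05 * np.sin(2 * np.pi * 220 * t)])
ridden = ride_clip_gain(vocal, sr)
```

The gain cap matters: without it, quiet passages (which are mostly spill) would be hoisted up to the target along with everything else.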


I decided I would rather spend my time removing this cymbal spill, so that the vocal could cut through the mix and stand out from its spill in the other mics, particularly the Zoom track. I really wanted to improve his intelligibility, but the frequencies that carry it were the problem frequencies in the first place. After much reading, asking others for feedback, and finally experimenting, I found a balanced solution. I needed two EQs just to have enough bands in the high-frequency region; these surgically subtracted a lot of the problems, especially with a de-esser after them. A compressor was obviously required, but I then ran it back through an EQ to put back some of the higher frequencies containing the clarity of his voice. Besides this channel, on nearly all the others I managed to get away with far fewer plugins than I had been using previously, and I really appreciate the result.
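For anyone curious what the de-esser in that chain is actually doing: one common topology ducks the signal whenever a sibilant-band sidechain rises above a threshold. A sketch in Python/SciPy (the band, threshold, ratio, and release time are my own assumed values; real plugins are far more refined than this):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def de_ess(vocal: np.ndarray, sr: int, lo: float = 5000.0, hi: float = 9000.0,
           thresh_dbfs: float = -30.0, ratio: float = 4.0) -> np.ndarray:
    """Wideband de-esser: a 5-9 kHz sidechain detects sibilance, and the
    whole signal is ducked while the detector sits above the threshold."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
    side = sosfilt(sos, vocal)                    # detector path only
    alpha = float(np.exp(-1.0 / (0.005 * sr)))    # ~5 ms release, assumed
    env = np.empty_like(side)
    level = 0.0
    for i, x in enumerate(np.abs(side)):          # peak envelope follower
        level = max(x, alpha * level)
        env[i] = level
    thresh = 10 ** (thresh_dbfs / 20)
    over = np.maximum(env / thresh, 1.0)          # how far above threshold
    gain = over ** (1.0 / ratio - 1.0)            # 4:1 downward compression
    return vocal * gain
```

A harsh "ess" in the 5-9 kHz band gets pulled down hard, while content below the sidechain passes through essentially untouched.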


Mixing the guitars was another point of interest. Three guitars seemed difficult to fit on the virtual stage alongside the vocals, backing vox, and a vocal effects pad, while keeping them all separated to some degree. Although their style is the 'wall of sound', I wanted the live performance to do justice to their previous records. After, once again, lots of research and experimentation, I placed the lead guitar to the right of the vocalist, just slightly wider than where the irregular vocal effects pad lives, and panned the backing guitar slightly left, sitting around the toms in a balanced position. Finally, and I think the thing that really tied it all together, was duplicating the third guitar with a slight phase offset, biased towards the left.

With lots of spill from the live setting and the Zoom always drawing the imaging back towards the centre, the duplicate didn't wrap around too much like a stereo studio recording, which was my original worry; that would have drawn attention away from "the stage" and ruined the live aesthetic. The guitars did, however, end up with a lot of phasing issues, requiring plenty of playing around, EQing, and phase shifting. Eventually I solved the problem by de-correlating the two duplicated guitars further, playing with their timbre and colouring them slightly differently to provide better imaging and separation.
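The doubling trick above can be sketched with a short delay on one copy plus a pan bias, the so-called Haas effect. The delay time and pan amount below are illustrative picks, not the session's values (NumPy, mono in, stereo out):

```python
import numpy as np

def haas_double(guitar: np.ndarray, sr: int, delay_ms: float = 12.0,
                pan: float = -0.3) -> np.ndarray:
    """Dry copy on one side, a copy delayed by a few milliseconds on the
    other, with a constant-power pan biasing the pair left (pan < 0).
    delay_ms and pan are assumed, illustrative values."""
    d = int(sr * delay_ms / 1000)
    delayed = np.concatenate([np.zeros(d), guitar])[:len(guitar)]
    theta = (pan + 1.0) * np.pi / 4.0   # -1 = hard left, +1 = hard right
    left = guitar * np.cos(theta)
    right = delayed * np.sin(theta)
    return np.stack([left, right], axis=1)

sr = 48000
g = np.random.default_rng(0).standard_normal(sr)  # stand-in guitar track
out = haas_double(g, sr)
```

The phasing issues I hit are visible in exactly this construction: sum the two channels to mono and the delayed copy comb-filters against the dry one, which is why de-correlating the copies (different EQ, different colour) helped so much.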


Additionally, after working on studio songs for the past few months, I had to keep reminding myself to take off that cap and put my live hat back on. I had difficulty sinking the guitars into the stage, always wanting to draw out the lead in particular. Instead I spent a lot of time going through the performance and riding automation levels: vocals, guitars, the audience, sends. This brought out the music a little more and let me balance the parts better, as they now stand out where they need to. Once again, I would love to spend more time perfecting this, but that balance issue returns.


Finally, I used a sweetening EQ and ran the mix into my new mastering friend, the L2007. With the wall set to -0.3dB, by this stage I only had to push lightly into the limiter to see a reasonably uniform result. It's crazy the difference working with professionals makes. That, or my capabilities have progressed quite a lot, which is cool to hear (that and obviously some mastering mojo).
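For intuition, the "wall" at -0.3dB just means nothing is allowed past that ceiling. A deliberately crude stand-in (NumPy; a real L2007 looks ahead and smooths its gain reduction rather than hard-clipping, and the 3 dB push here is an assumed figure):

```python
import numpy as np

def brickwall(master: np.ndarray, ceiling_db: float = -0.3,
              push_db: float = 3.0) -> np.ndarray:
    """Push the mix into a hard ceiling at ceiling_db. A real limiter
    smooths the gain; this simply clips whatever crosses the wall."""
    ceiling = 10 ** (ceiling_db / 20)
    pushed = master * 10 ** (push_db / 20)
    return np.clip(pushed, -ceiling, ceiling)

sr = 48000
mix = 0.9 * np.sin(2 * np.pi * 100 * np.arange(sr) / sr)  # stand-in mix bus
out = brickwall(mix, push_db=3.0)
```

Lightly pushing into the limiter, as I did, keeps the gain reduction small and the result transparent; shove harder and the clipping (or in a real limiter, the pumping) becomes audible.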


On that note, I was overall quite happy with the mix; it's just a matter of going through and tidying up the nitty-gritty bits further until I'm satisfied, just not so much as to diminish that live atmosphere. Balance keeps popping into my decisions lately, rather than the dualist reality I used to subscribe to. I think the mix achieved the overall outcome of being immersive from the perspective of an audience member, with decent imaging and nice separation between instruments. I'm most content with how few plugins I used, and how much more open it sounds as a result, whilst retaining the most important thing: being blasted by a wall of sound. Thanks DZ - that was a treat!



Angie McMahon


Angie McMahon is an artist I mixed a few months ago, but I have only now received feedback from the artist's management team. Their comment: "The sound has been slightly slowed down which makes everything sound deep and slow." I suddenly recognised the link to an issue I had been encountering for weeks without understanding it. I reached out to all corners of the internet and to peers, but eventually landed on a solution from Avid.


Here's the issue by way of a Case Description from communication with Avid:

I have been using Pro Tools on my MacBook Pro 13" 2.7GHz i5 with 8GB DDR3 RAM, running High Sierra. I occasionally run it through an interface (Focusrite Scarlett 2i2), sometimes change the playback engine to a Bluetooth speaker (Bose SoundLink Mini), but mostly just use the headphone out from my laptop.

I have had a strange thing happen on a few separate occasions now (different sessions, different days). My sessions have definitely been created at 48 kHz, and yet at some point (usually when I change the playback engine, from memory) the rate has somehow changed to 44.1 kHz. From my understanding this is impossible to change once a session is created. I have looked online and found only a couple of other cases that sound similar, with no solution. So far I have had to keep sacrificing some of my work and going back to a previous session from the session backups folder (proving that it was once at 48 kHz).

What is happening? How can I resolve this issue? Please halp. 

They responded with "Please note that Blue Tooth Speakers are not supported for use with Pro Tools", something I can't believe I hadn't considered. They went on to explain "What seems to be happening is that one of the devices is dictating the sample rate on the Mac's Core Audio ultimately setting it inside Pro Tools."
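The management comment lines up with the arithmetic: 48 kHz audio rendered as if it were 44.1 kHz plays 48/44.1 times slower and correspondingly flatter. A quick back-of-the-envelope check in plain Python (nothing Pro Tools specific):

```python
import math

session_sr, playback_sr = 48000, 44100

slowdown = session_sr / playback_sr                     # ~8.8% longer
pitch_drop = 12 * math.log2(session_sr / playback_sr)   # semitones flat

print(f"{(slowdown - 1) * 100:.1f}% slower, {pitch_drop:.2f} semitones flat")
# prints: 8.8% slower, 1.47 semitones flat
```

Nearly one and a half semitones down explains "deep", and an 8.8% stretch explains "slow"; subtle enough to miss in isolation, obvious against the original.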


They also recommended a few other options, like following this link to rule out corrupted hardware preferences, and checking for the latest Focusrite drivers. But it ended up being as simple as not using the Bluetooth speaker. What an idiot.


Consequently, I have to revisit the mix... this time at the correct sample rate. I'm taking the opportunity to re-mix it completely, and I'm looking forward to comparing the two to see how differently I approach things.



Stay Tuned

- TA
