
Working with surround and Spitfire libraries

europa_io

Hi -

I've been working with a 5.1 (well, 5.0 really) approach with Spitfire libraries for some time now, as I prefer the sound I can get, and with the thinking that I can always downmix to 2-channel but can't do the reverse satisfactorily (other than fudging it with an upmix plugin).

But it is a workflow overhead and just makes things a bit over-complicated when I don't have minions like the big kids.

Recently I've been getting asked to deliver stereo stems or mixes to the dub for TV work, and they do a 5.1 upmix with Halo Upmix or similar if their delivery format requires it. So I kind of wonder - why bother with the extra effort for small sonic benefit?

Options:
Stick with 5.1 right up to the last step, as it has the most flexibility and the most satisfying sound while working (with gallery mics panned to the rear, for instance), albeit with more pain and less access to really nice stereo-only reverbs etc.

Or, work with a satisfactory 2-channel mix of instruments end-to-end, with gallery mics blended into the front from the outset, and stay 2-channel throughout.

I'm trying to convince myself to do the latter...

What do you think?

Interested in your approach @christianhenson - I don't think it's something you've covered in one of the Spitfire videos?

Thanks!
 
Hi

Yeah, I've tried both approaches and found that routing everything in 5.0 just created more work for not-so-much return. Especially when mixing with other libraries and stereo/mono instruments like synths, it becomes problematic. I don't have an all-Spitfire template, so this probably contributed to my frustration. I also found that the gallery mics in the surrounds easily get lost once on the dub stage, especially once atmos/FX are in and the music is dropped 3 dB, and I think they sound better in the front if the extra reverb is needed.

My current approach is to compose and produce everything in stereo and, once the cues have been approved, mix everything in 5.1 as required. When I get to this stage I bus everything out to the main stems for surround mixing, which then filter down into the final delivery stems. I use either surround verbs like Phoenix/R2, or two stereo reverbs - one front and one rear. It offers more consistency when trying to blend different libraries and instruments.

I use Halo mainly for synth/sample-based and ambient sound-design sounds. I found that using it on orchestral samples that are mainly geared for a stereo mix meant they became too thinned out and spread throughout the soundfield, as it essentially derives and spreads everything out through all the channels from the stereo source. I find it does this with most sounds, and I plan to experiment with Trevor Morris's technique of only adding the centre and surrounds to the existing stereo mix (essentially adding the extra information to the untouched stereo sound). I need to test this though, as I wonder if there could be phase issues when folding back to stereo.

One thing Halo is great for, though, is deriving an LCR from the stereo instruments that folds back to stereo perfectly, as long as the correct coefficients are used in exact mode (-3 dB centre in my case).
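In case it helps to see the arithmetic: the trick with exact mode is that whatever goes to the centre gets subtracted from left/right up front, so the standard fold-down just adds it straight back. Here's a rough numpy sketch of the idea - the function and coefficient handling are my own guess at it, not Halo's actual internals:

Code:
import numpy as np

C_COEF = 10 ** (-3 / 20)  # -3 dB centre fold-down coefficient

def lcr_exact(left, right, amount=0.5, c=C_COEF):
    # Derive a centre from the mono sum, then remove from L/R exactly
    # what the fold-down (Lo = L' + c*C, Ro = R' + c*C) will re-add.
    centre = amount * 0.5 * (left + right)
    return left - c * centre, centre, right - c * centre

# Null test: the -3 dB fold-down reconstructs the original stereo exactly
rng = np.random.default_rng(0)
L, R = rng.standard_normal(48000), rng.standard_normal(48000)
Lp, C, Rp = lcr_exact(L, R, amount=0.7)
assert np.allclose(Lp + C_COEF * C, L)
assert np.allclose(Rp + C_COEF * C, R)

The nice part is that the null holds no matter what the centre amount is set to.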

I guess after testing different surround approaches on the last few projects, I've come to realise that the surrounds are immersive ear candy and are probably best treated that way. The inconsistencies of theatres and playback systems mean that there's no guarantee the audience will even hear it as intended. Also, things can end up being changed during the sound mix, so if you have important information in the surrounds or even the centre (like the gallery mics, for example) it could easily be lost with the pulling down of a fader. So now I try to make sure the really important stuff is in the front left/right/centre (where applicable) and then expand out from there when needed.
 
Thanks Sekkleman for taking so much time to respond. Much appreciated.

Yes - you're right, it's probably just immersive ear candy. That's exactly my addiction! :) My brain is telling me there is little discernible benefit when this is folded down (though I can tell the difference), but my ears/"heart" are telling me it feels so much nicer.

I have yet to find an approach, or develop the balancing skills, to make a stereo downmix of a mix that was surround until the very last step sound as satisfying as a stereo-all-the-way mix.

There might be a little emperor's-new-clothes effect going on inside me as I A/B the options, but more than a little bit of me is saying to myself "trust my ears, not just my brain".

Thanks Sekkleman. It would be great to hear back about your experiments with Trevor Morris's technique at some point.

Cheers.
 
routing everything in 5.0 just created more work for not-so-much return


Another way to do this is a "poor man's surround," also known as "not really surround but...." This may sound dumb but hear me out because it gives the dub stage all the control they may want and may still be (somewhat) satisfying to you.

Compose however you like -- if you like listening in surround, great! -- but PRINT into two discrete stereo pairs:

1. Pair 1 is the "main" stereo track,

2. Pair 2 is your (the composer's) proposed surround set of tracks (left and right rear speakers).

The problems with delivering a 5.1 mix are, as you may agree, numerous. The stage may put your mix through a black box you don't own, or with settings you can't anticipate or feel are wrong -- and even if you're there, your chance to intervene is politically delicate and may be unsuccessful anyway.

Even if they are using a 5.1 algorithm or a box you do have today, they might decide without telling you to route the music for next week's episode through the new, exciting "BlamiSurroundO-Maximiser."

Other problems with trying to supply 5.1 music abound -- not infrequently, they don't want anything musical in the sub channel as they reserve that for "booms" and other Real Loud SFX; they will somehow manage to get your 5.1 stems mixed up or out of phase or simply canceling each other in some unexpected way (this happened to me on a feature film -- the director called and asked what the h___ I was doing because the music had been strangled, epically).

With the "two pair" approach, you know that there won't be phasing problems even if something is off -- latency to the rear send for example that doesn't quite match the left and right front sends.

When using the two pair approach, sometimes there might be little or nothing in the surround stem. I usually reserve it for quasi-effects, or maybe shimmery tremolo high strings, a faint choir sound or doubling of synths with choir, or something else musical that, while nifty, is not going to wreck the stereo mix if it's poorly adjusted, hard to hear, or even inaudible.

Wow -- too long a post but anyway that's my take.

If you are working with a major studio, these considerations may not all apply.

Kind regards,

John
 
The inconsistencies of theatres and playback systems mean that there's no guarantee the audience will even hear it as intended. Also, things can end up being changed during the sound mix, so if you have important information in the surrounds or even the centre (like the gallery mics, for example) it could easily be lost with the pulling down of a fader. So now I try to make sure the really important stuff is in the front left/right/centre (where applicable) and then expand out from there when needed.

Excellent advice in an excellent post.

The inconsistencies of theatres and playback systems mean that there's no guarantee the audience will even hear it as intended.

I've experienced this one; the rear speakers in the local Ultraplex can be plus or minus 6dB (or more) from spec, and even if they are within spec range, they may not sound anything like what we anticipated.
 
Thanks Sekkleman for taking so much time to respond. Much appreciated.
Cheers.

No worries, I hope it helped a bit!

I've experienced this one; the rear speakers in the local Ultraplex can be plus or minus 6dB (or more) from spec, and even if they are within spec range, they may not sound anything like what we anticipated.

Yeah, absolutely. Even the difference between hearing my 5.1 mixes on a dub stage or in a theatre, compared to near-field monitoring in my studio, was quite an eye (ear) opener, as it sounds quite different in a large room with multiple surround speakers.

they will somehow manage to get your 5.1 stems mixed up or out of phase or simply canceling each other in some unexpected way (this happened to me on a feature film -- the director called and asked what the h___ I was doing because the music had been strangled, epically).

This is also a concern for me. There's always room for human error when importing stems, depending on the system and workflow, and it's something I've learnt to always check. As you say, channels being slipped out of phase, either in the sound mix or via latency in a badly calibrated theatre, can also be an issue. I once read an interview with Alan Meyerson where he mentioned he never pans/positions sounds between the front and rear channels for this exact reason. Anything going to the rear is a discrete audio stream, so even if it's slipped out of synch, it will still work. This is something that's made me really wonder about using Halo for upmixing anything into the surrounds. If the channels ended up out of synch, it could cause weird phasing issues, especially if it's then folded back to stereo for deliverables. I still use it for ambient beds; however, I'm using surround reverb more and more.
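To put a toy number on the phasing worry - if the rear content is just a slipped copy of the front, the stereo fold-down comb-filters badly, whereas truly discrete rear material has no fixed phase relationship to cancel against. A quick sketch (all values invented for the demo):

Code:
import numpy as np

fs = 48000
t = np.arange(fs) / fs
front = np.sin(2 * np.pi * 1000 * t)   # 1 kHz test tone in the front pair

# Rear is the SAME signal slipped by 24 samples (0.5 ms) - half a cycle
# at 1 kHz - so folding it back in at -3 dB mostly cancels the front:
rear = np.roll(front, 24)
folded = front + 10 ** (-3 / 20) * rear
print(np.abs(folded[100:]).max())      # ~0.29 instead of ~1.71 in phase

# A discrete rear (different performance/reverb) has no fixed phase
# relationship to the front, so the fold-down sums rather than cancels.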

After going into surround mixing pretty gung-ho when I first started, I've ended up being pretty conservative, so as to minimise the chance of running into all of these issues. It's not a bad thing though; it just means there are more limitations, which forces us to be creative with how we do it! It's a constant learning experience, that's for sure.
 
I have a bunch of posts on this topic on this forum - and I'm by no means an expert, or even doing it "right" - but I agree with what JohnG said, and I basically do "quad" instead of true surround. This is mainly because Logic still only has a single set of "surround" outputs, so doing surround AND stems involves the same kind of work-arounds that you'd need to use when attempting surround on a big stereo console like an SSL 4k - basically using tons of stereo pairs.

So what I wind up with is front pair / back pair and the center and LFE usually empty. (not always, but...)

While composing I basically leave the rear speakers off, and only during mixing do I get them fired up. My rear pair usually contains not the actual rear mics from fancy libraries, but a different / longer reverb that's fed from sends off the front pairs of various instruments. I also use a rear ping-pong delay with different settings than the front ones. This lets me have some sounds with a longer hang time in the rear speakers, which can sound very huge.

In some crazy sound-design-y cues I also "quad track" instruments so there's four different performances spread across the four channels - much like you'd double-track guitars and then hard-pan them left and right in a stereo mix. This can sound amazing.

But I'm always cognizant that the re-recording mixers may lower my rear pair or discard it entirely at any point, and maybe even toss it and build their own version with an upmix plugin - so I make sure the mix sounds the way I like it when I'm only listening to the front pair. If they do use my rear pair, it's a happy bonus.

For television I only do stereo, mostly due to the time constraints - and the workflow is so much easier!
 
Hey Charlie, I've read all of your posts about this over the years and it's been great getting insights into how you deal with surround! It's definitely had an influence on me. I think the approach that you and JohnG take is a really smart move. I'll probably try it on my next project.

One thing I've never been able to find a definitive answer on is the whole centre channel debate. All the sound mixers I've delivered to have always insisted on centre channel information, as they worry about the proximity effect in a large theatre. That being said, I also have colleagues who have worked with other mixers who don't want anything in the centre. I guess it's really down to the individual mixer at the end of the day.

After trying heaps of ways to create a centre channel via discrete methods, it always ended up feeling less 'stereo' and immersive when folded back for the stereo mix deliverables. Since most people are going to hear the mix in stereo, this was always a concern. Finally I figured out that Halo offers a 3.0 upmix from stereo sources in exact mode that folds back perfectly to the original stereo sound. The cool thing is it has a slider that allows you to choose the percentage of the sound that gets sent to the centre. The other interesting thing is that the frequencies it derives from the stereo and puts in the centre are kind of rolled off and don't get in the way of dialogue. It can also be automated around dialogue, although I've never actually used this feature.

After a bit of trial and error I found a sweet spot where the mixer would see and hear the centre info and be happy. It meant I could still write everything in stereo and treat the centre channel as part of the whole surround mix at the end. I usually only put the drum buss and bass buss stems through this 3.0 process, as they tend to be the main elements that really make any difference due to their low frequencies. The only other things that go in the centre are featured lead vocals or solo instruments, which are always discretely panned.
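(For what it's worth, my understanding of why that exact-mode null works - my own reading of it, not from any official docs: if the centre is C = s * (L + R) / 2 for slider setting s, and the delivered fronts are L' = L - c*C and R' = R - c*C with c the -3 dB coefficient, then the fold-down gives L' + c*C = L and R' + c*C = R, whatever s is.)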

It seems that we don't really have any control over what happens to the music mix once it's delivered in stems. All we can do is deliver stems that have the least chance of losing the important musical information, which is predominantly in the left and right channels.
 
One thing I've never been able to find a definitive answer on is the whole centre channel debate. All the sound mixers I've delivered to have always insisted on centre channel information, as they worry about the proximity effect in a large theatre. That being said, I also have colleagues who have worked with other mixers who don't want anything in the centre. I guess it's really down to the individual mixer at the end of the day.

Yes. Talk to the dubbing mixer early on if you can. Otherwise I leave the centre empty and let them winkle something in there if they like.

It seems that we don't really have any control over what happens to the music mix once it's delivered in stems

Sadly, very true.

I once attended a dub of a project that was not my own -- the composer himself was not there, and I don't even remember his name now -- but as we were listening the music sounded wrong, so I asked the producer (politely) whether all the tracks were turned on, and he asked the dubbing mixer. The dubbing mixer said, "no." Of the approximately eight tracks, he had muted all but two. Although they did turn some of the tracks back on after that, they didn't turn them all on, and they were not at unity.

Yikes.
 
Yes, always talk to the re-recording mixers before deciding on your channel layout if at all possible. Most of the movies I've done are not exactly subtle, delicate, immersive soundscapes - there's always a ton of sound design, and it's usually guns, torture machines, and people screaming as their arm is getting ripped out of the socket! For that reason, when I ask if it's okay that I leave the center channel empty of music, most of the mixers have breathed a sigh of relief and said either, "I won't tell anyone if you don't" or "Oh thank you thank you thank you!"

They've always told me that if I don't have any "legitimate" center info, such as the center mics from a multi-channel orchestral recording, and if I'm just creating center information by folding things from the front stereo pair into the center, that it's fine if I just let them do it via Halo or some similar process. They can better judge when and how much of the L+R to fold into the center to eliminate a hole in the audio when it's played in a big theater.

If I'm creating center or LFE info by picking and choosing elements to play up the center or send to the subs - bass, solo cello, or whatever - again, the mixers can do this better if they're given enough stems that have the desired elements isolated. On earlier films, when I only printed three (!) 5.1 stems, I would send some elements within a stem to that stem's center or LFE channels - and this worked, but it was more difficult for the mixers to "untangle" than if I had just given them more stems with fewer channels per stem. In a 48-channel delivery, some mixers might prefer twenty-four stereo stems, others eight 5.1 stems, and still others twelve quad stems.

So it really depends on the material and the personnel - and it definitely pays to have detailed conversations and even send rough prints of early cues if the mixers have time to check them out. I try to do that if I can - send the mixers a cue or two a couple of weeks before the dub and let them throw them up in the room. Of course, the big boys might have a lot of back and forth with the dub stage as the score comes together; on most of the projects at my schedule and budget level I'm lucky if I can make this happen, but it always makes me feel a little more comfortable when the mixers have checked out a cue or two before I print the whole score.
 
Great idea. Thanks Charlie.

The quad-ping-pong thing can work really well on a wide variety of source material, and can even produce a "spinning" effect if the delay times are lined up right. I don't use an actual surround or quad version of an effect plugin, just two normal stereo delays - but it can sound wicked!
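If it helps, here's the kind of tap timing I mean - two plain stereo ping-pongs with the rear pair offset so its repeats land between the front ones. All the times below are invented, just to show the interleave:

Code:
# Two ordinary stereo ping-pong delays; the rear pair is offset by half
# the repeat interval so its echoes land between the front ones.
front_interval = 0.50   # seconds between front L/R alternations
rear_interval = 0.50
rear_offset = 0.25      # rear pair fires halfway between the front taps

taps = []
for n in range(6):
    taps.append((n * front_interval, "FL" if n % 2 == 0 else "FR"))
    taps.append((rear_offset + n * rear_interval, "RR" if n % 2 == 0 else "RL"))

for time, channel in sorted(taps):
    print(f"{time:4.2f} s  {channel}")
# -> FL, RR, FR, RL, FL ... the repeats hop around the quad field, and
#    with the right times it reads as that spinning effect.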
 
@charlieclouser and @JohnG and gang,

Not wanting to derail too far, but one thing I found interesting was to use the surround panner inside Kontakt. I found that I was able to make things ‘surround’ that were not really intended to be, sometimes to good effect. Playing with some of the panner parameters gets interesting. Tried it with ‘Una Corda’ for example and it was fun - not quite like a room-verb surround, not quite front stereo/back stereo, but something else instead - and useful for some things. Maybe it would work on orchestral stuff that was stereo only? Haven't tried that yet.

Also, I find reaching for those true surround synths, like Absynth and Structure, interesting as well. You have to watch the Absynth outputs - they don't match Logic's, so you need to swap them to match. Absynth does some fun things like circular swirling etc.

I'm set up to work in quad and use the internal Logic busses, but I simply turn off the center channel speaker icon. When you use a multi-mono plugin on a surround track, you can set the A, B and C parameters to control different channels - A for fronts and B for rears, for example - all in the one plugin. So you can set up, say, Crystallizer from Soundtoys to have slightly different settings for front and rear, all in the one plugin, and get nice quad effects without having to set up two aux channels etc.
 
Since Logic's "surround" capability is only useful when outputting a single composite surround mix from Logic, you can't use the nifty little surround panners if you need to output multiple stems at once to multiple arrays of hardware outputs. This is a really serious problem for me, since I need to send multiple surround stems to an array of hardware outputs to get the audio over to the ProTools print rig in real time.

A year ago I had Clemens and Jan-Hikkert from the Logic team over at my place to show them why this was such a big problem, and why my output sub-master matrix looks the same today as it did 15 years ago. While it took a few minutes to demonstrate and describe how I route multiple surround stems over to the separate ProTools print rig, they understood immediately and completely - and then they spoke to each other in German for about 30 seconds and proposed a way to solve the issue by adding a new feature to Logic, with incredibly simple and minor changes to the user interface.

Basically, Clemens said, "What if the Project Settings > Audio > I/O Settings dialog had a tabbed interface, with the ability to define the hardware outputs for Surround Busses A through Z? Then, in the pop-up when setting the output for any Audio Object, where you now have a single choice for Surround, you would have Surround A through Surround Z, whose actual hardware outputs are defined in the Preferences. Would 26 Surround Outputs be enough?"

My jaw dropped and I had to resist the urge to hug them both.

At first I imagined we'd have to deal with a crazy grid interface like the i/o settings pages in ProTools, which is very 1990's and makes my head spin. But, as usual, Clemens had a simple, flexible solution that is very "Apple-like" in its simplicity.

I have no idea if or when this might get added to Logic, but it's clear that the team understands the issue and has already figured out a graceful way to implement it with minimal changes to the user experience. Those guys are just top shelf.

Clemens also told me that, other than this issue with the output configurations, all audio pathways inside Logic are capable of n-channel widths with no restrictions - so it's not like they need to rewrite the audio engine or whatever.

My fingers are starting to hurt from keeping them crossed since that meeting.
 
That would be awesome indeed! Wonder how they will deal with parent/child bussing scenarios, like when you want to send something to output surround ‘D’ but only to certain channels... like subsets: quad, LCR, or rears only... Pro Tools does this elegantly, imho.
 
That would be awesome indeed! Wonder how they will deal with parent/child bussing scenarios, like when you want to send something to output surround ‘D’ but only to certain channels... like subsets: quad, LCR, or rears only... Pro Tools does this elegantly, imho.

Yeah, that's one aspect of PT that is just totally handled. It's ugly, but it works and there's no scenario it can't deal with.

I would imagine that in Logic one way to do it would be to have the surround busses A-Z appear as input choices for Aux Objects, and if that Aux has fewer output channels than the input source... well, I don't know what happens then - things get complex. I could do without the whole parent/child thing in Logic; since I'm printing to PT anyway, I can just do it over there.
 
I can't thank everyone enough for this post. So much good advice. I have been a composer for over (ugh ? ) years, but relatively new to soundtrack and 5.1 mixing. Quick question: how do you handle stems from Logic Pro X to Pro Tools, since timecode is not on Logic WAV files? I know it's been discussed on other threads, but I was unable to locate them. Thx for any help.
 
Well, I use two systems, side by side. Logic spits out 48 channels of audio via MADI and Pro Tools (on a separate computer) records that via the Avid MADI interface. Timecode comes out of the SyncHD peripheral on the Pro Tools machine and goes into the Unitor-8 on the Logic machine. Easy peasy.

But if you're talking about doing bounces within Logic and then importing those files into Pro Tools, you can do a couple of things:

- some folks like to embed the timecode start point into the file name of each and every audio file. This makes for a bit of a mess when viewing files in the Finder, but it is foolproof.

- what I do, even with the files recorded by PT (which do have time stamps embedded), is to put all of the files for a given cue into a folder, and then embed the timecode start point into the FOLDER name, but not into all of the file names. So each cue will have a folder named something like "SAW4-2m14v2=02.08.22.14", and inside that folder is a bunch of files named things like "SAW4-2m14v2-LEG TRAP-DRMstem.L" etc., where the production name is "SAW4", it's version 2 of the cue "2m14", the cue title is "Leg Trap", and that file is the left channel of the drum stem. When I do it this way, I can right-click the folder to make it a zip file and then send that folder to my music editor. When he unzips it, he gets a folder with the time stamp in the folder name, AND he still has the original zip, which has that info in its name as well, in case he moves the files out of the original folder and deletes the empty folder. I'm also frequently sending updated versions of cues etc., and this way, even when I send just one cue as a folder full of WAV files instead of as a whole Pro Tools session, the info is always there.
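If you wanted to script the bookkeeping, it's trivial - something like this (the pattern matches my folder names above; the helper names and regex are just for illustration):

Code:
import re
import shutil
from pathlib import Path

# Matches e.g. "SAW4-2m14v2=02.08.22.14" -> show, cue, timecode start
FOLDER_RE = re.compile(
    r"^(?P<show>[^-]+)-(?P<cue>[^=]+)=(?P<tc>\d{2}\.\d{2}\.\d{2}\.\d{2})$")

def make_cue_folder(base, show, cue, tc_start):
    # e.g. make_cue_folder("~/Delivery", "SAW4", "2m14v2", "02.08.22.14")
    folder = Path(base).expanduser() / f"{show}-{cue}={tc_start}"
    folder.mkdir(parents=True, exist_ok=True)
    return folder

def zip_cue_folder(folder):
    # The zip keeps the timecode in ITS name too, even if the files are
    # later moved out of the unzipped folder.
    return shutil.make_archive(str(folder), "zip",
                               root_dir=folder.parent, base_dir=folder.name)

def timecode_of(folder):
    m = FOLDER_RE.match(Path(folder).name)
    return m.group("tc") if m else None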
 
You guessed correctly that I was referring to bouncing for the Pro Tools editor. I like your folder approach of putting the timecode there rather than in every file name. Again, I greatly appreciate your willingness to share.
 