Negative Track Delay Database / Spreadsheet

I recall reading a post on VIC once upon a time wherein someone stated that VSL libraries do not require negative delay -- am I misremembering or is that correct?
Any library is going to have delay, other than a few percussion or piano libraries. Especially legato articulations.
 
However, any software maker can potentially compress or chop off the leading attack of samples in order to eliminate this delay. Kirk Hunter explained to me once that he specifically did not do that, because he wanted the full natural attack transient of a bow or finger hitting the string to be preserved... leaving it up to us to make sure we strike the start of that event ahead of the grid appropriately. Many other libraries are the same way.

I do not know why the idea persists that VSL libraries are somehow above such things. VSL has not made any official statement or published any numbers as far as I am aware, and I am extremely doubtful that they would have compressed or chopped off the leading attack of their samples to avoid it either. That is what is usually done for something like, say, a PCM-based keyboard with string sounds and no tolerance whatsoever for latency.

I have personally not had a chance yet to really try to go through my VSL libraries and test these things or share some specific numbers in this regard.

I do think some libraries take this issue to a much bigger extreme. CSS, for example, has ridiculous amounts of latency in some cases that is not attributable to the natural instrument attack transient. It's quite possible that VSL has minimized it to such an extent that people just aren't noticing it. The VSL Synchron and ViPro players also introduce actual humanization to pretty much every patch, which adds further delay in an inconsistent manner...which is what we want...but also makes it hard to pinpoint an exact amount of latency to correct for. I do think this topic is a worthwhile discussion with regard to VSL libraries at some point; I just haven't had time to dig into it.
 
CSS was hard to wrap my head around until I figured out the workflow. For me, I almost always use the “fast” legato speed of 100ms. All you need to do is set the velocity of the notes above 65, which you can do with a keyboard shortcut, and set the track delay to -100.

If I want to write slower lines I use the 250ms delay setting and set the velocity of notes to below 64 using a key command. Those go on a separate track with a delay setting of -250.

I never use the “Advanced” mode and the slow delay of 320ms; I don’t find it necessary.

What you don’t want to do is try to play it in and then sit there and move your notes around, that’s ridiculously slow. Just quantize and set them to the same velocity.

The editing of the start times of CSS is extremely consistent and dependable, so this actually makes it an easier library to work with in my experience, as opposed to other libraries that have not been edited with consistent delays from note to note.
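To make that workflow concrete, here is a rough Python sketch of what the velocity split plus negative track delay amounts to. This is purely illustrative (the function is hypothetical, not a CSS or DAW API); the 100/250 ms numbers and the 65/64 velocity split are the ones from the post, and in practice your DAW applies the shift itself when you set the track delay:

```python
# Velocity-based routing for CSS-style legato, as described above.
# Notes with velocity >= 65 go to the "fast" (100 ms) track,
# notes with velocity <= 64 go to the "slow" (250 ms) track,
# and each note's start is pre-shifted by that track's delay
# so the audible onset lands on the grid.

FAST_DELAY_MS = 100  # "fast" legato speed, track delay -100
SLOW_DELAY_MS = 250  # "slow" legato speed, track delay -250

def split_and_offset(notes):
    """notes: list of (start_ms, velocity) tuples.
    Returns {"fast": [...], "slow": [...]} with shifted start times."""
    tracks = {"fast": [], "slow": []}
    for start_ms, velocity in notes:
        if velocity >= 65:
            tracks["fast"].append((start_ms - FAST_DELAY_MS, velocity))
        else:
            tracks["slow"].append((start_ms - SLOW_DELAY_MS, velocity))
    return tracks

# A note on beat 1000 ms at velocity 90 plays 100 ms early;
# a note on 2000 ms at velocity 40 plays 250 ms early.
print(split_and_offset([(1000, 90), (2000, 40)]))
```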
 
I was not meaning to imply any judgement about CSS being good or bad, just noting that the latency is due to scripting rather than the raw natural instrument attack transient. As you pointed out, this is usually related to the legatos, and CSS is renowned for having "fancy" legatos, for lack of a better word, so naturally it has quite a lot of latency. Other libraries may have more "subtle" legatos, which don't require as much latency to achieve. I can't speak to which libraries might be inconsistent, but I agree that would be a huge problem... other than "humanization", which is a good thing... you simply take the average of the humanized output to be somewhere in the middle of its range and correct by that amount.
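As a quick worked example of that "middle of the range" correction (the millisecond values below are made up for illustration, not measured from any actual library):

```python
# If a patch has a fixed scripted delay plus a random humanization
# spread, correct for the fixed part plus the midpoint of the spread.

def track_delay_ms(fixed_delay_ms, humanize_min_ms, humanize_max_ms):
    """Negative track delay to dial in: fixed scripted delay plus
    the midpoint of the humanization range."""
    midpoint = (humanize_min_ms + humanize_max_ms) / 2
    return -(fixed_delay_ms + midpoint)

# e.g. a hypothetical 60 ms scripted delay, with humanization adding
# 0-20 ms at random: correct by the fixed 60 ms plus the 10 ms midpoint.
print(track_delay_ms(60, 0, 20))  # -70.0
```

Individual notes will still land up to half the humanization range early or late, which is the point: the randomness stays, but it is now centered on the grid.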
 
Great points!

The humanization factor I have mixed feelings about. I see your point about adding realism, although I would need to be able to turn it off. My perspective on this recently changed when I worked on a score that was my first time having it recorded by live musicians. The mock-up needed to be blended with the live recording, so the mock-up had to be on the grid or it would sound messy. If I ever want things to be “loose” I can easily set that in my DAW with the quantize settings, and I don’t want the sample developers making those decisions for me. Having the option is nice, though, like the “tightness” control in LA Modern Percussion. I imagine the Synchron player has something like that.
 
Thanks David!
Do you know of a similar effort to centralize in a database or spreadsheet the differences of loudness of instruments/articulations?
Anne-Kathrin Dern, for example, configures every track with a MIDI volume of 90 by default, a number that's changed when a library/instrument/articulation is noticeably louder or quieter.
That way it's possible to mix and match library tracks without wasting time.
 
I don’t think there’s such a database, but this is something probably best done by ear anyway. There are too many variables involved in that one.

Anne’s method of using 90 for CC7 as a starting point is a great way to go. Hans also recommends 90 as a starting point, so there you go. I recently built a VEPro template, and I like having that extra CC7 headroom from 90 to 127 to boost an instrument that’s too quiet. I don’t use the Cubase MIDI track volume; instead I set CC7 to 90 in the MIDI flags on all my tracks in my template, and can adjust up or down in the flags. This is because Cubase won’t actually send the CC7 volume from the MIDI track volume unless you move or automate that fader, whereas the MIDI flag will always reset the track volume when you open a new project.
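For anyone curious what that "volume 90" actually is under the hood: a CC7 (channel volume) message is just three MIDI bytes. A minimal Python sketch, not tied to Cubase or any particular player:

```python
# A MIDI Control Change message is: status byte (0xB0 | channel),
# controller number, value. Controller 7 is channel volume.

def cc7_message(channel: int, value: int = 90) -> bytes:
    """Raw bytes for a CC7 message on the given channel (0-15)."""
    if not (0 <= channel <= 15 and 0 <= value <= 127):
        raise ValueError("channel must be 0-15 and value 0-127")
    return bytes([0xB0 | channel, 7, value])

# Channel 1 (index 0), volume 90 -> 0xB0 0x07 0x5A on the wire.
print(cc7_message(0).hex())  # b0075a
```

Starting at 90 rather than 127 leaves 37 steps of CC7 headroom above the default, which is the point of the method described above.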
 
Yeah,
I think Anne-Kathrin Dern kind of summed it up well here: (14:38)



If it were live musicians, and the timing was off enough that you'd get them to do another take, then that level of 'humanity' is probably excessive.
 

Hehe, yep, and in fact the movie I mentioned was actually Anne’s. I had the privilege of writing some additional music for her, and I learned a ton. The methods she goes through on her channel really are ideal for getting high quality while still being efficient.
 
That's brilliant.
I have a ton of respect for both Anne and yourself.
Keep up the great work 👍

Thanks,

JJ
 
Maybe this has been asked before, but why aren't developers chiming in with this information? Obviously they should know these delay offsets, as they developed these libraries, and I am sure they use them on a daily basis. Have we asked them? Some developers are very open with their delay offsets, such as the Cinematic Studio Series; granted, they have to be, considering it's a major factor in making those libraries sound as good as possible. But surely the same reasoning should apply to most developers, or am I wrong here?
 
Calculating the offset is not an exact science. You have to do it by ear, which is time-consuming and somewhat subjective.

I think the Cinematic Studio products have particularly large offsets, along with their gorgeous legatos, so they have been more under the spotlight and have gotten involved, which is kudos to them. It may be a while, if ever, before publishing offsets becomes standard practice.
 
Measure what, though? I don't think you are understanding the issue: the delay is caused by a slow attack, not by null audio at the start of the sample.
 
Actually, you're looking for the transient of when the sample hits. That can be measured by bouncing the MIDI in place to audio, then zooming into the waveform and measuring where the transient starts relative to where it should have been.
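A bare-bones sketch of that measurement in Python, assuming you've already bounced the note to audio and loaded it as a list of floats in the range -1.0 to 1.0, with the note placed exactly at the start of the bounce (the 0.02 threshold is a guess you'd tune by looking at the waveform; a slow bowed attack has no single "right" onset point, which is the subjectivity mentioned above):

```python
# Find the first sample whose absolute value crosses a threshold and
# convert that sample index to milliseconds. If the MIDI note sat
# exactly on the beat, this onset time is the delay to correct for.

def onset_ms(samples, sample_rate, threshold=0.02):
    """Return the time in ms of the first sample above `threshold`,
    or None if the audio never crosses it."""
    for i, s in enumerate(samples):
        if abs(s) >= threshold:
            return 1000.0 * i / sample_rate
    return None

# 100 ms of silence at 44.1 kHz, then the transient hits:
silence = [0.0] * 4410
hit = silence + [0.5, 0.4, 0.3]
print(onset_ms(hit, 44100))  # 100.0 -> set track delay to -100
```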
 