guerrax
> I meant concerned by a pre-delay, sorry.

Thanks! I'm not sure what you mean, though, by what's affected.
Thanks for your answer!
> Any library is going to have delay, other than a few percussion or piano libraries -- especially legato articulations.

I recall reading a post on VIC once upon a time wherein someone stated that VSL libraries do not require negative delay -- am I misremembering, or is that correct?
> CSS was hard to wrap my head around until I figured out the workflow. For me, I almost always use the "fast" legato speed of 100 ms. All you need to do is set the velocity of the notes to above 65, which you can do using a keyboard shortcut, and set the track delay to -100.

I do think some libraries take this issue to a much bigger extreme. CSS, for example, has ridiculous amounts of latency in some cases that is not attributable to the natural instrument's attack transient. It's quite possible that VSL has minimized it to such an extent that people just aren't noticing it. The VSL Synchron and ViPro players also introduce actual humanization to pretty much every patch, which adds further delay in an inconsistent manner -- which is what we want, but it also makes it hard to pinpoint an exact amount of latency to correct for. I do think this topic with regard to VSL libraries is a worthwhile discussion at some point; I just haven't had time to dig into it.
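For what it's worth, the arithmetic of that workflow can be sketched in a few lines of Python. This is only an illustration: the helper name is made up, and a real DAW does this internally when you set a negative track delay.

```python
# Illustrative only: what a track delay of -100 ms does to note start
# times. apply_track_delay is a made-up helper, not any DAW's API.

LEGATO_LATENCY_S = 0.100  # the CSS "fast" legato latency mentioned above

def apply_track_delay(note_times_s, delay_s=-LEGATO_LATENCY_S):
    """Shift every note start by delay_s seconds (negative = earlier),
    so the slow legato attack still lands on the grid."""
    return [t + delay_s for t in note_times_s]

# Notes written on the grid at 0.5 s and 1.0 s now trigger 100 ms early:
print(apply_track_delay([0.5, 1.0]))
```

The same idea applies per articulation: each legato speed would get its own compensation amount.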
> Great points!

I was not meaning to imply any judgement about CSS being good or bad, just noting that the latency is due to scripting rather than the raw natural instrument attack transient. As you pointed out, this is usually related to the legatos, and CSS is renowned for having "fancy" legatos, for lack of a better word, so naturally it has quite a lot of latency. Other libraries may have more "subtle" legatos, which don't require as much latency to achieve. I can't say which libraries might be inconsistent, but I agree that would be a huge problem -- unlike "humanization", which is a good thing: you simply take the average of the humanized output to be somewhere in the middle of its range and correct by that amount.
> Here's the link.

Can you please post a link? I somehow can't open or save the spreadsheet.
> I don't think there's such a database, but this is something probably best done with your ears anyway. There are too many variables involved in that one.

Thanks David!
Do you know of a similar effort to centralize, in a database or spreadsheet, the differences in loudness between instruments/articulations?
Anne-Kathrin Dern, for example, configures every track with a MIDI volume of 90 by default, a number she changes when a library/instrument/articulation is noticeably louder or quieter.
That way it's possible to mix and match library tracks without wasting time.
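As a sketch of that convention (the patch names and offsets below are invented numbers for illustration, not measurements from any library):

```python
# Sketch of the "default MIDI volume 90" balancing convention described
# above. All offsets are hypothetical examples, not measured values.

DEFAULT_VOLUME = 90  # MIDI CC7 default for every track

# Per-patch corrections for noticeably louder/quieter material:
volume_offsets = {
    "brass_ff_shorts": -8,   # hypothetical: this library runs hot
    "harp_harmonics": +10,   # hypothetical: this library runs quiet
}

def track_volume(patch):
    """CC7 value for a track: the shared default plus any patch offset."""
    return DEFAULT_VOLUME + volume_offsets.get(patch, 0)

print(track_volume("violins_legato"))  # unlisted patch keeps the default
```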
Yeah, great points!
The humanization factor I have mixed feelings about -- I see your point about adding realism, although I would need to be able to turn it off. My perspective on this changed recently when I worked on a score that was my first to be recorded by live musicians. The mock-up needs to be blended with the live recording, so the mock-up needs to be on the grid or else it would sound messy. If I ever want things to be "loose", I can easily set that in my DAW with the quantize settings, and I don't want the sample developers making those decisions for me. But having the option is nice -- for example, LA Modern Percussion has a "tightness" control. I imagine the Synchron player has something like that.
Yeah, I think Anne-Kathrin Dern kind of summed it up well here (14:38):
If it was live musicians and the timing was such that you'd get them to do another take, well, then that level of 'humanity' is probably excessive.
> That's brilliant.

Hehe, yep, and in fact the movie I mentioned was actually Anne's. I had the privilege of writing some additional music for her, and I learned a ton. The methods she goes through on her channel really are ideal for getting high quality while still being efficient.
> Actually, if you convert the track to audio, it can be measured in most DAWs and is very accurate.

Calculating the offset is not an exact science. You have to do it by ear, which is time-consuming and somewhat subjective.
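A rough sketch of that measure-from-audio idea, assuming you bounce a single on-the-grid note to a mono file. The function name and the simple amplitude threshold are my own; real material with soft attacks may need proper onset detection rather than a raw threshold.

```python
import numpy as np

def onset_offset_ms(audio, sample_rate, threshold=0.01):
    """Time in ms of the first sample whose amplitude exceeds threshold.
    With a note placed exactly at t = 0 in the bounce, this is the
    latency to compensate with a negative track delay."""
    above = np.flatnonzero(np.abs(audio) > threshold)
    if above.size == 0:
        return None  # silence: nothing to measure
    return 1000.0 * above[0] / sample_rate

# Fake bounce: 100 ms of silence at 44.1 kHz, then the attack transient.
sr = 44100
bounce = np.concatenate([np.zeros(4410), 0.5 * np.ones(64)])
print(onset_offset_ms(bounce, sr))  # -> 100.0, so set track delay to -100
```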