Orchestral Tools Premiere Event—Thursday, December 17 (+ a massive announcement…)

The video with the LA creatives was good. Hendrik talks about the potential of a DAW that can look ahead.

The instrument-modelling discussion was amazing too, and a great way to do away with the huge CPU, RAM and storage load of sample voices. It would be great to see AI technology learn instruments the way it already has in my own corner of the arts.

For instance, in VFX we've been recreating faces for ages to replicate actors in 3D: taking textures, scans and other parts of the subject, then bringing it all together to get something close, but it still hits the uncanny valley. It's the same as what we do with sampling: very close, just not there yet.

But then along come deepfakes, where loads of images and videos of a face are analysed and used to give an amazing representation, even in the earliest releases of the technique. It goes in a new direction and bypasses the complexity of recreating materials, muscles, motion, lighting and so on.

The same concept could be applied to an instrument. If an AI could learn the instrument from samples and performances of the instruments or sections, it could do a much better job than us manually chopping up and blending samples. I know it sounds difficult, but Adobe previewed some impressive audio tech years ago that could recreate a person's speech from a sample of their voice, and AI voices in general are getting there too. The patterns we speak in, the tones and so on are not so different from an instrument, so the concept of this being feasible with AI isn't a million miles away.
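Just to make that concrete (and to be clear, this is only my own speculation, nothing OT has shown): the rough shape of "learn the instrument from recordings rather than chop them up" could look something like the toy sketch below. It assumes PyTorch/torchaudio and a hypothetical folder of single-note recordings with the MIDI pitch in the filename; real research systems like Google's DDSP are of course far more sophisticated.

```python
# Illustrative sketch only: a tiny pitch-conditioned autoencoder over mel
# spectrograms, trained on whole recordings of one instrument instead of
# hand-chopped samples. File layout and names are hypothetical, e.g.
# "recordings/violin_060.wav" where 060 is the MIDI pitch.
import glob
import torch
import torch.nn as nn
import torchaudio

mel = torchaudio.transforms.MelSpectrogram(sample_rate=44100, n_mels=128)

class InstrumentModel(nn.Module):
    def __init__(self, n_mels=128, latent=64):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(n_mels, 256), nn.ReLU(), nn.Linear(256, latent))
        # The decoder is conditioned on pitch, so one model covers the whole range
        self.decode = nn.Sequential(nn.Linear(latent + 1, 256), nn.ReLU(), nn.Linear(256, n_mels))

    def forward(self, frames, pitch):
        z = self.encode(frames)                        # (time, latent)
        pitch_col = pitch.expand(frames.shape[0], 1)   # same pitch value for every frame
        return self.decode(torch.cat([z, pitch_col], dim=1))

model = InstrumentModel()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

for path in glob.glob("recordings/*.wav"):
    audio, sr = torchaudio.load(path)
    pitch = torch.tensor([float(path.split("_")[-1][:3]) / 127.0])  # MIDI pitch from filename
    frames = mel(audio.mean(dim=0)).T.log1p()          # (time, n_mels)
    recon = model(frames, pitch)
    loss = nn.functional.mse_loss(recon, frames)       # learn to reproduce the timbre
    optim.zero_grad()
    loss.backward()
    optim.step()
```

Obviously something this small would sound terrible; the point is only that the training data is whole performances, and the model (not a human editor) works out how the instrument behaves across them.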

Not saying OT are doing this; the video was just very interesting and sparked these thoughts.
 
Can you post a link to the video, please? Interesting stuff and I seem to have missed it.
 

I agree with this. To me, the discussion about articulations was especially revealing, and I admire OT's view that it's not good for composers when current limitations force us to live with nothing but legato and spiccato.

If I'm not mistaken, OT did some pioneering work with Adaptive Legato back in the day, creating the ability in the Berlin Series to turn any long interval into a legato.

So taking all of this together, I would expect some kind of new technology, possibly some new scripting engine inside SINE, like Capsule was for Adaptive Legato...
 
I think the logical trajectory for any developer of digital products at a certain point becomes not just to create and sell your own (a limited business model), but to become a platform for reselling yours and others' (an unlimited one).

Going from a creator of products to a platform for products basically removes the ceiling on how big a vision you can achieve, how much of the market you can affect, and how much revenue you can eventually generate. What SINE is so far definitely feels like a step towards this.

Being a store where people can purchase the content they need directly within the context where they do their work is a huge advantage. And from that perspective OT has the lead by far, since they are already doing exactly this, albeit on a smaller scale and with their own content.

We'll see what happens. But with a store inside the DAW environment already established, and OT branching out into more and more collaborations, I would be very surprised if it were not intended to grow into a platform for as much sampled content as possible.

You don't build a store if you don't plan to sell things, right? :)
 
And with Kontakt (which has for many years been the de facto platform) feeling older and more left behind every day, this slot is basically already begging to be taken! The difference between OT and the other devs that also abandoned Kontakt to make their own players is that OT made their player not just a player but a store: one that doesn't just play stuff but sells stuff, and that is already in your DAW. If they are not looking to fill the shoes of the next major library platform, I don't know what they are doing. :D
 

they talk about a DAW tailored for composers.
A new DAW :shocked:
I thought about that too. But it would be very surprising and a big leap, because SINE isn't really established yet from my perspective. It would be like taking two steps in one: too big, and it would fail ;) But who knows what investors and devs they could have attracted to join their vision.

P.S. Special shoutout to @Manaberry for that brilliant Jurassic Park video...
Where can I find @Manaberry's Jurassic Park video?
 