Cinemachine
aka Peter
The video with the LA creatives was good. Hendrick talks about the potential of a DAW that can see ahead.
The instrument modelling discussion was amazing too, and a great way to abolish the huge CPU, RAM and ROM load of sample voices. It would be amazing to see AI technology learn here as it has done in our art sector.
For instance, if you look at VFX, we've been recreating faces for ages to replicate actors in 3D: taking textures, scans and other parts of the subject, then bringing it all together to get something close, but it hits the uncanny valley. It's the same as we do with sampling. It's very close, just not there yet.
But along comes the deepfake, where loads of images and videos of a face are analysed and used to give an amazing representation, even in its earliest releases. It goes in a new direction and bypasses the complexity of recreating materials, muscles, motion, lighting, etc.
The same concept could be applied to an instrument. If an AI were fed samples and performances of an instrument or section, it could learn it and do a much better job than us manually chopping up and blending samples. I know it sounds difficult, but Adobe previewed some impressive audio tech years ago that could recreate a person's speech from a short sample of their voice, and AI voices are also getting there. The patterns we speak in, the tones and so on — it's the same as an instrument, so we're not a million miles away from the concept being feasible.
Not saying OT are doing this; the video was just very interesting and sparked some thoughts.