I feel pro-AI when it comes to the realism, playability, and out-of-the-box ease of production of virtual instruments, including the special challenge of exposed solo instruments and small ensembles like string quartets. These are matters many of us discuss on this site.
I have long dreamed of an AI application built specifically for that goal. I'm not so much interested in AI writing the music as in AI helping perform it: producing best-in-class articulations and note-to-note transitions, controllable more the way a composer talks with an instrumentalist who has the skill set to respond.
For instance, knowing how much portamento does what to a phrase. Or handling vibrato in all its complexity and humanness. Or any of the myriad other factors that add up to musicality and realism, such as producing the vibrato waveform a highly skilled instrumentalist actually generates, which is more realistic and musically complex than either a cut-and-splice set of cross-faded recorded samples or a physically modeled instrument that doesn't quite capture the complex waveform you hear from skilled players on recordings.
Of course, physical models might use AI to incorporate more components and greater component complexity in timbre generation. And AI might also help sample library makers and users with many of the things that are hand-controlled now.
But I'm really hoping for breakthroughs in the use of AI that fundamentally elevate what a virtual instrument is and how we control it: taking the best of both approaches to instrumental timbre (recorded samples and algorithmic synthesis), and placing those instruments in a gorgeous, complex, realistic space.
The application SynthV sounds like a real shift in the success of applied AI for singing. I imagine that shift rests in part on the broader field of speech synthesis, which has historically had much higher commercial value than timbre synthesis and so has had far more R&D resources thrown at it.
Nonetheless, I wonder what is happening in AI-driven instrumental timbre generation... I hope something is happening out there; I just haven't heard of it, and I don't see much movement in that direction from sample libraries or physical models. I have a friend who was one of the developers of a very successful speech recognition/generation system that probably half the population uses all day long, and he has more recently moved into music AI, but as far as I know he is working with real instrumentalists, not virtual instruments.
Anyone know of anything happening with AI and virtual instruments?