
"Generative AI is the greatest risk to the human creative class that has ever existed"

Sora was trained on copyrighted works.

They’re never going to let a court rule that they have to junk all of this. A precedent set with music would apply to everything else as well.
At this point, what the courts will decide is still an open question.
 
No, but in a normal industry/sector, one not plagued by heavy monopolies and too much concentrated power, it would have been the immediate thing to do.
If you create a product by stealing, you can't sell that product; you make a new one that isn't based on stolen data.
Ideally, they should throw everything out and start anew.

For some reason, tech companies are above the law. And if they do get punished, they're made to pay fines that are like 0.000000000000001% of what they make in a minute.
The issue is that it's not cut and dried that this is stealing. You can't say it's copyright infringement, because the law can't be applied that way. Copyright law deals with the finished product, whereas your objection demands we say that the finished product makes no difference at all. You only started caring when it got good, and as they bring out more art models, when it starts getting good at someone's particular area. Where were you years ago when they came out with DALL-E? Where were you all through Midjourney 1-6? Where were you before Suno and Udio, when AI music was crap? You only cared about the output, not the process of training. Now the output is good, now you care about the training.

The law would have to come up with a new precedent that deals with copyright in a different way, where it doesn't care about the output at all.

Even the "it's theft" opinions aren't all the same. Getty images sued Stable Diffusion for "stealing" their work, but then Getty made their own AI model trained on their catalogue, which means they've now "stolen". But Getty owns the rights to that content, that's different, they will argue. But I'm sure you should agree, it really makes no difference to the content creators and artists and photographers. Universal Music is also suing AI companies. Does anyone really think they're not excited about making their own AI to screw the musicians, composers and producers that created the content Universal own the same way Getty did?
 
I think I read that it’s called Jukebox.
Jukebox? I don't get it.

EDIT: Oh wait, I get it. Yes, Jukebox is an old model from back in 2020 that everyone forgot about because it was so hard to run and took so long.

But yes, I think they have this, but way, way better now.
 
At this point, what the courts will decide is still an open question.
The courts won't decide that OpenAI and every other AI company have to junk all their models, including the entire GPT series.

I'm sorry but this cannot work out the way you want it to.

Because even if they did, which they won't, they'd have to also ban people from using their own reference/input material.

And even if you managed to get that, which you won't, Open Source models mean that anyone can train their own models on absolutely anything they want and easily share them. Stable Diffusion's release version could use entirely copyright-free training data, and it wouldn't make a single bit of difference. People already don't care about the training for the base model, because they know the power is in custom models.

You'd not only have to ban AI, you'd have to make it illegal to develop it for Open Source, and also go after anyone who's downloaded any models, etc. AND also force everyone who has ever used AI on anything to take it down or get sued for infringement.
 
The courts won't decide that OpenAI and every other AI company have to junk all their models, including the entire GPT series.
That's a blanket prediction.

Because even if they did, which they won't, they'd have to also ban people from using their own reference/input material.
It's unclear how you reached that conclusion, but: no.

And even if you managed to get that, which you won't, Open Source models mean that anyone can train their own models on absolutely anything they want and easily share them.
People do illegal things all the time. But that doesn't mean the behavior is tolerated.

For example, plenty of people visit Warez sites to get cracked VSTis. Should we then not bother purchasing legal copies of things?

Stable Diffusion's release version could use entirely copyright-free training data, and it wouldn't make a single bit of difference. People already don't care about the training for the base model, because they know the power is in custom models.
It's been observed that limiting the training sets to copyright-free materials makes quite a bit of difference on the quality.

You'd not only have to ban AI, you'd have to make it illegal to develop it for Open Source, and also go after anyone who's downloaded any models, etc. AND also force everyone who has ever used AI on anything to take it down or get sued for infringement.
You wouldn't "ban AI", you'd ban works that are the result of illegal use of AI.

In counterpoint to your arguments, courts have ruled that copyright infringement is illegal, and companies regularly force people who have made illegal copies to take them down or get sued for infringement. Case in point: YouTube.
 
I don't know about you but I see quite a difference between video generation and music generation.

With video generation, you become a movie director. You still are an artist.

With music generation, you are, well, not much right now. Maybe when the tools are more customizable, you will be called a director too, a music director.

Not going to lie, I find video generation exciting.

And that's the problem. Each artist finds the AI generation of a related art exciting. Music artists found image generation cool for creating covers. Image artists find music generation cool because they can get music that fits their images.
And both image and music artists salivate at the idea of video generation.

Artists are not united and they don't care about the other arts.

It's only when AI has taken over the entertainment business that they will unite, but by then it will be too late.

That's my take, as a non-professional anyway.
 
That's a blanket prediction.


It's unclear how you reached that conclusion, but: no.


People do illegal things all the time. But that doesn't mean the behavior is tolerated.

For example, plenty of people visit Warez sites to get cracked VSTis. Should we then not bother purchasing legal copies of things?


It's been observed that limiting the training sets to copyright-free materials makes quite a bit of difference on the quality.


You wouldn't "ban AI", you'd ban works that are the result of illegal use of AI.

In counterpoint to your arguments, courts have ruled that copyright infringement is illegal, and companies regularly force people who have made illegal copies to take them down or get sued for infringement. Case in point: YouTube.
I wrote a long main post about this topic here.

If you're going to reply, show me you've actually read it.

 
I don't know about you but I see quite a difference between video generation and music generation.

With video generation, you become a movie director. You still are an artist.
I suggest focusing on the similarities: every element in video generation came from someone else's video. Is there a bird in the video? Someone took a video of birds. Is there someone on a swing? Someone took a video of someone on a swing. And so on.

If some element in the video is unique enough, it's even possible to identify the video from which it came.

All these things took someone time and effort. All this effort is being harvested by AI.
 
I don't know about you but I see quite a difference between video generation and music generation.

With video generation, you become a movie director. You still are an artist.

With music generation, you are, well, not much right now. Maybe when the tools are more customizable, you will be called a director too, a music director.

Not going to lie, I find video generation exciting.

And that's the problem. Each artist finds the AI generation of a related art exciting. Music artists found image generation cool for creating covers. Image artists find music generation cool because they can get music that fits their images.
And both image and music artists salivate at the idea of video generation.

Artists are not united and they don't care about the other arts.

It's only when AI has taken over the entertainment business that they will unite, but by then it will be too late.

That's my take, as a non-professional anyway.
The new art form that will come out of music AI isn't clear, and neither is how people will enjoy music (or art in general) in the future.

But people won't stop making music. Look how much people like using Udio. You've lowered the barrier to entry and now more people are interested in "creating". No, what will happen is that the musicians, composers and producers who are already very talented will use the AI and their own abilities to produce something totally new that we haven't seen before.

Consider video games. AI will generate videogames as good as or better than today's AAA titles far sooner than it reaches the point where it's so good that literally all human input is irrelevant. Who knows how long that will take, or what that world looks like if/when it does, so it's not worth thinking about. What will happen is that a game producer will use the AI to make games that would be impossible to produce normally. An indie developer will make AAA-quality games, while the AAA game studio won't go out of business... no, the AAA studio will have the resources to make something EXPONENTIALLY more than what we consider a videogame today. And for examples of that, I'll have to refer you to science fiction.
 
The new art form that will come out of music AI isn't clear, and neither is how people will enjoy music (or art in general) in the future.

But people won't stop making music. Look how much people like using Udio. You've lowered the barrier to entry and now more people are interested in "creating". No, what will happen is that the musicians, composers and producers who are already very talented will use the AI and their own abilities to produce something totally new that we haven't seen before.

Consider video games. AI will generate videogames as good as or better than today's AAA titles far sooner than it reaches the point where it's so good that literally all human input is irrelevant. Who knows how long that will take, or what that world looks like if/when it does, so it's not worth thinking about. What will happen is that a game producer will use the AI to make games that would be impossible to produce normally. An indie developer will make AAA-quality games, while the AAA game studio won't go out of business... no, the AAA studio will have the resources to make something EXPONENTIALLY more than what we consider a videogame today. And for examples of that, I'll have to refer you to science fiction.
Like I said, we are all egoists; it's our nature. We, as people who like the process of creating music, despise music AI but love all the other generative AI possibilities.

I won't lie, I find the possibility of creating my own movie or my own video game with just prompts extremely exciting, even though I realize that many people will lose their jobs.
 
I wrote a long main post about this topic here.
We've both written quite a bit. This thread is 54 pages long. o_O
If you expect me to reply, show me you've actually read it.
Fixed that for you. :rolleyes:

The faster people can get from denial, anger, bargaining all the way to acceptance, the better they'll be.
See? You also wrote:

I don't think you could have possibly read my post if you're asking this.
Here's another possibility: reasonable people can disagree, even after carefully listening to the other person.

Personally, I doubt there will be a single ruling that clearly defines how we'll use AI. I think it's much more likely that various courts will offer different rulings, and there will be a patchwork of competing rulings that will eventually be unified. Add competing legislation (like the ELVIS Act), and things will likely remain messy.

In the meantime, companies will try to convince their governments that not only is AI somehow Fair Use, but that any ruling against them will cause their country to fall behind in the AI marketplace.
 
That's a blanket prediction.
I'm going to assume you read the post I directed you to on the court cases.

Ed said:
Because even if they did, which they won't, they'd have to also ban people from using their own reference/input material.
It's unclear how you reached that conclusion, but: no.
If you don't understand how I reached this conclusion then you're not very experienced using AI.
Using a reference means I can "steal" any work I want to in a similar way. If that's what you're trying to stop, then you'll have failed.

In counterpoint to your arguments, courts have ruled that copyright infringement is illegal, and companies regularly force people who have made illegal copies to take them down or get sued for infringement. Case in point: YouTube.

How is that a counterpoint?

When did I deny copyright infringement is illegal? The whole point of the legal issue is that this can't possibly be called copyright infringement in the traditional sense. Or do you actually think AI works by having a large database of material that it just edits together like a robot Frankenstein Photoshopper?

Like it or not this isn't the same and that's why they need to establish a whole new standard.

Ed said:
And even if you managed to get that, which you won't, Open Source models mean that anyone can train their own models on absolutely anything they want and easily share them.
People do illegal things all the time. But that doesn't mean the behavior is tolerated.

For example, plenty of people visit Warez sites to get cracked VSTis. Should we then not bother purchasing legal copies of things?


Why even use that example and not a musical one? Because you know it's different.
The equivalent to what you said is Napster, which is very clearly copyright infringement.

Copyright infringement doesn't give a single shit about someone trying to get close to someone's style and using them as a "reference".

You can't even copyright a rhythm, or chord sequence.

You can have everyone convinced that Hans Zimmer made some music when he didn't, and in theory even make them think it's an actual track from one of his film scores; so long as it doesn't use the same tune etc. etc., it's protected. You can't copyright a style, and if we could, do you have any idea how much art and music would be breaching copyright? Even if everyone agrees that something is certainly a rip-off, it doesn't matter. That's not the criterion. Intent only becomes relevant after it's shown to sound close enough in very particular ways.


It's been observed that limiting the training sets to copyright-free materials makes quite a bit of difference on the quality.
You really aren't paying attention. Companies you think are on your side are making their own AIs and have no fear in doing so, and are always described as the "ethical" ones. That's even though they're training on their own content creators' work, apparently without asking them, and if they're lucky they'll be thrown a few dollars as a "bonus", as Adobe put it. These are the people you think have a chance to stop this. They aren't trying to get the outcome you're hoping for.

Even if they succeeded in stopping the main companies, how would that help you? The logical outcome is that big companies like Getty, Shutterstock, Adobe, Universal Music etc. will just license their catalogues to OpenAI, still at a fraction of what the artists would ever need to make up for it. Open Source is still there, and the very people who publish and distribute your work have made their own AI trained on everything you DIDN'T want trained.

Either way you end up in the same place you feared, just with slightly different wallpaper.

You wouldn't "ban AI", you'd ban works that are the result of illegal use of AI.

You actually think this is practical? Have you ever put a single thought into what this would mean at this point?
 
I'm going to assume you read the post I directed you to on the court cases.
No need to "assume" anything.

I literally quoted from your post to show that I had read it.

If you don't understand how I reached this conclusion then you're not very experienced using AI.
That would be an incorrect conclusion.

Using a reference means I can "steal" any work I want to in a similar way. If that's what you're trying to stop, then you'll have failed.
What I'm advocating is to stop applying laws written for humans to AI.

How is that a counterpoint?
You said enforcement wasn't possible because you would have to "force everyone who has ever used AI on anything to take it down or get sued for infringement."

My "counterpoint" was that, in cases of copyright infringement, companies regularly "force everyone who has ever made illegal copies to take it down, or get sued for infringement."

When did I deny copyright infringement is illegal? The whole point of the legal issue is that this can't possibly be called copyright infringement in the traditional sense.
I agree.

Or do you actually think AI works by having a large database of material that it just edits together like a robot Frankenstein Photoshopper?
You forgot to include how the magical back-propagation fairies adjust the weights in neural networks.

Of course I don't think that, and there's no need to be rude about it.
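For anyone following along, here's a minimal sketch of what "adjusting weights" actually means - plain Python with made-up toy numbers, nothing to do with any real model. Training nudges a few parameters to shrink an error; the training examples themselves aren't stored anywhere in what's left at the end.

```python
# Toy illustration (not any real model): fit y = w*x + b to a few example
# points by gradient descent. After training, only the two numbers w and b
# remain -- the "training data" itself is not stored inside the model.

examples = [(1.0, 3.1), (2.0, 5.0), (3.0, 6.9)]  # hypothetical (x, y) pairs
w, b = 0.0, 0.0          # the model's weights, initially arbitrary
learning_rate = 0.05

for step in range(2000):
    grad_w = grad_b = 0.0
    for x, y in examples:
        error = (w * x + b) - y          # how far the prediction is off
        grad_w += 2 * error * x / len(examples)
        grad_b += 2 * error / len(examples)
    # back-propagation in miniature: adjust the weights against the gradient
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")   # roughly w=1.9, b=1.2 for this data
```

Scale that up to billions of weights and you get the gist: what ships is a set of learned parameters, not a library of the works it was trained on.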

Like it or not this isn't the same and that's why they need to establish a whole new standard.
I agree completely.

Why even use that example and not a musical one? Because you know it's different.
The equivalent to what you said is Napster, which is very clearly copyright infringement.
My point was about the ability to police laws, not copyright infringement.

Copyright infringement doesn't give a single shit about someone trying to get close to someone's style and using them as a "reference".

You can't even copyright a rhythm, or chord sequence.
I'm well aware of that.

Companies you think are on your side...
I don't think any companies are on my side.

If any meaningful action is to be taken, it'll need to be initiated by a coalition of artists - not the companies that represent them. Companies would sell their catalog to the highest bidder in a heartbeat.

These are the people you think have a chance to stop this. They aren't trying to get the outcome you're hoping for.
That's not what I think, and I didn't write about an outcome I "hoped for" - just the one I expected.

You actually think this is practical? Have you ever put a single thought into what this would mean at this point?
How are insults like this helpful to conversation? :confused:
 
But people won't stop making music. Look how much people like using Udio. You've lowered the barrier to entry and now more people are interested in "creating".
Corporate talk...
The Udio creators are such benefactors to humanity they have "democratized" music.

"using Udio" means writing down a couple of sentences and waiting, like sheep, for the results to come.
It's not an entry. Entry to what? Creating what?

You have entry, not to creating - but to obtaining results, masters, audio files, based on a personalized request to a piece of software.
The software was created by stealing and using, without permission, millions of human works.

It's more similar to being an executive producer. You decide on the project and make it happen. But you don't make artistic decisions (at least in its current form).


People will probably always continue to strum guitars and sing hopefully - prompting AI music is not "making music".


They are geniuses down at Udio, I have to say, calling those tracks "your creations".
 
I tried Udio. Sometimes it generates a very recognizable voice. Interestingly, the most recognizable vocals came when it created a very specific kind of music popular in the Balkans. I didn't expect that. It seems they ripped off everything that was available. With globally popular music it was less obvious, but retro songs often reminded me of ABBA, and some trance vocals sounded similar to well-known singers. To me, this would be the equivalent of using pirated vocal libraries.
When it comes to the music itself, sometimes it felt pretty generic, other times not so much - like it was using a complete production from someone else.
The other problem is that you can't specify a chord progression or melody, at least I haven't found a way. I guess that's a limitation of the technology.
 
After reading through much of this thread, I feel like selling everything, buying a little place way off somewhere down by the sea and sipping a hot drink as I watch the waves roll in.

Not interested in VR headsets, AI, and the whole lot.

I’ll stick with real life. Enjoy your day.
 
After reading through much of this thread, I feel like selling everything, buying a little place way off somewhere down by the sea and sipping a hot drink as I watch the waves roll in.

Not interested in VR headsets, AI, and the whole lot.

I’ll stick with real life. Enjoy your day.
I mean damn boy, if you can afford to do that, why wouldn't you do that anyway? :D
 