I posted this to another thread but I think it should be its own post.
TL;DR:
The faster people can get from denial, anger, and bargaining all the way to acceptance, the better off they'll be. There WILL be some artform that comes out of this, just like photography and CGI art came out of those technologies. There were similar concerns about synthesizers, and of course sample libraries. This is on a whole other level, but the point remains. The question is: will you drown in the flood as a laggard in denial, or find a way to build a boat and ride the wave as long as possible? But one thing you can count on: the flood is coming, and you can't stop it.
Article: "So Adobe Firefly AI isn't as squeaky clean as it seemed. Should we really be surprised?" (www.creativebloq.com)
These two articles are a great example of why the courts will never rule that AI companies must use only copyright-cleared content in training and stop the great replacement.
In the articles, their definition of "ethical" is not about asking creators for permission, or even paying them. Shutterstock and Getty, mentioned in these articles, trained their own AI models on their contributors' content. These are the same companies that artists think might win the legal battle for them. But they're also the ones you'll have to fight if you want to stop copyrighted material from being used for training.
Look at what Adobe are quoted as saying in the above article:
Hayward also noted that Adobe Stock contributors who submitted AI-generated imagery would qualify for Adobe's 'Firefly bonus', which it paid to contributors whose content was used to train the first public version of the AI model.
So when companies like Midjourney and OpenAI build massive datasets of imagery by scraping the internet for everything they can get their hands on, it's "theft" and "unethical". But when Shutterstock, Adobe, and Getty Images do it to their own content creators, that's fine?
But don't worry: if Adobe used your images to train their AI, they'll throw a few dollars at you and call it a "bonus". That will surely make up for losing your entire industry. We know it can't possibly be more than a few dollars, because anything more would have bankrupted them.
It's sad that artists (and in-denial composer-producers) think they'll be fought for in court, when these companies have zero interest in fighting for them and are already actively betraying them. They care about THEMSELVES being replaced by AI, not you! They only cared that someone ELSE "stole" your work to train their AI. Like Adobe, Universal Music will do the same thing: talk about how much this harms artists on the one hand, then casually train their own AI on all your work, call themselves the ethical ones, and expect you to be grateful. Here, have $20.
It should be even more insulting, because unlike the AI companies, these are the very companies claiming it's theft and bad for artists, yet they'll go ahead and do the same thing to their own creators and act like it's somehow different.
Let's say you have music published with Universal Music. How would you feel if Universal Music made an AI like Udio (only better, because it's the future), trained it on every track in their catalogue, and you suddenly found you'd been paid a hundred bucks or so for all your work being used in the training?
The best outcome for these companies is that the likes of OpenAI end up licensing content from them. That's why they're so casual about making their own models: they know the legal cases aren't going to make THEIR models unlawful.
The faster creators give up on the fantasy that they can stop this, the better off they'll be. If you worked really hard and were really successful, you might marginally slow down a company or two for maybe six months. By the time these cases even set a legal precedent, open source will have already made it impossible to go back, even if you wanted to.
---
There are plenty more reasons why the courts won't decide this the way you want them to, partly because artists and content creators are using arguments that guarantee that outcome. Focusing on the output itself, on how good or close it sounds, ensures you lose. If your premise is that the training data itself is unlawful, then the output must be entirely irrelevant. It can't matter how good it sounds, or how close it comes to copyright infringement in any traditional sense. It must be that no matter what it sounds like, it's ALL copyright infringement. But it isn't!
Strike 1.
They only started caring about AI training data once it came for their own industry and was good enough to be a threat. Where were they for the couple of years when visual artists were losing their shit, saying the exact same things about image generators? Which only further proves that they evidently care only about the output, i.e., how it actually sounds.
Strike 2.
The premise they don't realize they keep setting up here is one whose logical conclusion is that, legally, these companies merely have to take sufficient steps to make it impossible for their models to generate something that actually infringes copyright.
Strike 3.
I see none of them mention or account for user-uploaded image references, source images, or open source. All they do is act morally offended and use strong words about how it's definitely theft, while admitting that legally it's an "open question" and there's still a chance. Where are the lawsuits to ban open-source AI? Facebook are currently on a roll releasing open-source AI models, and I don't see anyone advocating lawsuits against any of it. All the LLMs are trained on uncleared copyrighted work as well.
And these corporations are among the biggest, richest companies on earth, even before you count the likes of BlackRock and Vanguard (which own each other), which hold stakes in all of them and essentially the entire stock market. The powers that be want AI to continue, and there's absolutely no way they'll restrict development or criminalize it to the degree they'd have to in order to stop this from happening to artists. All the major AI companies would have to junk their models, custom image inputs would have to be banned, open-source development outlawed, every AI generation already produced declared copyright infringement by definition, and merely having an open-source model on your computer would have to be criminalized.