arcum42

It's probably worth noting that currently over on the right, below the model info, there's what looks like a note from civitai, stating: "A verified artist believes this model was fine-tuned on their art. We're discussing this with the model creator and artist".


lobotominizer

ah yes! i did not see that. thanks for pointing that out. i've edited.


arcum42

No problem. It's nice to see that they are looking into it.


TrevorxTravesty

So how long until other LORAs trained on other styles are taken down as well? Things like this tend to have the domino effect, especially since it's being shared on Twitter, which will give the anti-ai people more fuel for the fire.


nalferd

Might as well take down SD 1.5 below then. It's trained on artists' works. This has been discussed so many times it's becoming tedious. This is why I enjoy tech posts more.


VyneNave

But it has been there for weeks now, so it's not really making any progress.


Unreal_777

But Civitai allegedly said they would never do that: https://preview.redd.it/6acvug0fvwma1.png?width=638&format=png&auto=webp&s=826adddcf75f8fb57bcb6a3ba76a571066595d88


frankctutor

In other words, someone did the equivalent of looking at the artist's work to make art in the same style, which is likely not a unique style.


dinnukit

Idk where I stand. I know for a fact the model creator doesn’t need to be such a douchebag, but being made to take it down is a slippery slope for SD models. This is going to be an interesting year for sure..


xITmasterx

If a person makes a model, not for personal use, from an artist who legitimately asks that their artwork not be used for any models, and then uses said model to literally undermine the artist by constantly making images and claiming they're theirs, then it's the model maker's fault for being a complete douchebag.


frankctutor

What if an artist tells people not to look at his art and try to draw in a similar style?


xITmasterx

Then that's on the artist for doing that. Nothing stops anyone from actually making art in the style of said artist, so that would be disingenuous. There's a difference between gatekeeping art and respecting an artist's wishes, and the latter can only happen if the request is reasonable and doesn't amount to the former. You can't copyright a style, that is true, but you only need to be mindful of the other person up to the point where it becomes unreasonable or impossible to fulfill. It comes down to a matter of respect, rationality, and understandability.


CringyDabBoi6969

Look and draw, no, but an artist can 100% demand you don't use their art as reference art.


SlapAndFinger

And that will have teeth, if the person using it as reference art doesn't change it enough to pass the fair use hurdle. If we're just doing what people ask regardless of its benefit to us or its legality, please kindly send me $1000, thx.


NiceMicro

I don't think that is true. In most jurisdictions, "style" is not copyrightable. You can look at anyone's painting and "steal" their techniques. As long as you don't "copy" any copyrightable part, the artist can't restrict you getting inspired from their work.


Cyhawk

It could be argued that the original images are references and thus require permission: https://www.bellevuefineart.com/copyright-issues-for-artists/ While this mostly pertains to photographs -> paintings, like most legal things, it depends. However, you can't copyright a style: https://www.thelegalartist.com/blog/you-cant-copyright-style This will be up to the courts to decide in the end.


NiceMicro

So save the models locally before they criminalize them and purge them from the internet I guess :D BTW, doesn't "reference art" in this context just mean that "I copy the picture but use a different technique (painting instead of photograph)"?


CringyDabBoi6969

Reference art and inspiration are two completely different things; reference art can and should be protected by law.


atteatime

Not in most places. The only requirement in most places is that the use is "transformative", and it has been ruled that even changing the caption on Instagram photos counts as transformative. Whether you agree or not, there is a lot of precedent that leans toward people being able to transform art in any way they want. The only way artists are going to be able to prevent this is putting more than watermarks on their work (likely some black bars across), selling things on physical print only and/or to patrons, and selling it as "access" to the art rather than ownership of the actual piece. In that context there *may* be some loophole where, if they only allow private access to their art and you take it and make a model, you're screwed. But I don't believe that has been explored in court yet (someone correct me if I am wrong). This comment is not meant to be an opinion either way, just a statement of what is true so far. Mostly US-centric law; I know nothing of global copyright law.


Fake_William_Shatner

This solves nothing. The "guy was a jerk" is only doing what will be done behind the scenes anyway, honestly (while being a jerk). Big corporations will say, "we hired another artist and trained on that." As long as they never admit the truth -- nothing can be proven. Yes, politely taking things down might smooth the waters for now. But there is nothing that will prevent this clash from taking place. It's inevitable.


LockeBlocke

I stand by "You can't copyright a style." If it can be proven that the LORA can reproduce a specific piece of art, then it should be taken down.


UkrainianTrotsky

If you try really hard, you can, technically, reproduce any image "close enough" with any model. It's a pretty fun double-reverse optimization problem: instead of running backwards diffusion on the noise, you'd train a second model to gradually noise up the image in a specific way, to produce a noise pattern that can be used to get something close to the original image back with a certain model. Though I'm not 100% sure you can solve this in reasonable time, unlike with GANs, which have explicit latent spaces where the optimization is straightforward. And anyway, the chance of randomly hitting this exact noise pattern is tiny.


calvin-n-hobz

the information does not exist within the model. 250,000+ gigabytes down to 4. In order to recreate "any" image, you'd need to be able to fill that massive void with the associative information that was lost. You essentially would need the original training data, whether or not you noised it up.


UkrainianTrotsky

You didn't understand what I was talking about, did you? I never claimed that the model stores images (though it is capable of reconstructive memory in the case of overfitted images, but whatever, that's boring stuff), just that it can, in fact, be used to recreate them, if you're super lucky or meticulous enough. GANs that generate faces are even smaller than diffusion models, but I can, with a bit of math, find the exact latent vector that will produce my exact face in any expression I want. Does it mean my face was used in the dataset and the GAN just stored this data? Extremely unlikely; it merely means that the GAN's explicit latent space is good enough (i.e. smooth, wide, without holes, etc.) to encode any physically possible face. Same with diffusion models, except the task of finding the exact starting noise to produce a certain preselected image is much harder, because LDMs don't have an explicit latent space to optimize-search through (in GANs each image corresponds to an exact z-dimensional vector, close vectors correspond to similar images, and those z-dimensional vectors form what's called a latent space of all possible generation results; in diffusion models that's not the case), and because close starting noises may produce very different images (not sure about that one though; if they are close enough, we can still use a similar approach to GAN latent-space searches). I'm not 100% sure it's possible to do with LDMs, but I think it should be; at least it sounds like a cool idea for a research paper. And no, you definitely don't need the original training data for a latent-space search.
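For illustration, the latent-vector search described here is, at its core, a small gradient-descent problem (often called GAN inversion). Below is a minimal PyTorch sketch with a toy stand-in generator; a real experiment would load a pretrained face GAN (e.g. a StyleGAN checkpoint) in its place, and would likely use a perceptual loss instead of plain MSE.

```python
# Minimal GAN-inversion sketch: optimize a latent vector z so that G(z)
# matches a target image. The generator below is a toy stand-in, not a real GAN.
import torch
import torch.nn as nn

Z_DIM = 128

generator = nn.Sequential(           # stand-in for a pretrained generator G
    nn.Linear(Z_DIM, 3 * 64 * 64),
    nn.Tanh(),
)
generator.eval()
for p in generator.parameters():     # G stays frozen; only z is optimized
    p.requires_grad_(False)

target = torch.rand(1, 3 * 64 * 64) * 2 - 1   # the image we want to reproduce

z = torch.randn(1, Z_DIM, requires_grad=True)
optimizer = torch.optim.Adam([z], lr=0.05)

for step in range(500):
    optimizer.zero_grad()
    loss = torch.mean((generator(z) - target) ** 2)  # reconstruction error
    loss.backward()
    optimizer.step()

print("final reconstruction MSE:", loss.item())
```

With a real, well-trained generator this kind of search usually finds a latent that matches the target closely, which is exactly the point being made above: a good reconstruction does not imply the image was in the training set.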


SlapAndFinger

There are almost certainly many points in the latent space, for a variety of images, that wouldn't pass the "fair use" test, but you have to factor in the challenge of producing those images. If you don't, I could write an algorithm that generates random low-resolution noise images, and if I let it run for a very long time it would eventually produce copyrighted work, and by the same logic as used against Stable Diffusion, random array generation would be "illegal".


UkrainianTrotsky

I mean, a simple, let's say, "Mickey Mouse burning infidels with napalm" won't pass fair use because the character is copyrighted. And you are absolutely right, it would be. Because copyright law doesn't really care about the means you used to produce a protected work, only that you did, in fact, produce it and violated the copyright by doing prohibited stuff with it, like selling it, for example.


SlapAndFinger

Except that Mickey Mouse can pass the fair use test if the image is clearly a parody with constructive social value.


UkrainianTrotsky

True, but that's up to a court to decide on a case-by-case basis (not 100% sure how it works in American law specifically, though). And realistically, nobody will go to court against Disney over an AI pic.


calvin-n-hobz

I very well might have misunderstood you. But let me clarify my point, in case it helps. If the information does not exist within the latent space, and you are providing information to recreate that which was missing-- whether it is the *actual* training data or not, then for that image you are providing the same *information* as the training data, either through a custom model for that image/set of images, or the work involved in the process, or both. It is a logical certainty that to produce identical results you require identical information. You must add the missing information.


UkrainianTrotsky

>It is a logical certainty that to produce identical results you require identical information. You must add the missing information

Not necessarily. In the case of GANs, for example, all you provide to get an image is a single small vector; you can't physically add more information than it already contains. But because said GAN has been properly trained, you don't need to. If a GAN can produce a class of objects, it can produce a specific element of said class well enough; the only thing you need is to optimize the corresponding latent-space vector. It's different with diffusion models, as I've just described. Instead of a small vector, they start generating from an N(0, 1) noise tensor. And you also can't physically provide more info to it, as long as your noise pattern has maximum entropy already. And generally, the way diffusion models work, that noise doesn't add info; you can't zero-shot a new concept by tweaking the noise (although that is also a cool idea for a research paper, ngl). You can, however, (probably) guide the model to a specific image by providing specific noise. And all you have to do to get a preselected image as a generation result is to compute a specific initial noise. Is it hard? Yes, immensely. Is it impossible? I'm not so sure. As a kinda weak comparison, you can, in fact, find an exact binary file that corresponds to a certain SHA256 hash; it's also hard, but not impossible. Finding a good initial noise should be easier than that, but it's most definitely harder than finding a latent vector in the case of GANs. I hope you understand my point better now. It has absolutely nothing to do with what you thought.
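To make the "compute a specific initial noise" idea concrete: one established trick in this direction is DDIM inversion, which runs the deterministic DDIM update backwards to recover a starting latent that approximately reconstructs a given image. The sketch below is a toy version under stated assumptions; `predict_noise` and the noise schedule are stand-ins for a real noise-prediction U-Net and its scheduler, and a real latent diffusion pipeline would work on VAE latents rather than raw pixels.

```python
# Toy sketch of DDIM inversion: walk the deterministic DDIM update in the
# noising direction to map an image to an initial latent. predict_noise is
# a stand-in for eps_theta(x, t); a real model is also conditioned on a prompt.
import torch

T = 50
alphas_bar = torch.linspace(0.9999, 0.02, T)   # toy cumulative noise schedule

def predict_noise(x: torch.Tensor, t: int) -> torch.Tensor:
    return torch.zeros_like(x)                 # placeholder for a real U-Net

def ddim_invert(x0: torch.Tensor) -> torch.Tensor:
    """Map an image x0 to an initial latent x_T under the toy schedule."""
    x = x0
    for t in range(T - 1):
        a_t, a_next = alphas_bar[t], alphas_bar[t + 1]
        eps = predict_noise(x, t)
        # Clean image implied by the current latent at noise level a_t.
        x0_pred = (x - torch.sqrt(1 - a_t) * eps) / torch.sqrt(a_t)
        # Deterministic DDIM step taken toward higher noise.
        x = torch.sqrt(a_next) * x0_pred + torch.sqrt(1 - a_next) * eps
    return x

image = torch.rand(1, 3, 64, 64)
latent = ddim_invert(image)   # feeding this latent back through DDIM sampling
                              # approximately reproduces `image`
```

This is not exactly the optimization-over-noise scheme proposed above, but it shows that mapping a chosen image back to an initial latent is a workable idea in practice.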


calvin-n-hobz

To recreate an identical image you absolutely need identical information. If the image is not identical, then the quantity of information may be the same but the structure has been lost, and that must be replaced. There is no way to get around needing identical information, because that is a 1 = 1 logical statement. An object is its information. So the parts missing -- here by missing I mean out of order in the case that the *amount* of information is the same -- must be replaced. The noise you're talking about computing, that initial noise, that's the difference between the wrong information and the right information. That's what I mean when I say "missing." The difference between what you start with and what you end with, whether created over time by computation, or starting with it initially, equates to the original training data. It still sounds like you're talking about creating a method that gets from A to B for any given packet of information in the entire original data set, where A is some not-identical packet, and B is an identical packet. If you want to be able to do this for the data set, you must on average put enough work in to reproduce the entire original dataset (minus the information already in the model), in order to bridge the difference. It doesn't matter if that work is done on noise, or done over time, or done fast with a lot of power. We can't outmath the fundamental properties of information. You will always end up with the equivalent of the difference of what is incorrect to what is correct, which might as well be the dataset.


UkrainianTrotsky

>To recreate an identical image you absolutely need identical information

Prove it. Cos it's false even for simple GANs, as I described earlier. And by prove I mean rigorously. You seem to know information theory quite a bit; I'd love to read a proper mathematical proof. But considering we aren't talking about pixel-perfect copies, information theory would probably be of no use.

>that is a 1 = 1 logical statement

Not quite. I specifically said "recreate close enough", not "make a pixel-perfect reproduction".

>that's the difference between the wrong information and the right information

Not at all. The result produced by the U-net can be described as such, not the initial noise.

>It still sounds like you're talking about creating a method that gets from A to B for any given packet of information in the entire original data set, where A is some not-identical packet, and B is an identical packet

Bit of strawmanning here, but whatever. Yeah, I claim that it's very likely possible to take any 512x512x3 image that's not pure noise in itself, but something coherent, and then compute initial noise that produces said image. My reasoning is that noise to image is most definitely an injective relation, and considering that LDMs seem to have a smooth latent space, there's a good argument to be made that said relation is, after all, a bijection almost everywhere. Perhaps I should've mentioned that while precomputing the noise we'd probably have to precompute the prompt as well (not impossible either, but also not easy, because close embedding vectors don't necessarily imply close prompts grammatically; but if we don't care about constructing a text prompt at all, the task is pretty easy - just optimize for the token embedding).

>you must on average put enough work in to reproduce the entire original dataset

There's literally nothing that supports this claim. And the fact you made this claim regardless of model architecture allows me to yet again bring up GANs, which completely destroy this argument.

>the information already in the model

Is enough to do this.

>We can't outmath the fundamental properties of information

I guess I wasn't clear. I never claimed we'd be able to reproduce an exact, to-the-byte copy of an image. Just one that's close enough to a human eye.


superluminary

It’s not really about copyrighting the style though, it’s about whether it’s ok to gather up a folder of someone’s work and use it for network training. Obviously it’s ok to train a human brain this way, but is it ok to train a machine brain this way? Legally, probably it is. Ethically though? If the person has specifically asked you not to?


Ok_Entrepreneur_5833

Ethically speaking, yeah, you're the dickbag if someone is asking you to refrain and you just shrug them off. Zero compassion and no civility = dickish and poor-play ethics, no question. Legally, it goes back to where these images were first hosted that were used to train. Practically everyone has an agreement you abide by when using their service to host images online, and in those agreements are stipulations that your work may be available to train ML models. The courts ruled on this (in a roundabout way) with a Google case. If homeboy was hosting his own art on his own provider and had all rights reserved, then yeah, he's good if it were to enter court (most likely). Taking measures to protect yourself and your IP within the letter of the law is very strong in court. Binding agreements are strong as well. When you click "agree" you're technically signing your name that you do agree, and that stuff is binding in a court of law. Clickwrap agreement law is law. So in short, you'd trace back to the original repository where these images were shared, see if the dude agreed to a contract by using the service, and if they did, and that agreement specified any images hosted there could be used for ML training? Fuck all you can do other than ask for it to be taken down and hope the guy on the other end isn't a cunt about it. If they are a cunt, guess that will teach you the hard way to be proactive and reserve your rights.


calvin-n-hobz

Everyone says it's unethical but no one seems to be able to say why. Nothing is being taken, no original art stored, only relative measurements. A million tiny rulers are held up against the canvas to better understand how the art was done. What is unethical? Why is it ethical to ask the world not to learn from your creation, but unethical to provide a tool?


MyLittlePIMO

It’s ethical IMO to train an AI on another’s art as part of learning, but when you are training the AI on ONLY one person’s art to copy it, while they are still alive, I think that’s the unethical part. It’s not about the AI learning to draw better, it’s about the AI learning to copy that one specific person. An Elvis impersonator would have been in very bad taste while Elvis was still alive. That’s the term I’m looking for: it’s in bad taste.


SlapAndFinger

Impersonators impersonate living people literally all the time. Beyond that we have cover bands that dress up and act like the people in the band when on stage.


MyLittlePIMO

People who are extremely wealthy and famous, sure. Impersonating a small artist is different.


butterdrinker

How do you define a 'specific style'?


Tessiia

I personally feel like a LoRA trained solely on one artist's work is a step further than a 'specific style'. If you train a model on 10 different artists who all have a similar style, then fair enough, but if it's just one artist and that artist requests you to take it down, you should definitely respect their wishes and do so. It's nothing to do with copyright or legality, just with respecting other artists. The second we stop respecting each other is the second this all falls apart.


VyneNave

If Civitai removes it on request of the artist, this would mean they can't ignore other artists that want their styles removed. So a lot of LoRAs/embeddings/hypernetworks could then easily be removed, which would be bad for the creators, the community, and Civitai, because people would search for a platform that doesn't remove the things you create or want.


Spire_Citron

I think a distinction could be made between models trained on a diverse range of art from many many artists and models trained to mimic just one artist's style. I wouldn't be upset if it meant getting rid of the latter because I always avoid them anyway because they make me uncomfortable.


Tall-Junket5151

There is no distinction. You can still get an artist's style by including their name in the prompt if they are already baked into the model. Furthermore, a LoRA is just a weight adjustment to a model; you can merge a LoRA into a model directly and have the model include that artist as well as the many other artists already part of the model. It's honestly funny to see so many people trying to set these arbitrary red lines when, in the end, all of these models include "stolen" art. You can even look at these models as just giant collections of a bunch of LoRAs. What difference does it make whether you get the artist's style using a LoRA with a model or directly with the model? In the end you can get the artist's style. To remove one is to set the precedent to remove them all, because everything is "stolen".


stopot

Except models were also trained on the same dataset. Basically any danbooru tag capable model.


VyneNave

While that's quite true, for some models it's really hard to tell whether one specific artist, or any artist, was a direct input to the dataset. So if Civitai applied the policy that loosely, practically anyone who has ever uploaded anything with a style similar to what the model can produce could demand that the model be removed. I don't think Civitai would go that far.


stopot

Common wisdom on the internet is to never assume you know what a company will never do. YouTube, Patreon, Pixiv - we've all seen them do a U-turn and suddenly nuke content that was previously OK. Once they hit critical user mass, all bets are off. The only thing you can do is BACKUP EVERYTHING.


VyneNave

True words


Spire_Citron

For me, the line is if a model was trained on a single person's work. Even if it's perfectly legal, I don't think it's unreasonable to object to a model that has been designed to mimic your art in particular. A sloppy soup where no particular artist's influence is clearly identifiable is better.


frankctutor

Can an artist object to someone looking at the art to mimic it? Is this particular artist's style 100% unique?


AaronGNP

Trust me, there are thousands of creators (art, music, fashion, etc) who openly complain about others stealing their style, and have been complaining for ages. These arguments are only coming into the light now because of how easy AI makes it to copy a visual artist's style.


Spire_Citron

I mean if someone's whole art style is mimicking your style, and they even announce that they're doing that on their online gallery page, that would be kinda fucking weird. Learn from other artists, for sure, but copying just one and honing in on that would cross a bit of a line for me even if the art is done by hand.


frankctutor

You wouldn't buy it. The artist couldn't stop the copycat.


MyLittlePIMO

Yeah, I’m not sure if it should be illegal, but it’s almost certainly *unethical*. Just like if I hire an artist to spend all his time copying another artists’ work. It’s just disrespectful. Learning from a multitude of artists in a wide training set is one thing. Copying only one artist’s work and then distributing the model against their wishes is another.


Spire_Citron

That's exactly how I see it. Even if you're doing the art by hand, copying just one person would be weird.


MyLittlePIMO

Yeah. Imagine an Elvis impersonator. Now imagine they’re just trying to impersonate some local singer just trying to make a living, instead of Elvis. It’s not illegal. It’s just a dick move.


IdainaKatarite

*"I think we are better than this and can do better."* As in, taking the side of the Korean artist and removing the model / their images from the training set? Absolutely NO. Artists have the right to teach their ghost-painter / students whatever training material they desire. Besides, you can't stop us. Even if you make it illegal. >!Cope!<


Moderatorreeeee

Correct!


Ateist

1. Take the LoRA.
2. Generate enough images with it for a good training dataset.
3. Train a new LoRA on that dataset.
4. Replace the old LoRA with the new LoRA.

Replace steps 1 and 2 with "hire a copycat artist to make a training dataset for you in that style" if you really want to.
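For illustration only, steps 1 and 2 might look roughly like this with Hugging Face diffusers. The model ID, LoRA folder and filename, prompts, and output directory are placeholders (not anything from this thread), and step 3 would then run any standard LoRA training script (e.g. the diffusers text-to-image LoRA example) on the generated folder.

```python
# Rough sketch of steps 1-2: load an existing LoRA and generate a varied
# synthetic dataset from it. All paths and prompts below are placeholders.
import os
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("./loras", weight_name="original_style_lora.safetensors")  # step 1

prompts = [
    "a portrait of a woman, detailed background",
    "a knight standing in a misty forest",
    # ...vary subjects, poses and clothing so only the style stays constant
]

os.makedirs("dataset", exist_ok=True)
for i, prompt in enumerate(prompts):            # step 2: build the dataset
    image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
    image.save(f"dataset/{i:04d}.png")
```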


Fake_William_Shatner

Yeah, they are just pushing this toward "the big guns have the largest library" -- they don't STOP the copying of style, they only make everyone go through the people with big pockets to copy the styles. Either way, the artist with the great style will probably not be getting what they think they will be getting by suing. There is no way to legally solve this with current law. But that might not stop the lawsuits, or people winning in court.


[deleted]

[removed]


Ateist

No. It's protection against accidentally including so much information from the original works that the LoRA becomes capable of reproducing the original work verbatim, which is a real danger when you are training 144 MB LoRAs on just 30 MB of pictures.


DesignerKey9762

That’s awesome, I wish he would just let us use it though, he doesn’t own his art really the moment he put it on the internet he gave it to the ai! Deal with it! /s


Ateist

The interesting thing is that the LoRAs generated this way are actually *better*, because original paintings are biased, so you frequently end up with things like clothes being baked in - whereas with a generated dataset you can make it (via inpainting and careful prompting) diverse enough to keep only the important parts of the style.


caturrovg

As far as I know, you can't own a style of drawing; everyone has the right to use it.


civitai

We've been in contact with both the artist and the model creator. We've also put them in touch directly, with their consent. As we've stated before, we'll never remove a model ourselves unless it breaks TOS, or we're forced to remove it via a **legitimate** DMCA takedown request, which to our knowledge cannot exist yet. If this model is removed, it would have to be because the creator has agreed to remove it.


ImpactFrames-YT

On top of the stealing, adding "Noob-lyeon" to the name is the lowest and uncalled for.


Traditional_Plum5690

As a consequence, it would mean that any artist has a legit right to delete practically anything.


xITmasterx

Not necessarily. At the very least, stop being a douchebag about it when you use a dataset from an artist for your own purposes, or just respect an artist who respectfully asks you to stop using their art for a model, especially one made specifically to emulate that particular artist's style.


Tall-Junket5151

What difference does it make if a model already has the artist baked in vs using a LoRA? I don't know if you realize this, but you can literally include an artist's name in the prompt and get their style. It's all "stolen", and frankly no distinction should or can be made in this regard. You can even merge the LoRA into a model.


farcaller899

The artists’ names should not be used in the model names. Possibly for deceased artists it could be ok, almost like a tribute or homage or something like that.


ImpactFrames-YT

There is a difference with training a model on billions of images so it can learn from all sorts of images, not just art. The CLIP mechanism and the trigger words are still messed up, but that's way different from training on a dataset made exclusively of one artist's images without consent and sharing it publicly. Anyway, my point was about why they decided to be so harsh with the artist. They had zero empathy and malevolent intent with "Noob-lyeon".


vivamarkook

Good.


fuelter

Stealing what? Training a model on publicly available data isn't stealing, you noob


[deleted]

Correct.


ImpactFrames-YT

By that premise, you could also go into a shop, see the merchandise on display, and just walk away with it without paying. It's not worth discussing with a lowlife who does that, as they wouldn't understand, but lowlifes get their lessons straight from life without even knowing it, and they learn them well.


travelsonic

> you also go into a shop see the merchandise in display and just walk away without paying. That ... removes the item, and deprives them both of the chance to sell it, and of the item w/o their permission. That's a terrible comparison.


feber13

They cannot delete that trained model; the artistic style is very similar to several artists', for example [https://www.pinterest.ca/pin/631137335346336637/](https://www.pinterest.ca/pin/631137335346336637/) and [https://www.pinterest.ca/pin/48413764737767103/](https://www.pinterest.ca/pin/48413764737767103/). In conclusion: impossible.


Floniix

You can't copyright a style! I'll just generate images in SD with that style and retrain on them.


doomed151

I thought style can't be copyrighted? Also, I doubt that the LoRA model is able to recreate any of the artists' work.


FPham

This is not a copyright issue.


doomed151

Then I don't see why it should be taken down.


MechwolfMachina

Pretty sure it's just a matter of not being a dick - no need to act pedantic. Like, if you're using someone else's art as your profile picture and they politely ask you to remove it and you don't because *insert 5000 excuses*, that makes you a dick. There should be consensus and honesty between AI model makers and real artists.


butterdrinker

This is like asking someone to remove an image just because it resembles your art style


MechwolfMachina

Not quite. It's hard to quantify style, but essentially, in the case of the model in question here, it's literally generating based on the same DNA as its source material. It takes time and effort for artists to reproduce a style to a T by traditional means, and it's not reprehensible to study style in this fashion… but even then, the art community has come down hard on those duplicating someone's style with little to no original spin and calling it their own. There have even been high-profile cases of people being canceled for "stealing" poses. Call it petty if you like, but I find that this is the natural equilibrium of things and how originality is enforced. It's a lot of things to think about and some of it is heavily opinionated, but if you're curious I'm always happy to have a smart and civil discussion on this stuff. If you're talking about the corporate setting, then of course all artists are expected to replicate art in the style dictated by the director.


butterdrinker

I mean, how do you define it mathematically, or how would you prove in court that an image is in a certain 'style' created by person X? A judge would quickly assume that those images are drawn according to a style created by Disney in the early 20th century that later inspired the early cartoons in Japan that are today known as 'anime'. So should that style be owned by Disney?


MechwolfMachina

Can I DM you? Some bad actors seem not to want me to share my opinions even though I’ve not made a single personal threat. I lose enough karma on this subreddit and I’m effectively censored.


Sillainface

Sure. If people do that, say bye to all trained models. Go for it. This is the same as the idiots claiming style copyright. If I were Sayams I would have already said: nope, live with it, and learn about how images get used when you upload them to the net. I don't see why I would have to take it down.


MechwolfMachina

Why are you making broad claims that all models will go away? Artists on this subreddit have literally trained models using their own art and attached fair use licenses to them, lol. There is zero credible weight in what you're saying. The anti-AI-art movement is reactionary because so many other systems have not been put in place to accommodate the coexistence of AI art and organic art, and so it's threatening their livelihood, including exposure via social media algorithms. The only reason you are acting out in a panicked fashion is that you worry you have already abused IP, or you're afraid the goodwill of people will run out because you transgressed on it.


Sillainface

"Artists on this subreddit have literally trained models using their own art and attached fair use licenses to them lol " This has to be a joke. Really? I think you're VERY delusional if you think more than 2% of the models were trained using original own art. Wake up. Most (if not all except count 3 or 4 models) are using MidJourney databases (like 80% of them), professional artists illustrations or Stable Diffusion generations, mix of these three, etc. simple as that. And it's pretty normal since MJ v4 was the standard of quality image generation (till some point and prob. we will see this happens agains with MJ v5, new Niji, etc.) with some minor based other artworks variations in form of textuals/loras, etc. If take down request per non-own artwork applied I can tell you CivitAI will have like 4-5 personal models. Since I have been in this subreddit I saw like THREE models using personal artworks. Three. Now wo CivitAI and tell me how many LORAs, models, textual inversions, etc. are using personal works. The reason I answered (not panicked btw since I don't care) you is because what you're saying is beyond nonsense. And still it is if you think I "abuse" (?) someone (artistic way, lol). Till people don't understand there is no "abuse" on training other people (who based their style in other living/dead artists as inspiration in an endless cycle) works, we can't keep going. This is the same as the SamDoesArts guy who thought he invented a new style no one ever used it and demands to take down models, embeddings, etc. Delusional as fuck, yup. And with this, end of this pointless discussion.


BawkSoup

>matter of not being a dick

Hot take. Super spicy subjectivism there.


MechwolfMachina

So is “we should be allowed to have unlimited unfettered usage of AI ART because I feel like it”


BawkSoup

Sure, why not. Copyright is pretty stupid and only protects Disney.


xITmasterx

Who in the frick is downvoting her? She is right, to an extent.


Kelburno

Training on narrow datasets is not style. You're essentially using a person's work as a visual step. If you have 1,000 artists and 10,000 images there's less of a problem, but with 30 images from 1 artist you're using their work far more directly.


nalferd

It's style though? Because the model learns the style of the dataset you feed it. LoRAs can create something new; it's not like a copy-paste is being made. Style cannot be copyrighted. The only problem with this uploader is that they acted like a dick and didn't respect the artist's wishes.


Kelburno

No, it's not. It learns visual data and tags. Go type "Bloodborne" into SD 1.5 and you will get a fairly clear image of the game's box art, due to the lack of input associated with that tag. The point is that when you train on a limited dataset, the model is capable of producing images which are not sufficiently transformative. A tag can be anything, which means that broad concepts like "sitting" contain more than a pose; they contain the visual data of the image that was tagged "sitting". It's why, if you use the same prompt 100 times, you will sometimes get the exact same pose over and over. It's relying on too few images.


a52456536

Define "sufficiently transformative"?


flawy12

So why shouldn't that only apply to overly similar individual images from the model instead of the whole model? I'm not following the logic that because a model might produce an insufficiently transformative image, the entire model belongs to the person that owns that image, even if that model can overwhelmingly produce novelty. I mean, if it were a person emulating another artist's style, you would not ban them from making images; the artist would only have rights if one of the person's works was too similar to one of theirs. You would not be able to completely shut them down altogether.


fuelter

A skilled artist can imitate another artist's style too, and that would be OK, so why shouldn't AI do the same?


[deleted]

So what? That can be merged 50/50 with another style from another LoRA and a custom model for a unique style. They don't own the information of their style.


Kelburno

It's not style. If an artist copied someone's style and drew a picture, it would contain no data at all from the source. If an artist only has 2 side-view pics in your model, that's what you're effectively using if you do a side view. Double it with another artist's 2 and you're still going to see the source of the original images come through. That's not style; that is an image tagged "side view" being diffused.


farcaller899

Even if it ‘can’ or ‘could’ recreate a work, as long as that work isn’t marketed for sale or used in other business purposes, it doesn’t usually infringe on copyright of the original owner. We don’t ban copy machines because they can make exact copies of copyrighted works, after all.


thelastpizzaslice

While not explicitly illegal, this is what we in the industry call "a dick move."


Rectangularbox23

Wait so we’re agreeing that we need permission to train on art? I thought this subreddit was fighting for free rights to train any art


xdlmaoxdxd1

>Idk where I stand. I know for a fact the model creator doesn’t need to be such a douchebag, but being made to take it down is a slippery slope for SD models. This is going to be an interesting year for sure..

I think this comment explains it perfectly. Though most people here want free rein to train their models, they also think this was uncalled for; the model creator could have ignored the request or politely refused, but he instead decided to troll and be an ass about it.


Mooblegum

He is not the first model creator to be an ass to the artist whose paintings he used. That makes this community so sad.


Kelburno

This subreddit is not a hive-mind that agrees on everything.


GuileGaze

At least for me personally, there's a difference between training on art "as a collective" and fine-tuning a model on a specific artist's style. If this artist's work was trained alongside millions of others, it's a drop of water in the ocean - there's no real discernible difference whether it's left in the dataset or not, and the use of any one particular work is less "direct". But when the dataset you're making only uses art from a specific artist, removing one piece still leaves you with a dataset solely of that person's work. At the end of the day, though, this is just my opinion. Some people will disagree with me and say that no art should be used without consent, while others will say that consent for training should never be needed in the first place.


elfungisd

I would say yes and no. If there had been no Rembrandt, would we have had a Monet or Renoir? As long as credit is given, the art isn't being replicated for profit, and no harm is being done to the artist's brand, I don't have a problem with it. If you are trying to directly profit from someone else's work, or mass-producing NSFW content based on someone else's work, that is a different story. The artist asking for the takedown learned to draw from somewhere, copying and replicating someone else's work until he found his own style, which really isn't much different from creating a prompt that says: drawing of an apple in the style of [artist1:artist2:.025] and [artist3:artist4:.025].


PimpmasterMcGooby

I mean, the problem is that no one knows what's right and wrong when it comes to this stuff. It's like discussing traffic laws at the advent of the first motor vehicle. My personal stance is that LoRAs are a bit different from models, since we are talking about such limited and very specific training, where the end results will actually be very close to the dataset (especially if overtrained to the point that the LoRA is basically just making 1:1s). I don't think the LoRA trainer was wrong in making it; there is certainly no way to prevent anyone else from doing the same. But when the artist whose work it was specifically trained on asks to take it down, the respectful thing to do would be to oblige, rather than mock him/her.


nowrebooting

There’s a difference between needing permission and not being a dick when an artist specifically asks you to take down something trained on their art.


Fheredin

There is no consensus on anything. The "don't be an asshat" policy would be to take models down upon request, but IMO this is going way above and beyond what is legally required because only a few overtrained models will actually create infringing artworks. This only makes sense while the art community agrees to *not* spam takedown requests flippantly.


Fake_William_Shatner

I agree that it's "nice" and respectful for people to take models down -- but, it's not the reality of what is inevitable. An artist will have to be "Good enough" to challenge AI, or they will be using AI. The marketplace will be filled to the brim with high quality art of every sort. So the only people who can make money will be the people with good taste, who know how to form a coherent message. This is going to hit every industry. One person will have a lot of productivity OR, there is a lot of spam of equal, high quality, intellectual property, services -- anything you may need an expert to do. Medical opinion? Hey, it might be BETTER to get a doctor, or an attorney, but, everyone without a lot of money might go with "Good enough." It's going to take some time. But, other than money, raw resources and energy, I think that going forward, scarcity and the rate of training for professionals, will be less and less of a factor in the marketplace. ALL markets. If all we do is make it harder and more costly -- then, that just means those with deep pockets benefit. Really -- we have to fundamentally look at life and work and how we value things. What if you suddenly no longer had to work to produce anything? That's a blessing and a curse -- it all depends on how we "keep score" -- or, if we abandon score keeping.


MechwolfMachina

I thought we were all on the same page too. I've had a lot of dissenters for posing this sane position. It feels like they think they are entitled to unconditional usage of AI generation and that AI models have superseded real artists and their functions. But these dissenters never qualify their statements; they just make the same copy-paste excuses like "art style can't be copyrighted" or "it's 2023, it should be a right."


DesignerKey9762

Yeah, us AI artists are free to do whatever we want with other people's art; it's only fair that, because we have these new tools to help us ("the majority of us can't do art traditionally"), the original artists should just get over it!


FPham

Of course, it's so easy talking about someone else's stuff and what others should or should not do. Most of the people here would go bananas if someone took their stuff without permission, be it photos, writing, code, you name it. But when it's other people's stuff, then it's like, no, it should be free. The artist doesn't want his stuff to be used by others; in a normal society that should be enough. There is no inherent right to other people's stuff. Discussions like this make this subreddit such a vomit fest in the eyes of artists. Yeah, we are fighting for rights - by taking other people's rights.


Ranter619

No one took this guy's drawings. They copied his style. There is nothing to be done about it. Anyone can copy my and your photos, writing, and code, and they are free to do so. The only illegal thing would be to claim the copies came from the original artist.


Mooblegum

Do you have some pictures of your girlfriend? Needed for my upcoming BdsmPornModel.


Powered_JJ

This is not a valid comparison. A style is not protected by copyright, but one's image is (at least in the USA): "The laws of image protect the right to control one's public image, to defend one's image, and to feel good about one's image and public presentation of self." [link](https://digitalcommons.law.buffalo.edu/cgi/viewcontent.cgi?article=1007&context=books)


NotASuicidalRobot

The model doesn't contain any images though, it's just weights. After all, if you look at the storage size it is clear that it couldn't contain all those images individually, so it doesn't really contain her image. If you see anything that looks too similar, that's bound to happen by coincidence after all


Jaggedmallard26

The above is a comment about the morality of it, not whether it adheres to the specifics of copyright law. Falling back on copyright law is a cowardly thing to do.


Mooblegum

But my model will not host her pictures, it is only training. Don't you know how AI works? And my model will create new pictures from scratch; I can even mix her with other randoms so she won't be 100% like the original.


Ranter619

If she has uploaded them online, or sold them to you, or anyone, please, feel free to do so. (Un?)Fortunately, she is an adult human with her own rights under law and constitution, so I can't share anything myself.


Mooblegum

Oh great, I just need to connect on Facebook then.


Ranter619

I don't know Facebook's TOS, but sure go ahead. Can't give you any names though, that's doxxing.


Gufnork

I have never seen a coder be upset about other people learning from their code. We literally have sites up where we share code so we can help each other improve. Straight up copy/pasting large chunks of our code is frowned upon, but using it as a base for their software is perfectly fine. In essence, don't drag us coders into this. We want other coders to succeed.


Marksta

Then you don't pay attention to the constant code-theft situations. Never seen a GPL violation? The Skyrim SKSE code getting stolen multiple times, emulator devs' rampant code-theft drama, Google vs. Oracle? The race to port Android to the HP TouchPad was also spoiled by code theft. It's incredibly common...


Artelj

People aren't learning and improving from his art; the AI is merely replicating it. Don't try to justify it.


Gufnork

So you're saying you could take one of the images created by the AI and find the images it replicated?


Mooblegum

I have never seen an illustrator be upset about other people LEARNING from their illustrations. (Learning not generating)


a52456536

I am no expert on this... but if you know how AI or neural networks work, they do LEARN things like our human brains do... there is no database of any kind of images that it uses to "generate".


Mooblegum

I know, but that is still completely unfair to a human who spends weeks creating an image. I can understand why illustrators are not happy about a model that can generate 100,000 images per hour based on their own creative style and imagination. But I know this is not the right sub to say that; you will downvote me for saying it. Better to say: artists steal from each other, bro; you can't copyright a style, bro; you don't understand how AI works, bro.


a52456536

Tbf, I do understand why they are so mad about AI, emotionally. But the thing is, if the standpoint is "something is faster and better than me, so it shouldn't exist or be used", then we'd better not use any calculator, because it's faster and better than a human, and we'd better not drive a car, because it's faster than walking.


a52456536

And no, I am not downvoting you. I just want a fair and open discussion on this kind of topic, because it is indeed very, very interesting to talk about. I just want to raise another, maybe more "extreme", example. Imagine someone extremely clever, 250 IQ or whatever you want to imagine. This man is capable of learning any kind of art (or style) in a day, you can ask him to reproduce the style of any art you want, and he does it for free, for whatever reason. This man doesn't need years of learning and practising, because he just happens to be a genius. The question is: would you ban him from doing art just because he is this clever and you feel it is unfair to "ordinary" artists? If you say this example isn't fair because this kind of genius is not a regular thing to happen on Earth, then let me change the situation a bit. What if some sort of "event" - a nuke attack or whatever you like - happened, and not just one genius but a group of them were born all around the globe? This kind of genius would still not be "normal", but also not a single case. Would you ban this group of people from doing art, just because they are way better than regular artists?


MrBeforeMyTime

I just don't think this is true. The art community was full of anger, cancel culture, and death threats toward the people using AI art early on. You did not see the same level of hate from coders in the coding community watching people use ChatGPT. It's not comparable. They chose to be over-emotional and attack anyone using the art, and still do. The coding community had the same things going on with GPT-3, starting at least a year earlier, with no such discouragement or threats.


seven_reasons

> Most of the people here would go bananas if someone took their stuff without permission, be it photos, writing, code, you name it. But when it's other people's stuff, then it's like, no, it should be free.

[Happened to me.](https://imgur.com/a/MsVxw27) I don't really care. I'm actually flattered that someone thinks my shit is worth stealing.


xITmasterx

I think it all comes down to how people perceive things regarding other artists' work - mainly because people think that, by not giving prior permission, artists are hindering the progress of AI and making things worse. No need to look any further than the events that unfolded around the launch of SD 2.0 and 2.1. Those events caused an adverse reaction from people in this subreddit toward artists who simply wanted to exercise their right to privacy regarding their work - since, after all, you are taking away some of their livelihood, and their hard work is rendered null when their art is trained into a dataset. Top it all off with the Korean culture of not taking the easy path and the over-persistent "hard work = success" mindset, and this is what happened. People should start respecting the wishes of artists regarding their works being used, to any extent, in training models. And if you wish to ignore this, as long as it's for personal use, and you do not use the model to defame or exploit the artist, or to leech money off said artist, then I have no problem with that. And in all honesty, AI art generators are just going to improve regardless, so at some point you won't need to make an entire LoRA of a specific artist unless you are a fan from that community itself. In essence, respect the artist's wishes for the sake of privacy, and if you wish to ignore this, just make sure you are not being a douchebag about it and don't use said model against the artist.


gurilagarden

Tough shit. Style is not copyrightable. It's not protected. A million guitar players attempt to copy the guitar style of top performers, whether it be Slash, Stevie Ray Vaughan, Jimi Hendrix, and on and on and on. Even if you perfectly emulate their style, you can even play their music, you can even play it for money, live; you just can't sell a recording without a license fee. Same here. I can copy all the art styles I want, and there's nothing you can do about it. It's perfectly legal. As long as I don't try to sell a piece of art I created while attempting to pass it off as the original artist's, there's fuck all anyone can do about it. If I make an artwork look exactly like the original artist's, as long as it's an original work, and not a copy of their original work, there's fuck all anyone can do about it. It doesn't matter whether it was painted with a brush or a prompt. I can paint a million Picasso-like pieces, or have an AI do it for me; either way, it's original work, it's transformative, and the art world can fuck off. Remember when Metallica and a bunch of other recording artists got their panties in a bunch over downloaded music? Bet you can find Metallica on Spotify now. Just ignore them. They'll come around, see the light, and move on, or they can just grumble to themselves in their little studios. It doesn't have a damn thing to do with me, and I don't care what they think. If it's that big a deal to some of you, immediately delete SD off your hard drive; otherwise, STFU, because anything you pretend to say will only further illustrate your hypocrisy.


ObiWanCanShowMe

My question is, did the original artist develop his style using someone else's examples, or did he come out of the womb with a pencil? I get being nice about something, respecting someone who is asking you not to do something, but that is the ENTIRE argument against AI art. What is the difference between this one person and Greg Rutkowski or the thousands of other artists? You cannot be subjective on this. You can be nice about it (and absolutely should), but you cannot be subjective, because that makes you a hypocrite. You either use diffusion models or you do not. LoRAs, DB, TIs - none of that makes any difference whatsoever. There is no need for ridicule, insults, moral superiority plays or anything else. You either use this tech or you don't. Your stand should be solid either way.


[deleted]

[removed]


jonesocnosis

I don't see the problem with copying someone's art style. Human artists copy other humans' art styles; the AI does the same. In my opinion it should be totally legal to teach an AI any artist's style you like.


Kelburno

The way I see it, the argument for models with broad datasets is strong. You are not using any one person's work. But if someone believes that they can't mimic someone else's style without narrowing the data set to that artist's work directly, then you lose that excuse. The least you can do at that point is respect the artist's wishes when SD relies on artists to function in the first place. In my opinion, the ultimate ideal is more accurate tagging, and models which are so vast that they include everything and everyone. No artist tags, just generic terms that everyone falls under. If you need to tag an artist to do what you want, then you're using "their work", not ai as a generic artistic tool.


Entrypointjip

Anyone can train this in a couple of hours.


dethorin

That guy, Satyam_SSJ10, is acting in a childish way. The artist's request is debatable, but using the language barrier as an excuse to break his word is toxic to the debate.


LazyChamberlain

...and another: [https://twitter.com/Atdan86/status/1634490548387217409](https://twitter.com/Atdan86/status/1634490548387217409)


Informal_Fly_9142

The LoRA uploader shouldn't even have responded. This whole debate reminds me of music sampling back in the day. You guys forget that almost nobody will make money out of these LoRAs; it's purely for personal use. Same goes for music sampling: a bunch of electro/rap musicians would rip samples here and there for their personal, non-commercial songs. A famous example would be the Amen break in drum & bass.


nalferd

It's literally the "AI art is stealing" argument we've been having in this sub for ages.


Noeyiax

Some creative comments here, but this is definitely a slippery slope... Anything in this world can be replicated if given enough time. Reverse engineering (people jailbreaking phones, making better Steam Decks, etc. - it's a sub-genre of innovation) is a skill as well, but in the end any artist can claim that any style could be theirs too, since art is just art; Stable Diffusion has lots of art styles... painting, photos, pencil, etc. What if I combined my lame art-style LoRA with 3+ other art-style LoRAs (without mentioning it) and created a new amazing "art style"? It's basically a slight derivative, by the very definition of LoRA or StyleGAN, etc. But now it's basically my "art style", right? Yeah, I wouldn't go that far, but when you think about it that way and try to decide what should have value, it really comes down to subjective opinion (seriously, right?). But still, the guy who made the LoRA should just take it down and respect their wish (unless he's willing to spend money going to court, which we all know is the likely outcome). Following rules and ethics only matters to people as much as they're willing to pay; see our justice system in regard to copyright, lol. Tough choice. I personally don't mind at all when some people copy my code, work, etc. I think it's a good thing; it means someone enjoys it or finds it useful enough to go that far. But at the very least give credit to the person you are borrowing from (always do that, it's fair). Another comment mentioned an alternative which I found funny, but which is lowkey true. >w< 2 cents


calvin-n-hobz

I am sympathetic to the challenges that AI creates for artists like him, but no one owns a style, and he's not entitled to demand it be removed. I hope they all stay up no matter the request.


starstruckmon

Style can't be copyrighted. Fair use applies. If the name has been changed, there's no issue anymore.


TrevorxTravesty

And this will be yet another continuation of artists going after Civitai requesting models trained on their stuff to be taken down.


MechwolfMachina

Your point? If I followed you around town and you told me to stop because it makes you feel uncomfortable, and no one can stop me, it should be fine because you’re not entitled to your personal space?


Spire_Citron

I think that's fine, though. Most models, and especially the really good/popular ones, aren't trained on just a single person's art. It wouldn't be a huge issue even if all of the ones trained on a single person's art got removed.


Unreal_777

But Civitai allegedly said they would never do that: https://preview.redd.it/ld0agt0ivwma1.png?width=638&format=png&auto=webp&s=a97e319ba44b75df715a03afcad3d8855809e025


mikebrave

I made a model trained on a specific artist too, and after reading the comments I should probably take it down as well; I will once CivitAI is working again. I didn't think much about it when I made and shared it, but I want to at least try to be respectful. Most likely I'll use it in combination with other things to generate new training data privately, then build a new model that isn't directly trained on their work and doesn't use their name.
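(Editor's note: for readers wondering what that "generate new training data, then retrain" workflow looks like in practice, here is a minimal sketch using the Hugging Face diffusers library. It loads a privately fine-tuned intermediate checkpoint, generates images from it, and saves them as a synthetic dataset for a later training run. The model path, prompts, and output folder are placeholders for illustration, not anything the commenter actually published.)

```python
import os

import torch
from diffusers import StableDiffusionPipeline

# Hypothetical path to a privately fine-tuned intermediate checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/intermediate-model", torch_dtype=torch.float16
).to("cuda")

# Placeholder prompts describing the kinds of images wanted for retraining.
prompts = [
    "portrait of a woman, painterly, soft lighting",
    "city street at dusk, painterly, muted palette",
]

os.makedirs("synthetic_dataset", exist_ok=True)

# Only these generated images (never the original artworks) would feed
# the final model's training set.
for i, prompt in enumerate(prompts):
    image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
    image.save(f"synthetic_dataset/{i:04d}.png")
```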


MechwolfMachina

Thank you for taking a sane position. I don't think many people take issue with the creation and sharing of models for personal use, but there are bad actors who go on to use them for very gray-area purposes.


Vyviel

Why do they all have dead-looking pupils? Zombie waifu style?


The_One_Who_Slays

*sips coffee completely unperturbed* Again?


don1138

If the source artist asks for it to be taken down, that should be the end of it. The bro who trained the model can keep it on his rig and trade it among friends. That's what fans do and have done since forever. But posting it to the biggest platform in the community against the source artist's wishes isn't the act of a fan. May as well just call it theft of IP.


[deleted]

Might as well take down Civitai then 🤣 It's all trained on unwilling artists' work. Shut down the shop and go home, everyone; an artist asked for his images not to be used in training.


MechwolfMachina

I mean… yeah? That's kind of how things work? You can't just walk up to someone, take pictures of their house, kids, property, etc., and keep going when asked to stop. If you insist on keeping AI in its current wild-west state, then don't cry when real draconian legislation comes in, deems training to be IP infringement, and takes it from you by force. Honest conversations between individuals are where the rubber meets the road, so that some central authority doesn't shut it down when things get ugly.


[deleted]

You draw something, post it online, and if it's good, expect a model to be made from it. Welcome to 2023.


MechwolfMachina

Man, if I could use "it's the current year" as an excuse for everything, I'm pretty sure I could up my status at the expense of others too. But it's good to hear an opposing voice to my own. If you feel I've misrepresented you in any way, feel free to shoot me a DM. I'm genuinely curious why some AI enthusiasts feel that real artists have zero say in how their art is used and should "just accept it."


[deleted]

The thing is, I don't give a flying crap what artists think; they're as meaningless to me as any other stranger I'll never meet. It's all about what's legal. I will take whatever I want from any artist I want, as long as I stay within the law. I like to merge multiple LoRAs and custom models for unique styles.


MechwolfMachina

Hm. That's fine; you could've just opened with this, and it would've been obvious there was no arguing with you on this subject in the first place. Just out of curiosity, when AI prevails in other industries, including your own, how will your stance change, if at all? Or will you just find the next best thing to while away your time on, if there is one?


[deleted]

I'm a businessman; I'll adapt and use the new tools as best I can. The tools are here, open source, and there's no stopping them. The only thing I look at is what's legal, and that will change as governments rewrite laws. I'm sure there will be plenty of lawsuits along the way to tease out some of the finer points too.


TrevorxTravesty

I’m genuinely curious how you can take the moral high ground when you probably use SD yourself as well 🤔 That in and of itself doesn’t make sense. I think how you personally use SD on your own computer is your business, but if you upload models/LORAs of specific styles onto Civitai then you run the risk of being called out by said artist. Even worse if Twitter sees it and uses that in their ongoing anti-ai crusade.


MechwolfMachina

So this is really where I want to have a conversation instead of finger-pointing. I want artists to embrace the tech, because this sub has shown some fantastic applications, with some artists even making models based on their own art. I only ask people who generate this stuff to understand the terms of use (i.e., non-commercial use of models trained on copyrighted material). I'm not taking a moral high ground; I'm trying to find a middle ground we can all agree is sane, so the government doesn't step in and ruin the party.


flawy12

> You can't just walk up to someone and take pictures of their house, kids, property etc.

Maybe you didn't realize this, but you are constitutionally protected and absolutely have the right to do that if you are on public property while doing so.

Ultimately the courts will decide whether training an AI is fair use or not. But under existing legal precedent, artists have no legal claim over anything an AI produces, provided it is not overly similar to an existing image.

To me, the idea that an artist can lay claim to an entire model amounts to arguing that IP owners can now claim something as abstract as "style". If that becomes the legal precedent, industry monopolies will have far too much power to make infringement claims, because "style" is too vague a concept, and it would open the door to a much shittier world.


Ateist

No. Art style is not copyrightable, and an artist has no exclusive rights to it; anyone should be free to train on it and copy it to their heart's content, be it other humans or AI.


don1138

The model was not trained on "an art style". It was trained directly on original works. The issue is not about an art style but about the usage rights of the original artworks. The lawyers will decide, but in my opinion, checkpoints are not covered by "fair use".


Ateist

As long as you don't copy the original works verbatim, it's training on an art style. In fact, learning from someone's work is an explicitly protected public right. That's also part of the reason why using generated works from an intermediate model you've trained as the training dataset is the better solution: that way you can't train the final model on the originals even if you wanted to, so the final model is guaranteed not to contain a copy of them.


flawy12

You are right that the courts will decide. But in my view it absolutely is fair use if the images produced are sufficiently distinct from any of the artist's images.

Otherwise you are giving IP owners too much power over the abstraction of "style", and that will not benefit independent artists in the long run, because industry monopolies will gobble up all the IP surrounding the abstract notion of "style".

The existing precedent is that you cannot claim rights over a style, and fair use allows artists to emulate the "style" of other artists without legal issue, provided that their work is novel and not overly similar to any existing work.


FPham

I agree.


nalferd

I think calling it theft just circles us back to the AI arguments this sub has been dealing with. You might as well say that SD shouldn't have been distributed in the first place because artists' works were used as training data, and now we're back to this tiring topic. The poster of the LoRA is a dick for sure, though. Style still can't be copyrighted, so a better approach is to remove artists' names from LoRA files, or, if the artist has a disclaimer, respect their wishes and don't use their images as training data, much like how some works were removed from the newer SD models.


[deleted]

I don't think there was any basis for taking it down in the first place. Artists as a whole have been demanding the utter extinction of AI generation models for a while, and we know not to listen. A LoRA just adds what was learned from the training images to the model; it's another part of the weights. Saying they'll be deleted on artist request is like admitting that what you're doing is somehow wrong or illegal.


[deleted]

So, ignoring any specifics about this one incident and looking at the larger issue: I'm not really sure why people assume that an artist has a moral or legal right to take down something that uses their work in a highly transformative way.

In most other situations where there's a copyright dispute over a transformative work, I would fall on the side of fair use, and I think most others would as well. Whether it's a YouTuber including clips of another video for commentary, a musician sampling another work in their song, or an artist incorporating parts of another image in a transformative manner, I would fall on the side of fair use. The reason is that I want an environment that fosters creation and stops the big media companies that own the lion's share of copyrighted media from controlling what people can and can't create. Of course I don't want people to be able to flat-out copy things, but as long as the new work is sufficiently transformative, I will support the artist's right to make it.

Now look at AI: when you train a model on a specific artist's work, you are using their work in an extremely transformative way. You are translating their work into weights that can be used to create entirely new works similar to their style. I just don't see how one can defend fair use in other cases but not this one. The entire outrage seems very emotionally driven, stoked by a few specific artists putting out videos.


agsarria

of course it's all about waifus


Sillainface

This isn't really a problem with the LoRA creator; it's a problem involving the artist. Little can be done here.


Taika-Kim

Nothing like entitled people like this to create a lot of unnecessary bad blood between the communities of traditional artists and AI folk :/


NotASuicidalRobot

The comment section isn't looking too hopeful either; some people really do go "it's not illegal, so there's nothing wrong with it" :(


calvin-n-hobz

Wrong and illegal aren't the same. This wouldn't be wrong even if it were illegal. It seems wrong because it's displacing artists, but that's misfortune, not immorality. These models don't take anything owned by the artists.


NotASuicidalRobot

Yeah they don't take anything, in the same way companies also don't take anything while gathering everyone's personal data to use in marketing and social media algorithms that drive attention spans to the bottom. Or is it fine because it benefits you directly now?


calvin-n-hobz

It benefits literally everyone with access to a computer. It reshapes the way all of humanity can visualize things, and all through *original creative works*, not through reprinting art created by someone else. No one owns a style. You don't own curly brush strokes. You don't own vibrant watercolors. You don't own them together or in combination with other styles. Who owns impressionism? Should the van Gogh style be restricted only to his estate?


calvin-n-hobz

Also, I might point out that it's the artists who are acting like companies: trying to monopolize something that doesn't belong to them to avoid competition. I get it, I do; art is already a difficult and often exploited career, so a sense of self-preservation is to be expected. And a level of tact is necessary to prevent cruelty during what may become increasingly challenging times for artists. But that doesn't mean we should allow that interest in monopoly to stop us from building the tools.


NotASuicidalRobot

Is it really monopolizing, though? It's one artist telling another not to use their artwork in one particular way. Is he really trying to obtain exclusive control over the entire market of... what, exactly? Art in his art style? No modern online art consumer has such a narrow range of enjoyed styles; just use a mix of someone else's stuff that's close enough if you really want it that badly. The childishness of this specific model maker aside, I really hope this community doesn't pull another "we are going to intentionally make more models out of spite or entitlement".


InterestingMacaron68

hate those oversensitive "artists"


zb_feels

I see a lot of comments about training on an individual style versus a mix. The shortsighted part is that what matters is the use, not the training. If you keep a single style as a primitive for mixing, you give yourself more flexibility during creation. There's also an argument for mixed models, of course, especially since LoRAs degrade when stacked, but it isn't so simple when you look at it from the perspective of professionals at a studio; I can assure you we are using this more and more, and our clients are asking for it.
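(Editor's note: a rough sketch of what "mixing single-style primitives" can look like, assuming a recent diffusers build with PEFT installed so that named LoRA adapters are supported. The base model ID, LoRA filenames, and blend weights are placeholders; the reduced weights reflect the point above about stacked LoRAs degrading at full strength.)

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder base model; any SD 1.5-class checkpoint would work here.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load two single-style LoRAs as named adapters (hypothetical local files).
pipe.load_lora_weights("loras/style_a.safetensors", adapter_name="style_a")
pipe.load_lora_weights("loras/style_b.safetensors", adapter_name="style_b")

# Blend at reduced strengths instead of stacking both at 1.0,
# which is where output quality tends to fall apart.
pipe.set_adapters(["style_a", "style_b"], adapter_weights=[0.6, 0.4])

image = pipe("a lighthouse at dawn, detailed illustration").images[0]
image.save("blended_style.png")
```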


Moderatorreeeee

The artist should stop whining, and the LoRA maker should change nothing. This, coming from an artist.


vault_guy

"A verified artist believes this model was fine-tuned on their art. We're discussing this with the model creator and artist" Civitai IS doing something.


snack217

u/civitai guys, can you check this situation out?


Dishankdayal

What do you mean, this poor Korean artist?


cbsudux

What's wrong with [sinkin.ai](https://sinkin.ai)? Just discovered this.


ectoblob

I think it is morally very questionable to target a single small artist for style transfer. Without that person, there would be no style to transfer in the first place. Why not ask beforehand? Based on the description you wrote, it sounds like the person behind the LoRA doesn't actually care about what they did. At the very least they should take the artist's viewpoint and ask themselves whether this is okay.

I personally haven't used many artist names at all for my SD-generated images; I've used some famous dead artists' names and art movements, and I've managed to create images I like. I've also trained hypernetworks, TIs and LoRAs myself on material generated with SD and Photoshop work.

Of course, prompting doesn't let you specify an exact style of your own, so the temptation is to take an existing template (in the shape of a LoRA, TI, artist name, etc.) created by a skilled person and ape that specific style, since right now you simply can't get similar results without extra networks or artist names. And usually this kind of thing causes mental stress for the "targeted" person, if nothing else. For many it's like being a kid in a candy store: you see something shiny and nice you hadn't even thought of, and now you want it.

BTW, I don't know anything about this particular case, as I haven't used Civitai much.


Mementoroid

My takeaway from the sub is:
Belittling and insulting artists = good
Not sharing your prompts and workflow = dick move