Submitted by hardmaru t3_z36n5j in MachineLearning
hadaev t1_ixm0qnw wrote
I like how the community overreacts because some prompts have reduced quality (probably due to the new text encoder) and accuses them of censorship.
my-sunrise t1_ixmlxfq wrote
They’ve specifically said, multiple times here on Reddit, that they’re censoring the model. Not sure why you’d assume they wouldn’t, considering the legal issues they’re facing.
hadaev t1_ixn5o10 wrote
"accuse of censorship" was about worst artists styles prompts.
And gived how some artists whined about model, some peoples on stable diffusion subbredit started conspiracy about due "legal issues they’re facing" they removed (censored) some artists from data and gave us lobotomized model.
Which probably doesnt happened to my opinion, gived they said they changed text encoder.
sam__izdat t1_ixneldl wrote
> conspiracy theory that, due to the "legal issues they're facing"
No, they might be a bunch of mewling toddlers, but that's not a conspiracy theory. There was a lot of corporate and legislative pressure to remove objectionable content, so it appears they mostly removed human anatomy, weapons, certain contemporary artists, celebrity faces, etc. The problem with that, I expect, is that LAION's dataset is already just awful -- and you're cutting into some of the better data you have available.
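(For readers: a minimal sketch of what metadata-level filtering of a LAION-style dataset could look like. The `punsafe` column name, the parquet filename, and the 0.1 cutoff are assumptions about how this kind of filtering is commonly done, not a description of Stability's actual pipeline.)

```python
import pandas as pd

# Hypothetical LAION-style metadata: one row per image-text pair, with the
# alt-text caption and a classifier's predicted probability that the image
# is unsafe.
meta = pd.read_parquet("laion_subset_metadata.parquet")  # assumed filename

# Drop everything the NSFW classifier flags above a threshold. A strict
# cutoff also throws away borderline-but-fine images, which is one way
# "better data" gets cut along with the objectionable content.
PUNSAFE_THRESHOLD = 0.1  # assumption, not a confirmed training setting
kept = meta[meta["punsafe"] < PUNSAFE_THRESHOLD]

print(f"kept {len(kept)} of {len(meta)} image-text pairs")
```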
hadaev t1_ixnh0ro wrote
>so it appears they mostly removed human anatomy, weapons, certain contemporary artists, celebrity faces, etc.
Ah, appears.
How many data samples did you test to reach this conclusion?
sam__izdat t1_ixnhbrh wrote
I'm just going by what I've seen people try to produce and say, so far. I haven't done any extensive testing, partly because I'm using an ancient Tesla GPU and they broke FP32.
4name25 t1_ixnikga wrote
I run SD with R5 m330 :o
sam__izdat t1_ixnio4j wrote
Solidarity.
hadaev t1_ixnrhn4 wrote
Colab.
But yeah, usually such big models are tested at a much larger scale.
Some cherry-picked comparisons with tens of samples show nothing.
Flag_Red t1_ixmnz5l wrote
The model is censored for NSFW content; they explain that clearly in the model cards on Huggingface.
Emad also confirmed a couple of hours ago on Discord that although most artists' styles weren't explicitly removed from the training set, they were never in the training set in the first place. The only reason v1 understood "Greg Rutkowski", etc. is that those names were included in CLIP's training set, which was trained by OpenAI. Finer control of what the model does and doesn't understand is the main reason they switched to a new text encoder.
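(A minimal sketch, not from the thread, of pulling just the text encoders out of the public v1 and v2 checkpoints and embedding the same artist prompt with each. The repo names are assumptions about which Hugging Face checkpoints you'd use, and the shapes in the last comment are only indicative; whether an artist token carries any meaning depends entirely on what each encoder saw during its own training.)

```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer

# v1.x conditions on OpenAI's CLIP text encoder; v2.x ships a different,
# OpenCLIP-trained encoder converted into the same transformers format.
V1_REPO = "runwayml/stable-diffusion-v1-5"   # assumed v1 checkpoint
V2_REPO = "stabilityai/stable-diffusion-2"   # assumed v2 checkpoint

def embed(repo: str, prompt: str) -> torch.Tensor:
    tokenizer = CLIPTokenizer.from_pretrained(repo, subfolder="tokenizer")
    encoder = CLIPTextModel.from_pretrained(repo, subfolder="text_encoder")
    tokens = tokenizer(prompt, padding="max_length", truncation=True,
                       return_tensors="pt")
    with torch.no_grad():
        # Per-token embeddings the diffusion UNet is conditioned on.
        return encoder(**tokens).last_hidden_state

prompt = "a castle by Greg Rutkowski"
v1_emb = embed(V1_REPO, prompt)
v2_emb = embed(V2_REPO, prompt)
print(v1_emb.shape, v2_emb.shape)  # roughly (1, 77, 768) vs (1, 77, 1024)
```

The point of the comparison is only that the two encoders are separately trained models: if the LAION-trained encoder never saw an artist's name paired with their work, the prompt can't recover that association regardless of what was or wasn't cut from the diffusion training set.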
hadaev t1_ixn4rus wrote
>The model is censored for NSFW content
I mean things not related to porn, like the Greg Rutkowski prompt.
>is that those names were included in CLIP's training set
Basically what I said.