Submitted by Dr_Singularity t3_z37m9h in singularity
Comments
RikerT_USS_Lolipop t1_ixlpbgb wrote
> half
You are wildly underestimating that figure.
[deleted] t1_ixlk9my wrote
[deleted]
blueSGL t1_ixm6fbk wrote
> Kinda worried SD will regress into something that will need dedicated tweaked models for everything.
Honestly, I'd far prefer they avoid any legal issues and deliver solid bases for fine-tunes (the initial training is the really expensive bit).
The community surrounding SD is a resourceful bunch and being able to train forward from a high quality (but censored) base is better than from a low quality (but uncensored) base.
Just look at all the work that's being done with LLMs where a curated dataset gives better results than a large uncurated one.
rixtil41 t1_ixmsjdz wrote
As long as this doesn't have a real impact or set the quality of results back by years, then I'm ok.
elvenrunelord t1_ixkrihc wrote
Looks really impressive. What I don't understand at first glance is how to set up a local instance of this software, or whether it will even run on a PC.
Masark t1_ixkxzp6 wrote
- Yes, it runs on standard PCs. You'll need at least 8GB of RAM. Preferably, you want a recent GPU with at least 4GB of VRAM, but it can technically run (very slowly) on CPU.
- Stable-diffusion-ui is about the simplest way to get it and provides a nice browser-based GUI. Not sure if it's running with this new 2.0 release yet, but if it isn't, it should be available soon.
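For anyone who'd rather skip the GUI, here's a minimal sketch of running the 2.0 model locally with Hugging Face's diffusers library (this assumes the stabilityai/stable-diffusion-2 checkpoint and a CUDA GPU; `pip install diffusers transformers accelerate` first):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the v2 checkpoint in half precision, which roughly halves VRAM use.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Generate a single image from a text prompt.
image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```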
fastinguy11 t1_ixl1vzt wrote
Apparently you now need 11 GB for the v2 model.
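If the model doesn't fit in VRAM, diffusers has a couple of memory-saving switches worth trying; a sketch under the same assumptions as the snippet above (exact savings vary by card):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2",
    torch_dtype=torch.float16,
)
# Compute attention in slices: slower, but much lighter on VRAM.
pipe.enable_attention_slicing()
# Keep weights in system RAM and stream them to the GPU on demand
# (requires accelerate; note there's no pipe.to("cuda") in this mode).
pipe.enable_sequential_cpu_offload()

image = pipe("a photo of an astronaut riding a horse").images[0]
```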
[deleted] t1_ixlj7yk wrote
[deleted]
kasiotuo t1_ixlc7c4 wrote
Oh no, I can't run it anymore then, even though I have a 3070... soldered RAM, yay.
Bluestripedshirt t1_ixl3vkx wrote
Yup. Not working with 8 GB on my MacBook.
TheRidgeAndTheLadder t1_ixl3m77 wrote
I'll be back this time tomorrow to turn it into a Docker container, if that would be useful for anyone.
elvenrunelord t1_ixlbwlr wrote
Thanks. I got a rig that can run that then. :)
[deleted] t1_ixl1hpr wrote
Aitrepeneur on YouTube has great tutorials on things surrounding it.
Akimbo333 t1_ixl5btn wrote
What's the difference between this and the others?
-ZeroRelevance- t1_ixlibbg wrote
Basically just bigger and better than the previous ones, afaik. The only really notable change I saw was the new depth-guided model, which keeps the scene's geometry for more consistent variations.
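That depth-guided variant ships as its own checkpoint; a minimal sketch with diffusers, assuming the stabilityai/stable-diffusion-2-depth repo and an input photo of your own:

```python
import torch
from diffusers import StableDiffusionDepth2ImgPipeline
from PIL import Image

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth",
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("room.png")  # hypothetical input image

# A depth map is inferred from the input, so the variation keeps the
# original scene layout while the prompt restyles it.
image = pipe(
    prompt="a cozy cabin interior, warm lighting",
    image=init_image,
    strength=0.7,  # how far to move away from the original image
).images[0]
image.save("restyled.png")
```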
Akimbo333 t1_ixm00kh wrote
Oh ok. Are the hands and faces better?
-ZeroRelevance- t1_ixm0kfw wrote
Faces look better, hands still look pretty bad though. There’s some sample images in the linked post if you want to have a look, and there should be some on r/stablediffusion now too.
Akimbo333 t1_ixm1wjn wrote
Thanks! But I also heard that the model is regressive as fuck, because it filtered out nudity, celebrities, and artist styles.
-ZeroRelevance- t1_ixm2x7d wrote
The nudity stuff doesn't really matter, since it will definitely be recreated with custom models anyway, but I didn't realise they'd removed the celebrities and artists. That will definitely be a big blow to the model, since celebrities are in a lot of the prompts people try first, and artists are a great way to guide images into certain styles. Hopefully the community can work around those restrictions, but you're right that it's pretty limiting.
Also, if they removed a bunch of artists from the dataset, that means removing a massive amount of high-quality training data, which likely has significantly reduced the potential of the model. Looks like a bad move from every side but a PR one.
Akimbo333 t1_ixmannh wrote
Oh yeah I agree!!!
Chemical_Cobbler438 t1_ixkiak7 wrote
can this even draw fingers?
NTIASAAHMLGTTUD t1_ixkql66 wrote
Can you?
Rumianti6 t1_ixkt5gs wrote
I can. Even SD2 is still pretty subpar.
Strange_Vagrant t1_ixl47ye wrote
A flippant response to a flippant response to a flippant OP. I get what you're all saying (AI art can be rough around the edges), but hand drawing isn't what's critical here, which makes both comments disposable.
You may be a talented artist, but your craft will fundamentally change over the next year. Concerns about details (such as initial hand drawing) will butt up against the reality of customer expectations. Many paying customers don't really care about the nuances you learnt through your education and experience.
They want a cheap, quick, and good render of their idea. The classic quality/cost/time triangle is collapsing into the single dimension of quality, and the gap between what an experienced, trained expert can do in weeks and what a couple of minutes mucking about with a prompt and sliders can do is closing fast.
Baron_Samedi_ t1_ixoekz6 wrote
Yes, lots of us can draw hands. It just takes a little practice.
Art students can learn passable hands within a semester.
Honestly, if you already know how to create digital art, there are so many existing resources for bashing together exactly what you want quickly and efficiently that the hype suggesting SD is going to eat everything is just boring nonsense.
Art AIs are impressive, but they are still quite limited in what they are genuinely useful for.
blueSGL t1_ixkwzwv wrote
Need to wait for someone to make a 'negative prompt' text embedding for v2.
So: a token whose vector points towards the region of latent space where fucked-up fingers live, which you use as a negative prompt to push your desired prompt vector further away from that point. (I don't know about anyone else, but trying to conceptualize higher-dimensional spaces is really troublesome.)
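Plain-text negative prompts already work this way in diffusers; a minimal sketch, assuming the stabilityai/stable-diffusion-2 checkpoint (the embedding described above would pack the same idea into a single trained token):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2",
    torch_dtype=torch.float16,
).to("cuda")

# The negative prompt is encoded into its own embedding, and each denoising
# step is steered away from it via classifier-free guidance.
image = pipe(
    "portrait photo of a woman, hands visible",
    negative_prompt="deformed hands, extra fingers, fused fingers",  # hypothetical phrasing
).images[0]
```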
Black_RL t1_ixm1tww wrote
I wonder how it does hands and flags now.
Go science/tech!
policemenconnoisseur t1_ixlr41m wrote
I'm absolutely blown away by the quality of those images.
Kinexity t1_ixloqf6 wrote
They even filtered out NSFW content. NSFW is why like half of the users picked up SD v1 in the first place.