
Star-Bandit t1_ix6l9wf wrote

You might also check out some old server stuff. I have a Dell R720 running two Tesla K80s, each of which is essentially the equivalent of two 1080s. While it may not be the latest and greatest, the server ran me $300 and the two cards ran me $160 on eBay.

3

C0demunkee t1_ix85rbq wrote

I did this with an M40 24GB. Super cheap, no video out, lots of CUDA cores, and it does all the ML/AI stuff I want it to do.

2

Star-Bandit t1_ix9toom wrote

Interesting, so I'll have to look into the specs of the M40. Have you had any issues with running out of VRAM? All my models seem to gobble it up, though I've done almost no optimization since I've only recently gotten into ML stuff.

2

C0demunkee t1_ixd9fdq wrote

Yeah, you can easily use it all up from both image scale and batch size. Also, some models are heavy enough that they don't leave much VRAM for the actual generation.

Try "pruned" models, they are smaller.

Since the training sets are all 512x512 images, it makes the most sense to generate at that resolution and then upscale.

1
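A rough back-of-envelope sketch of the scaling the last comment describes: VRAM use grows linearly with batch size and quadratically with image resolution. The numbers below are my own illustration (assuming Stable Diffusion's 8x VAE downsampling and fp16 latents), not from the thread:

```python
# Sketch: how batch size and resolution multiply diffusion memory.
# Assumes Stable Diffusion-style latents: 4 channels, 1/8 spatial
# resolution, fp16 (2 bytes per element). Illustrative only.

def latent_vram_mb(width, height, batch_size, channels=4, bytes_per_elem=2):
    """Memory (MB) for one batch of latents at 1/8 spatial resolution."""
    elems = batch_size * channels * (width // 8) * (height // 8)
    return elems * bytes_per_elem / (1024 ** 2)

# The latents themselves are tiny; UNet activations scale the same
# way and dominate, so doubling resolution roughly 4x's working memory.
print(latent_vram_mb(512, 512, batch_size=1))    # → 0.03125
print(latent_vram_mb(1024, 1024, batch_size=1))  # → 0.125 (4x)
print(latent_vram_mb(512, 512, batch_size=8))    # → 0.25  (8x)
```

This is why generating at 512x512 and upscaling afterwards is so much cheaper than generating at the target resolution directly.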