LetterRip t1_izdm55i wrote
Reply to comment by cloneofsimo in [P] Using LoRA to efficiently fine-tune diffusion models. Output model less than 4MB, two times faster to train, with better performance. (Again, with Stable Diffusion) by cloneofsimo
> Glad it worked for you with such small memory constraints!
Currently training at image size 768 with accumulation steps = 2.
If steps is set to 2000, will it run to 4000? It didn't stop at 2000 as expected and is currently over 3500; I figured I'd wait until it passed 4000 before killing it, in case the accumulation steps act as a multiplier. (It ran to 3718 and quit, right after I wrote the above.)
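For anyone confused by the overshoot: here is a minimal sketch (my own illustration, not the actual LoRA training script) of why a progress counter can run past the configured step count under gradient accumulation. If the displayed counter tracks micro-batches (forward/backward passes) while `max_steps` is meant to count optimizer updates, the counter climbs to `max_steps * accumulation_steps`:

```python
# Hypothetical loop illustrating the counter mismatch; the real script's
# internals may differ.
accumulation_steps = 2
max_steps = 2000  # intended number of optimizer updates

micro_batches_seen = 0   # what a naive progress bar might display
optimizer_updates = 0    # what max_steps is supposed to limit

while optimizer_updates < max_steps:
    micro_batches_seen += 1                      # one forward/backward pass
    if micro_batches_seen % accumulation_steps == 0:
        optimizer_updates += 1                   # weights update here

print(micro_batches_seen, optimizer_updates)     # 4000 2000
```

Under that assumption, a run configured for 2000 steps with accumulation 2 would show roughly 4000 on the counter, which matches the behavior described above.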
Teotz t1_izjzdve wrote
Don't leave us hanging!!! :)
How did the training go with a person?
LetterRip t1_izksf4k wrote
It is working, but I need to use prior preservation loss; otherwise the concept bleeds into all of the other words in the phrase. So I'm generating photos for the preservation loss now.
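For context, prior preservation (as in DreamBooth-style fine-tuning) trains on a mix of subject images and generated "class" images, with the class half of the loss anchoring the base concept so it doesn't drift toward the new subject. A hedged sketch of the combined objective (names and the default weight are my own, not from the thread):

```python
# Illustrative only: the instance loss fits the new subject, while the
# prior loss (computed on images the base model generated for the plain
# class prompt, e.g. "photo of a person") penalizes drift of the class.
def combined_loss(instance_loss: float, prior_loss: float,
                  prior_loss_weight: float = 1.0) -> float:
    return instance_loss + prior_loss_weight * prior_loss

total = combined_loss(instance_loss=0.12, prior_loss=0.08)
```

This is why class photos have to be generated up front: they serve as the regularization set for the prior term.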
LetterRip t1_izm8rkq wrote
It did work. Now, though, I can no longer launch LoRA training even at 768 or 512 (CUDA VRAM exceeded), only at 256; no idea what changed.
JanssonsFrestelse t1_j0l89ve wrote
Same here with 8GB VRAM, although it looks like I can't use mixed_precision=fp16 with my RTX 2070, so that might be why.
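A rough back-of-envelope sketch (my own numbers, not from the thread) of why losing fp16 hurts on an 8 GB card: halving bytes per parameter roughly halves the memory needed for weights and activations.

```python
# Parameter count is an approximation for the Stable Diffusion v1 UNet
# (~860M params); activations and optimizer state add more on top.
def weight_memory_gb(num_params: int, bytes_per_param: int) -> float:
    return num_params * bytes_per_param / 1024**3

sd_unet_params = 860_000_000
fp32_gb = weight_memory_gb(sd_unet_params, 4)  # ~3.2 GB
fp16_gb = weight_memory_gb(sd_unet_params, 2)  # ~1.6 GB
```

So falling back to full precision can plausibly push a borderline 8 GB setup over the limit even at smaller image sizes.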