Teotz t1_izjzdve wrote
Reply to comment by LetterRip in [P] Using LoRA to efficiently fine-tune diffusion models. Output model less than 4MB, two times faster to train, with better performance. (Again, with Stable Diffusion) by cloneofsimo
Don't leave us hanging!!! :)
How did the training go with a person?
LetterRip t1_izksf4k wrote
It is working, but I need to use prior-preservation loss; otherwise the concept bleeds into every word of the prompt. I'm generating class photos for the preservation loss now.
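For context, prior-preservation loss (as used in DreamBooth-style fine-tuning) adds a second reconstruction term on generated "class" images so the concept stays tied to the new token instead of bleeding into common words. A minimal sketch of the idea (function and argument names are hypothetical, and real training would use tensor MSE on predicted noise):

```python
def prior_preservation_loss(instance_pred, instance_target,
                            prior_pred, prior_target, prior_weight=1.0):
    """Instance reconstruction loss plus a weighted prior term.

    The prior term is computed on class images generated by the
    unmodified model, discouraging the fine-tune from drifting on
    the generic class concept. (Sketch only; names are illustrative.)
    """
    def mse(a, b):
        # mean squared error over flat lists of floats
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

    return mse(instance_pred, instance_target) + prior_weight * mse(prior_pred, prior_target)
```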
LetterRip t1_izm8rkq wrote
It did work, but now I can no longer launch LoRA training at 768 or even 512 resolution (CUDA VRAM exceeded), only at 256. No idea what changed.
JanssonsFrestelse t1_j0l89ve wrote
Same here with 8 GB of VRAM, although it looks like I can't use mixed_precision=fp16 with my RTX 2070, so that might be why.
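As an aside, fp16 tensor cores first shipped with Volta (compute capability 7.0), and in PyTorch the device capability can be read with `torch.cuda.get_device_capability()`. A minimal sketch of that rule of thumb (the helper name is hypothetical):

```python
def fp16_autocast_supported(compute_capability):
    """Rough heuristic: fp16 tensor cores exist from Volta (7.0) onward.

    The RTX 2070 is Turing, compute capability (7, 5), so it falls on
    the supported side of this check. (Sketch only; actual mixed-precision
    failures can also stem from driver or library configuration.)
    """
    return tuple(compute_capability) >= (7, 0)
```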