Submitted by Nerveregenerator t3_z0msvy in deeplearning
OK, so I'm considering upgrading my deep learning PC. I'm currently using a 1080 Ti. From my perspective, it's still a relatively solid card, and it can be picked up on eBay for 200 bucks. So my question is: would I be better off using four 1080 Tis or one 3090? These should be reasonably similar in price. Also, I'm aware I'll need a CPU that can handle this, so if you have any suggestions for a motherboard and CPU that can keep four 1080 Tis fed with tensors, that would be helpful too. I can't seem to find a straight answer on why this setup isn't more popular, because the cost/performance ratio for the 1080 Ti seems great.
Thanks
EDIT
- So it sounds like a 3090 is the best move, to avoid the complexities that come with multiple GPUs. What would you all think of a pip package that let you benchmark your setup for deep learning and then compare your results with other users? Would that be something you'd be interested in?
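A minimal sketch of what the core of such a benchmark package could look like: time a repeated matrix multiply and report throughput. This uses NumPy on CPU as a stand-in for a real framework kernel, and the names (`bench_matmul`, the sizes) are made up for illustration, not from any existing package.

```python
import time
import numpy as np

def bench_matmul(n: int = 1024, repeats: int = 5) -> float:
    """Time an n x n float32 matrix multiply; return best seconds per run."""
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        a @ b
        best = min(best, time.perf_counter() - start)
    return best

if __name__ == "__main__":
    secs = bench_matmul()
    # An n x n matmul costs about 2*n^3 floating-point operations.
    gflops = 2 * 1024**3 / secs / 1e9
    print(f"best of 5: {secs:.4f}s  (~{gflops:.1f} GFLOP/s)")
```

A real version would run the same kernel through the user's framework on GPU and upload the numbers somewhere for comparison, but the time-a-kernel-take-the-best-run structure would be the same.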
scraper01 t1_ix6t386 wrote
Four 1080 Tis will get you roughly the performance of a single 3090 if you are not using mixed precision. Once tensor cores are enabled, the difference is night and day: in both training and inference, a single 3090 will blow your multi-GPU rig out of the water. On top of that, you'll need a motherboard plus a CPU with lots of PCIe lanes, and those aren't cheap; pro-grade stuff with enough lanes will run north of $10k. Not worth it.