--dany-- t1_iy1zq6n wrote

The 3090 has an NVLink bridge to connect two cards and pool memory. Theoretically you’d have 2x the computing power and 48GB of VRAM to do the job. If VRAM size is important for your big model and you have a beefy PSU, then this is the way to go. Otherwise just go with a 4090.
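
For the 2x compute part, the usual setup is data-parallel training across both cards. A minimal sketch, assuming PyTorch (no framework was mentioned above) and a toy model:

```python
import torch
import torch.nn as nn

# Toy model just for illustration; swap in your own architecture.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10))

# DataParallel splits each batch across all visible GPUs (e.g. two 3090s).
# Note it replicates the full model on every card, so it gives you the
# 2x compute but does not let one model span 48GB.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model = model.cuda()

x = torch.randn(64, 1024).cuda()   # batch is sharded across both GPUs
print(model(x).shape)              # torch.Size([64, 10])
```

For serious training, DistributedDataParallel is the more idiomatic choice, but the idea is the same.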

If you don’t need to train models frequently, Colab or a paid GPU rental service might be easier on your wallet and power bill. For example, it’s only about $2 per hour to rent 4x RTX A6000 from some providers.
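
For a rough feel of the rent-vs-buy trade-off, here’s a back-of-the-envelope calculation; only the $2/hour figure comes from above, the hardware and power prices are placeholder assumptions:

```python
# Rough rent-vs-buy break-even. Only the $2/hour rental figure is from above;
# every other number is an illustrative assumption.
rental_rate = 2.00        # $/hour for 4x RTX A6000 rental
purchase_cost = 3000.00   # assumed price for 2x 3090 + NVLink bridge
power_kw = 0.8            # assumed combined draw under load (kW)
electricity = 0.15        # assumed $/kWh

hourly_cost_owned = power_kw * electricity
break_even = purchase_cost / (rental_rate - hourly_cost_owned)
print(f"Owning pays off after roughly {break_even:.0f} hours of training")
```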

5

CKtalon t1_iy2n56h wrote

NVLink doesn’t pool VRAM no matter what Nvidia’s marketing says. I have NVLink. It just doesn’t.
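
The way to actually use both cards’ 24GB for one big model is to split it across the GPUs yourself (naive model parallelism), since data parallelism just replicates it. A rough sketch, assuming PyTorch and a model that splits cleanly in two:

```python
import torch
import torch.nn as nn

class SplitModel(nn.Module):
    """Naive model parallelism: half the layers live on each GPU."""
    def __init__(self):
        super().__init__()
        self.part1 = nn.Sequential(nn.Linear(1024, 8192), nn.ReLU()).to("cuda:0")
        self.part2 = nn.Linear(8192, 10).to("cuda:1")

    def forward(self, x):
        x = self.part1(x.to("cuda:0"))
        # Activations get copied card-to-card here; NVLink makes that copy
        # faster, but each GPU still only addresses its own 24GB.
        return self.part2(x.to("cuda:1"))

model = SplitModel()
print(model(torch.randn(32, 1024)).shape)   # torch.Size([32, 10])
```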

8

somebodyenjoy OP t1_iy20svg wrote

Hi, thanks for your reply. So 2 3090s will be faster than one 4090, correct?

1

--dany-- t1_iy2149l wrote

Not by much, according to some benchmarks. So speed isn’t the deciding factor here. Your main concern is whether the model and training data can fit in your VRAM.
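
A quick way to sanity-check the “does it fit” question is to count parameters, gradients, and optimizer state. A rough sketch (the parameter counts are just examples, activation memory is ignored and grows with batch size):

```python
# Back-of-the-envelope training VRAM for fp32 + Adam:
# weights (4 B) + gradients (4 B) + two optimizer moments (8 B) = ~16 B/param,
# before counting activations.
def training_vram_gb(num_params, bytes_per_param=16):
    return num_params * bytes_per_param / 1e9

for params in (350e6, 1.3e9, 2.7e9):
    print(f"{params/1e9:.2f}B params -> ~{training_vram_gb(params):.0f} GB + activations")
```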

6

somebodyenjoy OP t1_iy23k2m wrote

I do hyperparameter tuning too, so the same model will have to be trained multiple times. The more runs the better, as I can try more architectures. So speed is important. But you’re saying the 4090 is not much better than the 3090 in terms of speed, huh?
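
Since the real constraint is how many tuning runs fit in a time budget, a toy calculation shows what a speedup buys (every number below is a made-up assumption, just to illustrate the scaling):

```python
# How many tuning runs fit in a fixed wall-clock budget?
# Per-run time, budget, and speedups are all illustrative assumptions.
budget_hours = 72
hours_per_run_3090 = 3.0

for label, speedup in [("1x 3090", 1.0), ("1x 4090", 1.5), ("2x 3090", 1.8)]:
    runs = int(budget_hours * speedup / hours_per_run_3090)
    print(f"{label}: ~{runs} runs in {budget_hours} h")
```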

1

--dany-- t1_iy29wqs wrote

I’m saying 2x 3090s are not much better than a single 4090. According to Lambda Labs benchmarks, a 4090 is about 1.3 to 1.9 times faster than a 3090. If you’re after speed, a 4090 definitely makes more sense: it’s only slightly slower than 2x 3090s, but much more power efficient and cheaper.
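
Putting rough numbers on that (the 1.3–1.9x range is from the Lambda Labs figures above; the multi-GPU scaling efficiency is an assumption):

```python
# Relative training throughput, taking a single 3090 as 1.0.
scaling_eff = 0.9               # assumed multi-GPU scaling efficiency
dual_3090 = 2 * scaling_eff     # ~1.8x a single 3090
for speedup_4090 in (1.3, 1.9): # Lambda Labs range quoted above
    print(f"2x 3090 vs one 4090 at {speedup_4090}x: {dual_3090/speedup_4090:.2f}x")
```

So depending on the workload, two 3090s land somewhere between slightly slower and ~40% faster than one 4090.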

4