5death2moderation t1_j2wrwrl wrote

Tesla M40s and now P100s were 200 dollars apiece just four years after release. V100s have not depreciated as quickly though, presumably because their tensor cores keep performance competitive. I would assume A100s will suffer the same fate of staying very expensive for many years to come, sadly.

6

5death2moderation t1_iumr1v1 wrote

As someone who actually owns an M1 and has a job running large models in the cloud - it's not nearly as bad as I was expecting. MPS support in PyTorch is growing every day; most recently I have been able to finetune various sentence transformers and GPT-J at reasonable speeds (before pushing to GPUs in the cloud). If I were choosing the laptop I would go with Linux + GPU obviously, but our mostly clueless executive chose the M1. The upside with the M1 is that I can use the 64 GB of system memory for loading models, whereas the most GPU memory I could get in an NVIDIA laptop is 16-24 GB.
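For anyone curious what targeting the MPS backend looks like, here's a minimal sketch (assumes PyTorch 1.12+, where `torch.backends.mps` was added); the tiny `Linear` model is just a stand-in, and the code falls back to CPU on machines without Apple Silicon:

```python
import torch

# Use Apple's Metal Performance Shaders backend when available,
# otherwise fall back to CPU (e.g. on Linux or older PyTorch builds).
use_mps = hasattr(torch.backends, "mps") and torch.backends.mps.is_available()
device = torch.device("mps" if use_mps else "cpu")

# Stand-in model: any nn.Module moves to MPS the same way, via .to(device).
model = torch.nn.Linear(16, 4).to(device)
x = torch.randn(8, 16, device=device)
y = model(x)
print(y.shape)  # torch.Size([8, 4])
```

Unified memory is why the 64 GB matters: on the M1 the MPS device allocates out of the same pool as the CPU, so models too big for a 16-24 GB discrete laptop GPU can still load.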

1