5death2moderation t1_j63gfol wrote
Reply to [D] MusicLM: Generating Music From Text by carlthome
No code, so who cares.
5death2moderation t1_j3qzs9j wrote
Reply to [R] Diffusion language models by benanne
>as it has in perceptual domains, like audio
citation needed
5death2moderation t1_j2wrwrl wrote
Reply to comment by currentscurrents in [R] AMD Instinct MI25 | Machine Learning Setup on the Cheap! by zveroboy152
Tesla M40s, and now P100s, were $200 apiece just four years after release. V100s have not depreciated as quickly, though, presumably because their tensor cores keep them performance-competitive. Sadly, I assume A100s will suffer the same fate and stay very expensive for years to come.
5death2moderation t1_iumr1v1 wrote
As someone who actually owns an M1 and has a job running large models in the cloud: it's not nearly as bad as I was expecting. MPS support in PyTorch is growing every day; most recently I've been able to finetune various sentence transformers and GPT-J at reasonable speeds (before pushing to GPUs in the cloud). If I were choosing the laptop I'd obviously go with Linux + GPU, but our mostly clueless executive chose the M1. The upside with the M1 is that I can use the 64 GB of system memory for loading models, whereas the most GPU memory I could get in an Nvidia laptop is 16-24 GB.
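For anyone curious what "MPS support in PyTorch" looks like in practice, here's a minimal sketch. It assumes a recent PyTorch build; `torch.backends.mps.is_available()` is the real API for detecting the Apple-silicon backend, and the code falls back to CPU if it's absent (the tiny `Linear` model is just a placeholder, not the actual finetuning setup described above):

```python
import torch

# Use the Apple-silicon "mps" backend when available, otherwise fall back to CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Placeholder model; in practice this would be a sentence transformer or GPT-J.
model = torch.nn.Linear(16, 4).to(device)

# A dummy batch allocated on the same device as the model.
x = torch.randn(2, 16, device=device)
out = model(x)
print(out.shape)  # torch.Size([2, 4])
```

The only M1-specific part is the device selection; everything downstream (`.to(device)`, tensors created with `device=`) is the same code you'd run on CUDA, which is what makes prototyping locally before pushing to cloud GPUs workable.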
5death2moderation t1_j6my8yj wrote
Reply to comment by currentscurrents in [D] What's stopping you from working on speech and voice? by jiamengial
It is out, and it's 3x more expensive than its A100 equivalent was 2 years ago. The prices are not going down for a very long time, probably not until the next generation is out.