Submitted by I_will_delete_myself t3_1068gl6 in MachineLearning
There's always a need to write GPU-specific code over and over again for each new vendor. We've got CUDA, ROCm, Metal, and will soon need something for Intel too. I know there are already a lot of tools built on CUDA, which makes it hard to replace. But for something like Apple devices (and Apple has a history of not giving a darn about compute unless it's the iPhone or iPad), there's a ton of operations that have to be implemented from scratch, and it seems CUDA is the only thing you can rely on being supported. I'm curious what you guys think about why this isn't a thing in ML, even though the game industry uses open standards like Vulkan all the time.
Edit: Shoot, I just realized PyTorch was prototyping Vulkan as a backend. https://pytorch.org/tutorials/prototype/vulkan_workflow.html
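For anyone curious, here's a minimal sketch of what that prototype workflow looks like. `optimize_for_mobile` and `torch.is_vulkan_available` are real PyTorch entry points, but the tiny `nn.Sequential` stand-in is mine (the tutorial itself uses MobileNetV2), and the Vulkan branch only actually runs on a torch build compiled with Vulkan support:

```python
import torch
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile

# Tiny stand-in model (hypothetical; the tutorial converts MobileNetV2)
model = nn.Sequential(nn.Linear(4, 4), nn.ReLU())
model.eval()

# Script the model first; the Vulkan rewrite works on TorchScript modules
script_model = torch.jit.script(model)

if torch.is_vulkan_available():
    # Rewrites the graph to use Vulkan ops where they are supported
    vulkan_model = optimize_for_mobile(script_model, backend="vulkan")
    # Inputs are moved to the Vulkan device, outputs copied back to CPU
    out = vulkan_model(torch.rand(1, 4).to("vulkan")).cpu()
else:
    # Fallback: ordinary CPU execution of the scripted model
    out = script_model(torch.rand(1, 4))

print(out.shape)
```

On a stock CPU/CUDA build `is_vulkan_available()` just returns False, so this falls through to the CPU path, which kind of illustrates the portability problem being discussed.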
suflaj t1_j3haky4 wrote
Why would it be used? It doesn't begin to compare to CUDA and cuDNN. Nothing really does. And Vulkan specifically is made for graphics pipelines, not for general-purpose compute. To be cross-compatible, it usually sends compute to be done on the CPU.
It's not that there is a conspiracy to use proprietary NVIDIA software - there just isn't anything better than it.