CyberDainz
CyberDainz t1_jdzu9e4 wrote
looks similar to "Cold Diffusion"
CyberDainz t1_jcfo8wk wrote
Reply to comment by lostmsu in [N] PyTorch 2.0: Our next generation release that is faster, more Pythonic and Dynamic as ever by [deleted]
exception:
Windows not yet supported for torch.compile
CyberDainz t1_jcfe382 wrote
Reply to [N] PyTorch 2.0: Our next generation release that is faster, more Pythonic and Dynamic as ever by [deleted]
torch.compile does not work on Windows :(
CyberDainz t1_j8weqb8 wrote
Reply to [D] Lion , An Optimizer That Outperforms Adam - Symbolic Discovery of Optimization Algorithms by ExponentialCookie
So, technically this is a sign-based (binary) optimizer that updates each weight by either -1 or +1 multiplied by the learning rate. It should be tested with "Learning Rate Dropout", i.e. a 30% chance to apply the -1/+1 update to a given weight, otherwise no update.
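A minimal sketch of that idea (assuming a plain sign-of-gradient step rather than Lion's full momentum interpolation; the function name and keep_prob value are illustrative):

import torch

def sign_step_with_lr_dropout(param, grad, lr=1e-4, keep_prob=0.3):
    # Each weight moves by +/- lr according to the sign of its gradient.
    # "Learning Rate Dropout": keep only a random 30% of the per-weight
    # updates this step; the remaining weights are left untouched.
    update = torch.sign(grad) * lr
    mask = (torch.rand_like(param) < keep_prob).to(param.dtype)
    param.data -= update * mask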
CyberDainz t1_j8eycl2 wrote
Reply to [R] DIGIFACE-1M — synthetic dataset with one million images for face recognition by t0ns0fph0t0ns
112x112 resolution. Completely useless in 2k23
CyberDainz t1_j715ayh wrote
use trainable normalization
# per-channel affine parameters (beta initialized to 0, gamma to 1)
self._in_beta = nn.Parameter(torch.zeros(in_ch))
self._in_gamma = nn.Parameter(torch.ones(in_ch))
...
self._out_gamma = nn.Parameter(torch.ones(out_ch))
self._out_beta = nn.Parameter(torch.zeros(out_ch))
...
# x is NCHW; broadcast the shift/scale over batch and spatial dims
x = x + self._in_beta[None, :, None, None]
x = x * self._in_gamma[None, :, None, None]
...
x = x * self._out_gamma[None, :, None, None]
x = x + self._out_beta[None, :, None, None]
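For context, a self-contained sketch of how these parameters could wrap a layer (the module name and the conv in the middle are assumptions, not part of the original snippet):

import torch
from torch import nn

class AffineConv(nn.Module):
    # Trainable per-channel shift/scale before and after an inner layer.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self._in_beta = nn.Parameter(torch.zeros(in_ch))
        self._in_gamma = nn.Parameter(torch.ones(in_ch))
        self._conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self._out_gamma = nn.Parameter(torch.ones(out_ch))
        self._out_beta = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x):
        x = (x + self._in_beta[None, :, None, None]) * self._in_gamma[None, :, None, None]
        x = self._conv(x)
        return x * self._out_gamma[None, :, None, None] + self._out_beta[None, :, None, None]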
CyberDainz t1_j3kv4f5 wrote
Reply to [D] Why is Vulkan as a backend not used in ML over some offshoot GPU specification? by I_will_delete_myself
ML is not just the backend. Technically you can write and run ML programs on OpenCL or OpenGL, but speed will be at least 2-4x worse than with a specialized backend like CUDA / ROCm.
It's all about tuning programs (such as matmul) for each GPU model to achieve maximum performance. CUDA / ROCm already ship such tuned programs.
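A toy sketch of what that per-GPU tuning looks like (run_kernel and the candidate configs are placeholders; real backends like cuBLAS ship kernels already tuned this way per architecture):

import time
import torch

def autotune(run_kernel, candidate_configs, iters=10):
    # Time each candidate configuration (e.g. tile / work-group sizes)
    # on the current GPU and keep the fastest one.
    best_cfg, best_time = None, float("inf")
    for cfg in candidate_configs:
        if torch.cuda.is_available():
            torch.cuda.synchronize()
        t0 = time.perf_counter()
        for _ in range(iters):
            run_kernel(cfg)
        if torch.cuda.is_available():
            torch.cuda.synchronize()
        elapsed = (time.perf_counter() - t0) / iters
        if elapsed < best_time:
            best_cfg, best_time = cfg, elapsed
    return best_cfg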
CyberDainz t1_j3ayd88 wrote
Reply to comment by IndieAIResearcher in [D] Best way to package Pytorch models as a standalone application by Atom_101
Because each project has its own configuration.
If you make a framework out of this, you get a horror like Bazel, and you will spend time learning how to work with Bazel. But what for? Building a project is a simple operation: create folders and some files, download, unzip, call Popen with a certain env, clean up the __pycache__ folders at the end, and archive it - that's it! The project is ready.
All you need to do is spend 20 minutes figuring out how WindowsBuilder.py works and adapting it to your project.
WindowsBuilder.py is standalone and requires only Python 3.
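For illustration, a minimal sketch of those steps (URLs, paths and package names below are placeholders, not the actual WindowsBuilder.py logic):

import os, shutil, subprocess, urllib.request
from pathlib import Path

def build(release_dir: Path):
    release_dir.mkdir(parents=True, exist_ok=True)

    # download and unzip a dependency (e.g. an embeddable Python build)
    archive = release_dir / "python.zip"
    urllib.request.urlretrieve("https://example.com/python-embed.zip", archive)
    shutil.unpack_archive(str(archive), str(release_dir / "python"), "zip")
    archive.unlink()

    # run a tool inside the release folder with a controlled environment
    env = dict(os.environ, PYTHONHOME=str(release_dir / "python"))
    subprocess.Popen(["python", "-m", "pip", "install", "numpy"], env=env).wait()

    # clean up __pycache__ folders, then archive the result
    for pycache in release_dir.rglob("__pycache__"):
        shutil.rmtree(pycache)
    shutil.make_archive(str(release_dir), "zip", root_dir=str(release_dir))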
CyberDainz t1_j3ax5f5 wrote
Reply to comment by IndieAIResearcher in [D] Best way to package Pytorch models as a standalone application by Atom_101
What do you mean?
CyberDainz t1_j3awj2p wrote
look at my project https://github.com/iperov/DeepFaceLive
I made a builder that creates an all-in-one standalone folder for Windows containing everything needed to run a Python application, including the CUDA runtime. The release folder also contains a portable VSCode with the project already configured, so you only edit the folder's code. No conda, no docker and other redundant shit.
The builder is located at https://github.com/iperov/DeepFaceLive/blob/master/build/windows/WindowsBuilder.py and can be extended to suit your needs.
CyberDainz t1_iyvj7ti wrote
Reply to [D] PyTorch 2.0 Announcement by joshadel
So, with torch.compile, people can keep writing graph-unfriendly code with arbitrary dynamic shapes and direct Python code over tensors?
CyberDainz t1_je6qsbb wrote
Reply to [D] Improvements/alternatives to U-net for medical images segmentation? by viertys
The success of generalization for segmentation depends not only on the network configuration, but also on the augmentation and on pretraining on a non-mask target.
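As a small illustration of the augmentation part, paired transforms for segmentation (the specific transforms are just examples):

import torch

def augment(image, mask):
    # The same random flip / 90-degree rotation must be applied to the
    # image and to its mask so they stay aligned.
    if torch.rand(()) < 0.5:
        image, mask = image.flip(-1), mask.flip(-1)
    k = int(torch.randint(0, 4, ()))
    image = torch.rot90(image, k, dims=(-2, -1))
    mask = torch.rot90(mask, k, dims=(-2, -1))
    return image, mask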
try my new project Deep Roto https://iperov.github.io/DeepXTools/