ML4Bratwurst t1_jbyzell wrote
Can't wait for the 1-bit quantization
Dendriform1491 t1_jbzj7zu wrote
Wait until you hear about the 1/2 bit.
currentscurrents t1_jc03yjr wrote
You could pack more bits into your bit with in-memory compression. You'd need hardware support for decompression inside the processor core.
Dendriform1491 t1_jc0bgxd wrote
Or make it data free altogether
Upstairs_Suit_9464 t1_jbz8dyt wrote
I have to ask… is this a joke or are people actually working on digitizing trained networks?
kkg_scorpio t1_jbz91de wrote
Check out the terms "quantization aware training" and "post training quantization".
8-bit, 4-bit, 2-bit, hell even 1-bit inference are scenarios which are extremely relevant for edge devices.
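Post-training quantization can be sketched in a few lines of NumPy: map float weights to low-bit integers with a per-tensor scale, then dequantize at inference. This is a minimal illustration, not any particular framework's API; the function names are hypothetical.

```python
import numpy as np

def quantize(w, num_bits=8):
    """Uniform symmetric post-training quantization of a weight tensor.
    Hypothetical helper for illustration: one scale per tensor."""
    qmax = 2 ** (num_bits - 1) - 1           # e.g. 127 for 8-bit
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize(w, num_bits=8)
w_hat = dequantize(q, s)
# Rounding error per weight is bounded by about scale / 2
print(np.max(np.abs(w - w_hat)))
```

Quantization-aware training differs in that this rounding is simulated during training (with a straight-through gradient), so the network learns weights that survive the precision loss.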
Taenk t1_jbzaeau wrote
Isn't 1-bit quantisation qualitatively different, since you can apply optimizations that are only available when the parameters are fully binary?
AsIAm t1_jc168cw wrote
It is. But that doesn't mean 1-bit neural nets are impossible. Even Turing himself toyed with such networks – https://www.npl.co.uk/getattachment/about-us/History/Famous-faces/Alan-Turing/80916595-Intelligent-Machinery.pdf?lang=en-GB
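The qualitative difference is that with fully binary ({-1, +1}) weights and activations, a dot product collapses into XNOR plus a popcount, replacing multiply-accumulates entirely. A minimal Python sketch (function name and bit-packing convention are my own, for illustration):

```python
def binary_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two length-n {-1, +1} vectors packed as integers
    (bit 1 = +1, bit 0 = -1), computed via XNOR and popcount."""
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ b_bits) & mask    # 1 wherever the signs agree
    matches = bin(xnor).count("1")      # popcount
    return 2 * matches - n              # agreements minus disagreements

# [+1, -1, +1, +1] . [+1, -1, -1, +1] = 1 + 1 - 1 + 1 = 2
print(binary_dot(0b1011, 0b1001, 4))  # → 2
```

On hardware, XNOR and popcount are single cheap instructions over 64 weights at a time, which is why binarized networks promise large speedups that 2-bit or 4-bit schemes can't match.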