Neural Network inference on a "3-cent" 8-bit Microcontroller

Buoyed by the surprisingly good performance of neural networks with quantization-aware training on the CH32V003, I wondered how far this can be pushed. How much can we compress a neural network whi…
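
For readers unfamiliar with quantization-aware training, here is a minimal PyTorch sketch of the general idea: the weights are fake-quantized to a few levels during the forward pass, while gradients flow to the full-precision weights via a straight-through estimator. The layer sizes, bit width, and names (`QuantLinear`, `TinyMLP`) are illustrative assumptions, not the actual code or configuration used in the project.

```python
# Sketch of quantization-aware training (QAT) for a tiny MLP.
# Assumptions: PyTorch, symmetric per-tensor weight quantization,
# 2-bit (ternary) weights; all sizes are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuantLinear(nn.Linear):
    """Linear layer whose weights are fake-quantized to `bits` levels in the
    forward pass; a straight-through estimator keeps training differentiable."""
    def __init__(self, in_features, out_features, bits=2):
        super().__init__(in_features, out_features, bias=False)
        self.bits = bits

    def forward(self, x):
        levels = 2 ** (self.bits - 1) - 1                    # e.g. 1 for 2-bit (ternary)
        scale = self.weight.abs().max() / levels + 1e-8      # per-tensor scale
        w_q = torch.clamp(torch.round(self.weight / scale), -levels, levels) * scale
        # Straight-through estimator: use quantized weights in the forward pass,
        # but route gradients to the underlying full-precision weights.
        w_q = self.weight + (w_q - self.weight).detach()
        return F.linear(x, w_q)

class TinyMLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = QuantLinear(16 * 16, 16, bits=2)          # hypothetical 16x16 input
        self.fc2 = QuantLinear(16, 10, bits=2)                # 10-class output

    def forward(self, x):
        x = F.relu(self.fc1(x.flatten(1)))
        return self.fc2(x)
```

Because the network already "sees" the quantization error during training, the exported low-bit weights typically lose far less accuracy than post-training quantization would, which is what makes inference on such tiny microcontrollers feasible at all.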

Read more here: External Link