Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://nervanasystems.github.io/distiller
Topics: deep-neural-networks, jupyter-notebook, pytorch, regularization, pruning, quantization, group-lasso, distillation, onnx, truncated-svd, network-compression, pruning-structures, early-exit, automl-for-compression

Updated Jul 23, 2020 - Jupyter Notebook

The idea is to add a more advanced filter pruning method capable of demonstrating SOTA results in model compression/optimization.

I suggest reimplementing the method from https://github.com/cmu-enyac/LeGR and, as a first step, reproducing its baseline results for MobileNet v2 on CIFAR-100. A sketch of the core ranking step is given below.
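For context, here is a minimal, hypothetical sketch of the global-ranking step LeGR is built around: per-layer affine coefficients (alpha, kappa) rescale filter L2 norms so that they become comparable across layers, and the lowest-scoring fraction of filters is then masked out globally. In the actual method the coefficients are learned with an evolutionary search; the function and parameter names below are illustrative and not taken from the LeGR codebase.

```python
import torch
import torch.nn as nn

def global_filter_ranking(model, affine, prune_ratio=0.5):
    """Rank conv filters globally via per-layer affine-transformed L2 norms.

    affine: dict mapping layer name -> (alpha, kappa); LeGR learns these
    with an evolutionary search, here they are assumed to be given.
    Returns a dict: layer name -> list of booleans (True = keep filter).
    """
    scores, index = [], []
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            alpha, kappa = affine.get(name, (1.0, 0.0))
            # L2 norm of each output filter's weights
            norms = module.weight.detach().flatten(1).norm(p=2, dim=1)
            scores.append(alpha * norms + kappa)
            index.extend((name, i) for i in range(module.out_channels))
    all_scores = torch.cat(scores)
    # Global threshold: the prune_ratio quantile of all transformed scores
    threshold = torch.quantile(all_scores, prune_ratio)
    masks = {}
    for (name, _), score in zip(index, all_scores):
        masks.setdefault(name, []).append(bool(score >= threshold))
    return masks

# Usage sketch (hypothetical identity transform for every conv layer):
# model = torchvision.models.mobilenet_v2(num_classes=100)
# affine = {name: (1.0, 0.0) for name, m in model.named_modules()
#           if isinstance(m, nn.Conv2d)}
# masks = global_filter_ranking(model, affine, prune_ratio=0.5)
```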
cc'ed @vshampor, @vanyalzr.