
model-compression

Here are 165 public repositories matching this topic...

nni
pkubik commented Mar 14, 2022

Describe the issue:
While computing channel dependencies, reshape_break_channel_dependency runs the following code to check whether the number of input channels equals the number of output channels:

in_shape = op_node.auxiliary['in_shape']
out_shape = op_node.auxiliary['out_shape']
in_channel = in_shape[1]
out_channel = out_shape[1]
return in_channel != out_channel

This is correct …

Labels: bug, help wanted, good first issue, model compression

micronet, a model compression and deploy lib.

Compression:
  • Quantization: quantization-aware training (QAT), covering high-bit (>2b) methods (DoReFa; "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b)/ternary and binary methods (TWN/BNN/XNOR-Net), plus post-training quantization (PTQ), 8-bit (TensorRT).
  • Pruning: normal, regular, and group convolutional channel pruning.
  • Group convolution structure.
  • Batch-normalization fuse for quantization (a generic sketch of this folding follows this entry).

Deploy: TensorRT, fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), dynamic shape.

  • Updated Oct 6, 2021
  • Python

A list of high-quality (newest) AutoML works and lightweight models, including: 1) Neural Architecture Search; 2) Lightweight Structures; 3) Model Compression, Quantization and Acceleration; 4) Hyperparameter Optimization; 5) Automated Feature Engineering.

  • Updated Jun 19, 2021
avishreekh commented May 7, 2021

We also need to benchmark the Lottery Tickets pruning algorithm and the quantization algorithms. The models used for this would be the student networks discussed in #105 (ResNet18, MobileNet v2, Quantization v2). A minimal sketch of such a setup follows the lists below.

Pruning (benchmark up to 40, 50, and 60% pruned weights)

  • Lottery Tickets

Quantization

  • Static
  • QAT
Labels: help wanted, good first issue, Priority: High
