# pruning

Here are 201 public repositories matching this topic...
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller
Topics: deep-neural-networks, jupyter-notebook, pytorch, regularization, pruning, quantization, group-lasso, distillation, onnx, truncated-svd, network-compression, pruning-structures, early-exit, automl-for-compression
micronet, a model compression and deployment library. Compression: 1) quantization: quantization-aware training (QAT), both high-bit (>2b) methods (DoReFa; "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b)/ternary and binary methods (TWN/BNN/XNOR-Net), plus post-training quantization (PTQ) to 8-bit (TensorRT); 2) pruning: normal, regular, and group convolutional channel pruning; 3) group convolution structure; 4) batch-normalization fusion for quantization. Deployment: TensorRT with fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), and dynamic shapes.
Topics: pytorch, pruning, convolutional-networks, quantization, xnor-net, tensorrt, model-compression, bnn, neuromorphic-computing, group-convolution, onnx, network-in-network, tensorrt-int8-python, dorefa, twn, network-slimming, integer-arithmetic-only, quantization-aware-training, post-training-quantization, batch-normalization-fuse
Updated Feb 1, 2021 - Python
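The post-training-quantization side of entries like this one boils down to mapping float weights onto low-bit integers plus a scale. A minimal numpy sketch of symmetric per-tensor 8-bit PTQ (function names are illustrative, not micronet's API):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor 8-bit quantization: w ~= scale * q, q in [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map int8 codes back to floats for comparison against the originals."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.05, size=(64, 3, 3, 3)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Rounding bounds the per-weight error by half a quantization step.
max_err = np.abs(w - w_hat).max()
```

Real toolchains refine this with per-channel scales, zero points for asymmetric ranges, and calibration data for activations, but the scale-and-round core is the same.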
A toolkit for Keras and TensorFlow to optimize ML models for deployment, including quantization and pruning.
Topics: machine-learning, sparsity, compression, deep-learning, tensorflow, optimization, keras, ml, pruning, quantization, model-compression, quantized-training, quantized-neural-networks, quantized-networks
Updated Feb 4, 2021 - Python
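Pruning in toolkits like this one typically means zeroing low-magnitude weights until a target sparsity is reached. A minimal numpy sketch of the idea (illustrative only; the toolkit's real API wraps this in Keras layers and a sparsity schedule):

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude weights until `sparsity` fraction are zero.
    Returns the pruned tensor and the boolean keep-mask."""
    k = int(np.floor(sparsity * w.size))
    if k == 0:
        return w.copy(), np.ones_like(w, dtype=bool)
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    mask = np.abs(w) > threshold
    return w * mask, mask

rng = np.random.default_rng(1)
w = rng.normal(size=(128, 128))
pruned, mask = magnitude_prune(w, 0.9)
achieved = 1.0 - mask.mean()  # fraction of weights that are now zero
```

In training frameworks the mask is reapplied after every optimizer step so the zeros stay zero while the surviving weights keep learning.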
A curated list of neural network pruning resources.
Updated Dec 4, 2020
PyTorch implementation of [1611.06440] Pruning Convolutional Neural Networks for Resource Efficient Inference.
Updated Jul 12, 2019 - Python
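That paper ranks channels by a first-order Taylor estimate of the loss change caused by removing them — roughly the absolute value of the feature map's mean activation-times-gradient product. A hedged numpy sketch of that criterion (names are illustrative, not the repo's API):

```python
import numpy as np

def taylor_scores(activations, gradients):
    """First-order Taylor criterion, roughly as in arXiv:1611.06440:
    per example, take |mean over the spatial map of activation * gradient|,
    then average over the batch. Inputs: (batch, channels, h, w)."""
    prod = activations * gradients
    per_example = np.abs(prod.mean(axis=(2, 3)))  # (batch, channels)
    return per_example.mean(axis=0)               # (channels,)

rng = np.random.default_rng(2)
acts = rng.normal(size=(8, 16, 4, 4))
grads = rng.normal(size=(8, 16, 4, 4))
scores = taylor_scores(acts, grads)
prune_order = np.argsort(scores)  # lowest-scoring channels are pruned first
```

Because it reuses activations and gradients already computed during training, the criterion costs almost nothing on top of a normal backward pass.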
PaddleSlim is an open-source library for deep model compression and architecture search.
Topics: pruning, quantization, nas, knowledge-distillation, evolution-strategy, model-compression, neural-architecture-search, hyperparameter-search, autodl
Updated Feb 4, 2021 - Python
Embedded and mobile deep learning research resources.
Topics: deep-neural-networks, deep-learning, inference, pruning, quantization, neural-network-compression, mobile-deep-learning, embedded-ai, efficient-neural-networks, mobile-ai, mobile-inference
Updated Oct 25, 2019
Pruning and other network surgery for trained Keras models.
Updated Sep 21, 2020 - Python
Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration (CVPR 2019 Oral).
Updated Sep 10, 2019 - Python
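FPGM prunes the filters nearest the geometric median of a layer's filters — the most "replaceable" ones — instead of the smallest-norm ones. A simplified numpy sketch that uses total pairwise distance as a stand-in for closeness to the geometric median (illustrative, not the paper's code):

```python
import numpy as np

def fpgm_prune_indices(filters, n_prune):
    """Return the n_prune most redundant filters: those with the smallest
    total Euclidean distance to all other filters in the layer, a proxy
    for being closest to the geometric median."""
    flat = filters.reshape(filters.shape[0], -1)
    # pairwise Euclidean distances between all filters
    d = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=-1)
    total = d.sum(axis=1)
    return np.argsort(total)[:n_prune]

rng = np.random.default_rng(3)
w = rng.normal(size=(32, 16, 3, 3))  # (out_channels, in_channels, kh, kw)
redundant = fpgm_prune_indices(w, 4)
```

The intuition: a filter near the geometric median can be approximated by the remaining filters, so removing it loses little, even when its norm is large.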
Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks.
Updated Oct 2, 2019 - Python
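Unlike hard pruning, soft filter pruning zeroes the lowest-norm filters after each training epoch but leaves them in the network, so subsequent gradient updates can revive them. A minimal numpy sketch of one such step (function name is hypothetical):

```python
import numpy as np

def soft_filter_prune(w, prune_ratio):
    """One soft-pruning step: zero the filters with the smallest L2 norm,
    but keep them in the tensor so later training can update them again."""
    norms = np.linalg.norm(w.reshape(w.shape[0], -1), axis=1)
    n_prune = int(prune_ratio * w.shape[0])
    idx = np.argsort(norms)[:n_prune]
    w = w.copy()
    w[idx] = 0.0  # zeroed, not removed from the model
    return w, idx

rng = np.random.default_rng(4)
w = rng.normal(size=(64, 32, 3, 3))
w_soft, zeroed = soft_filter_prune(w, 0.25)
```

Only after training finishes are the still-zero filters actually removed, which is when the inference speedup is realized.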
YOLO model compression and multi-dataset training.
Updated Feb 3, 2021 - Python
Reference ImageNet implementation of the SelecSLS CNN architecture proposed in the SIGGRAPH 2020 paper "XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera". The repository also includes code for pruning the model based on implicit sparsity emerging from adaptive gradient descent methods, as detailed in the CVPR 2019 paper "On implicit filter level sparsity in Convolutional Neural Networks".
Topics: sparsity, deep-learning, efficient, cnn, pytorch, imagenet, pruning, siggraph, pytorch-implementation, cvpr2019, efficient-architectures
Updated Jul 23, 2020 - Python
Infrastructures™ for Machine Learning Training/Inference in Production.
Topics: kubernetes, machine-learning, apache-spark, deep-learning, artificial-intelligence, awesome-list, pruning, quantization, knowledge-distillation, deep-learning-framework, model-compression, apache-arrow, federated-learning, machine-learning-systems, apache-mesos
Updated May 24, 2019
Observations and notes for understanding the workings of neural network models, plus other thought experiments, using TensorFlow.
Topics: neural-network, generative-adversarial-network, generative-model, pruning, optimal-brain-damage, uncertainty-neural-networks
Updated Oct 27, 2019 - Jupyter Notebook
A PyTorch toolkit for structured neural network pruning that maintains layer dependencies automatically.
Updated Feb 3, 2021 - Python
Awesome machine learning model compression research papers, tools, and learning material.
Updated May 10, 2020
PyTorch implementation of the CVPR 2020 (Oral) paper "HRank: Filter Pruning using High-Rank Feature Map".
Updated Oct 14, 2020 - Python
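HRank scores each channel by the average matrix rank of its output feature maps over a batch of inputs; consistently low-rank maps are judged less informative and pruned first. A rough numpy sketch of that scoring (illustrative, not the paper's code):

```python
import numpy as np

def hrank_scores(feature_maps):
    """Average matrix rank of each channel's feature maps across a batch.
    feature_maps: (batch, channels, h, w). Lower scores => prune first."""
    b, c, h, w = feature_maps.shape
    ranks = np.zeros(c)
    for j in range(c):
        ranks[j] = np.mean(
            [np.linalg.matrix_rank(feature_maps[i, j]) for i in range(b)]
        )
    return ranks

rng = np.random.default_rng(5)
fmaps = rng.normal(size=(4, 8, 6, 6))  # a small batch of 8-channel maps
scores = hrank_scores(fmaps)
```

A key empirical point from the paper is that these average ranks are stable across input batches, so a handful of images suffices to rank the filters.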
This repository contains a PyTorch implementation of the paper "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks" by Jonathan Frankle and Michael Carbin that can be easily adapted to any model/dataset.
Topics: python, deep-learning, pytorch, pruning, lottery, network-pruning, pytorch-implementation, iclr2019, lottery-ticket-hypothesis, winning-ticket
Updated Nov 3, 2019 - Python
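The lottery-ticket procedure alternates training, magnitude pruning, and rewinding the surviving weights to their original initialization — the surviving subnetwork plus its init is the "winning ticket". A compact numpy sketch of one prune-and-rewind round (hypothetical helper, not from the repo):

```python
import numpy as np

def imp_round(w_trained, w_init, mask, prune_frac):
    """One round of iterative magnitude pruning with rewinding:
    prune `prune_frac` of the *remaining* weights by magnitude,
    then rewind survivors to their initial values."""
    alive = np.abs(w_trained[mask])
    k = int(prune_frac * alive.size)
    if k > 0:
        thresh = np.sort(alive)[k - 1]
        mask = mask & (np.abs(w_trained) > thresh)
    return w_init * mask, mask  # rewound weights, updated keep-mask

rng = np.random.default_rng(6)
w_init = rng.normal(size=(256, 256))
mask = np.ones_like(w_init, dtype=bool)
# stand-in for weights after a round of training
w_trained = w_init + 0.1 * rng.normal(size=w_init.shape)
w, mask = imp_round(w_trained, w_init, mask, 0.2)
sparsity = 1.0 - mask.mean()
```

Repeating this round — train, prune 20% of survivors, rewind — compounds the sparsity geometrically (80%, 64%, 51.2%, ... of weights remaining).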
MobileNetV2-YOLOv5s pruning and distillation, with support for ncnn and TensorRT deployment. Ultra-light but with better performance!
Updated Feb 3, 2021 - Jupyter Notebook
PyTorch Model Compression.
Updated Dec 23, 2020 - Python
A curated list of awesome edge machine learning resources, including research papers, inference engines, challenges, books, meetups, and others.
Topics: iot, edge, awesome-list, pruning, quantization, auto-ml, edge-machine-learning, federated-learning, embedded-machine-learning, mobile-machine-learning, efficient-architectures, edge-deep-learning
Updated Apr 18, 2020 - Python
Filter Grafting for Deep Neural Networks (CVPR 2020).
Updated Jan 15, 2021 - Python
[NeurIPS 2020] Code release for the paper "Deep Multimodal Fusion by Channel Exchanging" (in PyTorch).
Topics: pruning, rgbd, semantic-segmentation, inpainting, image-translation, depth-estimation, refinenet, multimodal-fusion, neurips2020, nyudv2, channel-exchanging-network
Updated Jan 1, 2021 - Python
A demonstration of how to compress a neural network using pruning in TensorFlow.
Updated Aug 2, 2017 - Python
This repository contains notebooks that show the usage of TensorFlow Lite for quantizing deep neural networks.
Topics: pruning, tensorflow-lite, tensorflow-2, on-device-ml, model-quantization, model-optimization, quantization-aware-training, post-training-quantization
Updated Jan 9, 2021 - Jupyter Notebook