Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
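A rough sketch of what recipe-driven sparsification can look like, assuming SparseML's 1.x PyTorch API (`ScheduledModifierManager`); the recipe path, toy model, and step counts below are placeholders:

```python
import torch
from sparseml.pytorch.optim import ScheduledModifierManager

model = torch.nn.Linear(128, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Load a sparsification recipe (pruning schedule, target sparsity, ...)
# and wrap the optimizer so modifiers fire on the scheduled steps.
manager = ScheduledModifierManager.from_yaml("recipe.yaml")  # placeholder path
optimizer = manager.modify(model, optimizer, steps_per_epoch=100)

for _ in range(1000):  # ordinary training loop
    optimizer.zero_grad()
    model(torch.randn(32, 128)).sum().backward()
    optimizer.step()

manager.finalize(model)  # make the applied sparsification permanent
```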
A toolkit for Keras and TensorFlow to optimize ML models for deployment, including quantization and pruning.
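With the TensorFlow Model Optimization Toolkit, magnitude pruning is applied by wrapping a Keras model via the public `tfmot.sparsity.keras` API; a small runnable example (model, data, and schedule are illustrative):

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(10),
])

# Ramp sparsity from 0% to 80% over the first 1000 steps (magnitude pruning).
schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0, final_sparsity=0.8, begin_step=0, end_step=1000)
pruned = tfmot.sparsity.keras.prune_low_magnitude(model, pruning_schedule=schedule)

pruned.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
# UpdatePruningStep keeps the pruning masks in sync with the optimizer step.
pruned.fit(tf.random.normal((256, 20)),
           tf.random.uniform((256,), maxval=10, dtype=tf.int32),
           epochs=2,
           callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

final = tfmot.sparsity.keras.strip_pruning(pruned)  # remove wrappers for export
```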
PaddleSlim is an open-source library for deep model compression and architecture search.
Intel® Neural Compressor (formerly Intel® Low Precision Optimization Tool) aims to provide unified APIs for network compression techniques, such as low-precision quantization, sparsity, pruning, and knowledge distillation, across different deep learning frameworks in pursuit of optimal inference performance.
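A sketch of post-training INT8 quantization with Neural Compressor, assuming its 2.x API (which has changed across releases); the tiny model and random calibration data are stand-ins:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from neural_compressor import PostTrainingQuantConfig, quantization

fp32_model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU())
# Calibration data is only used to collect activation statistics.
calib = DataLoader(TensorDataset(torch.randn(64, 16), torch.zeros(64)),
                   batch_size=8)

# Default config: static INT8 post-training quantization.
q_model = quantization.fit(model=fp32_model,
                           conf=PostTrainingQuantConfig(),
                           calib_dataloader=calib)
q_model.save("./quantized_model")
```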
Network Slimming (Pytorch) (ICCV 2017)
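The network-slimming idea is to put an L1 penalty on BatchNorm scaling factors during training, then prune channels whose factors shrink toward zero. A hedged PyTorch sketch of the regularizer step (not the repo's exact code):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def add_bn_l1_subgradient(model: nn.Module, lam: float = 1e-4) -> None:
    """Add the subgradient of lam * ||gamma||_1 on every BatchNorm scale;
    call after loss.backward() and before optimizer.step(). Channels whose
    gamma is driven to ~0 can be pruned away after training."""
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.weight.grad.add_(lam * torch.sign(m.weight))
```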
Neural Network Compression Framework for enhanced OpenVINO™ inference
Caffe for Sparse and Low-rank Deep Neural Networks
Reference ImageNet implementation of SelecSLS CNN architecture proposed in the SIGGRAPH 2020 paper "XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera". The repository also includes code for pruning the model based on implicit sparsity emerging from adaptive gradient descent methods, as detailed in the CVPR 2019 paper "On i…
[CVPR 2021] Exploring Sparsity in Image Super-Resolution for Efficient Inference
Sparse Optimisation Research Code
Always sparse. Never dense. But never say never. A Sparse Training repository for the Adaptive Sparse Connectivity concept and its algorithmic instantiation, Sparse Evolutionary Training, which boosts Deep Learning scalability in several respects (e.g. memory and computational-time efficiency, representation and generalization power).
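The core SET step drops the weakest fraction of active connections and regrows the same number at random zero positions. A simplified PyTorch sketch for a single 2D weight matrix with a 0/1 mask (the fraction `zeta` and the tie handling are illustrative; a fuller implementation would also exclude just-dropped positions and reinitialize regrown weights):

```python
import torch

def set_update_mask(weight: torch.Tensor, mask: torch.Tensor,
                    zeta: float = 0.3) -> torch.Tensor:
    """One drop-and-grow step: drop the fraction `zeta` of active connections
    with the smallest magnitude, then regrow as many at random zero slots."""
    active = mask.bool()
    n_drop = int(zeta * int(active.sum()))
    if n_drop == 0:
        return mask
    threshold = weight[active].abs().kthvalue(n_drop).values
    drop = active & (weight.abs() <= threshold)  # ties may drop a few extra
    new_mask = mask.clone()
    new_mask[drop] = 0
    inactive = (new_mask == 0).nonzero(as_tuple=False)
    pick = inactive[torch.randperm(inactive.size(0))[: int(drop.sum())]]
    new_mask[pick[:, 0], pick[:, 1]] = 1
    return new_mask
```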
Sparse and structured neural attention mechanisms
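A common building block here is sparsemax, the Euclidean projection of scores onto the probability simplex, which (unlike softmax) can assign exact zeros. A minimal PyTorch version written from the published algorithm (Martins & Astudillo, 2016), not the repo's own implementation:

```python
import torch

def sparsemax(z: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """Sparsemax: entries outside the support come out exactly zero.
    Example: sparsemax(torch.tensor([2.0, 1.0, 0.1])) -> [1., 0., 0.]"""
    z_sorted, _ = torch.sort(z, dim=dim, descending=True)
    shape = [1] * z.dim()
    shape[dim] = -1
    k = torch.arange(1, z.size(dim) + 1,
                     device=z.device, dtype=z.dtype).view(shape)
    cssv = z_sorted.cumsum(dim) - 1              # cumulative sums minus 1
    support = k * z_sorted > cssv                # positions inside the support
    k_supp = support.sum(dim=dim, keepdim=True)  # support size |S(z)|
    tau = cssv.gather(dim, k_supp - 1) / k_supp.to(z.dtype)
    return torch.clamp(z - tau, min=0)
```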
Learning both Weights and Connections for Efficient Neural Networks https://arxiv.org/abs/1506.02626
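The paper's pipeline is train, prune small-magnitude weights, then retrain the surviving connections. A hedged sketch of the global magnitude-pruning step in PyTorch (the threshold choice and layer types are illustrative):

```python
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float = 0.9) -> dict:
    """Zero out the globally smallest-magnitude weights in Linear/Conv2d
    layers; re-apply the returned masks after every optimizer step while
    retraining so pruned connections stay at zero."""
    prunable = [m for m in model.modules()
                if isinstance(m, (nn.Linear, nn.Conv2d))]
    all_w = torch.cat([m.weight.detach().abs().flatten() for m in prunable])
    k = max(int(sparsity * all_w.numel()), 1)
    threshold = all_w.kthvalue(k).values          # global magnitude threshold
    masks = {}
    for name, m in model.named_modules():
        if isinstance(m, (nn.Linear, nn.Conv2d)):
            mask = (m.weight.detach().abs() > threshold).to(m.weight.dtype)
            m.weight.data.mul_(mask)
            masks[name] = mask
    return masks
```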
A research library for PyTorch-based neural network pruning, compression, and more.
OpenVINO Training Extensions Object Detection
Soft Threshold Weight Reparameterization for Learnable Sparsity
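STR replaces each weight with a soft-thresholded version whose threshold is itself learned, so per-layer sparsity emerges during training. A simplified PyTorch sketch of the reparameterization using the paper's g(s) = sigmoid(s) with one parameter per layer (initialization and regularization details omitted):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class STRLinear(nn.Linear):
    """Linear layer whose effective weight is soft-thresholded with a
    learnable threshold g(s) = sigmoid(s); weight decay on `s` lets each
    layer find its own sparsity level during training."""
    def __init__(self, in_features: int, out_features: int,
                 s_init: float = -10.0):
        super().__init__(in_features, out_features)
        # Very negative s_init => near-zero threshold => dense start.
        self.s = nn.Parameter(torch.tensor(s_init))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        thresh = torch.sigmoid(self.s)
        w = torch.sign(self.weight) * F.relu(self.weight.abs() - thresh)
        return F.linear(x, w, self.bias)
```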
Sparse Recurrent Neural Networks -- Pruning Connections and Hidden Sizes (TensorFlow)
Ordered Weighted L1 regularization for classification and regression in Python
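The OWL norm sorts the absolute coefficients in decreasing order and weights them with a non-increasing sequence, so the largest coefficients are penalized most. A small illustrative penalty function (the proximal-operator machinery a real solver needs is omitted):

```python
import torch

def owl_penalty(w: torch.Tensor, lambdas: torch.Tensor) -> torch.Tensor:
    """OWL norm: inner product of |w| sorted in decreasing order with a
    non-increasing weight vector `lambdas` of the same length. Constant
    lambdas recover plain L1; two-level lambdas give OSCAR-style grouping."""
    sorted_abs, _ = torch.sort(w.abs().flatten(), descending=True)
    return torch.dot(sorted_abs, lambdas)
```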
[ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, Decebal Constantin Mocanu, Mykola Pechenizkiy