21 public repositories matching the mixed-precision topic:
- Toolkit for efficient experimentation with Speech Recognition, Text2Speech and NLP (Python; updated May 11, 2021)
- [CVPR 2019, Oral] HAQ: Hardware-Aware Automated Quantization with Mixed Precision (Python; updated Feb 26, 2021)
- Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM. (Python; updated May 8, 2021)
- Simple Pose: Rethinking and Improving a Bottom-up Approach for Multi-Person Pose Estimation (Python; updated May 23, 2022)
- Training with FP16 weights in PyTorch (Python; updated Aug 7, 2019)
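The FP16-training entries in this list all rest on the same core trick: keep a float32 "master" copy of the weights, run the forward and backward passes in float16, and scale the loss so small gradients don't underflow float16's narrow range. A minimal NumPy sketch of one such update step for a one-parameter linear model (an illustration of the technique, not code from any repo above):

```python
import numpy as np

def fp16_sgd_step(master_w, x, y, lr=0.1, loss_scale=1024.0):
    """One mixed-precision SGD step for a 1-D linear model y ~ w * x.

    master_w is the float32 master weight; compute happens in float16,
    and the gradient is scaled up by loss_scale before being unscaled
    in float32, so tiny values survive float16.
    """
    w16 = master_w.astype(np.float16)  # fp16 copy used for compute
    x16 = x.astype(np.float16)
    y16 = y.astype(np.float16)

    pred = w16 * x16                   # fp16 forward pass
    err = pred - y16
    # d(MSE)/dw, computed in fp16 and scaled to avoid underflow
    grad16 = (2.0 * err * x16).mean() * np.float16(loss_scale)

    # Unscale in float32 and update the float32 master weight
    grad32 = grad16.astype(np.float32) / loss_scale
    return master_w - lr * grad32

w = np.float32(0.0)
x = np.array([1.0, 2.0, 3.0], dtype=np.float32)
y = 2.0 * x  # target weight is 2.0
for _ in range(200):
    w = fp16_sgd_step(w, x, y)
```

After training, `w` sits within float16 rounding distance of the true weight 2.0; without the master copy and loss scaling, late-training gradients this small can round to zero in pure float16.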
- High Resolution Style Transfer in PyTorch with Color Control and Mixed Precision 🎨 (Python; updated Aug 7, 2022)
- A tool for debugging and assessing floating point precision and reproducibility.
- Pretrained-model loading based on TensorFlow 1.x, supporting single-machine multi-GPU training, gradient accumulation, XLA acceleration, and mixed precision; flexible training, validation, and prediction. (Python; updated Aug 16, 2021)
- BitPack is a practical tool to efficiently save ultra-low-precision/mixed-precision quantized models. (Python; updated Feb 10, 2021)
- PyCon SG 2019 Tutorial: Optimizing TensorFlow Performance (Jupyter Notebook; updated Nov 20, 2019)
- Extremely simple and understandable GPT2 implementation with minor tweaks (Python; updated Dec 6, 2019)
- This repository contains notebooks showing how to perform mixed precision training in tf.keras 2.0. (Jupyter Notebook; updated Dec 15, 2019)
- An implementation of the HPL-AI Mixed-Precision Benchmark based on hpl-2.3
- Let's train CIFAR-10 in PyTorch with half precision! (Python; updated Oct 25, 2019)
- 🎯 Accumulated Gradients for TensorFlow 2 (Python; updated Jun 27, 2022)
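Gradient accumulation, which the entry above provides for TensorFlow 2, is a framework-independent idea: split a large batch into micro-batches, average their gradients, and apply a single weight update, so a memory-limited device can emulate a large batch size. A pure-Python sketch for a one-parameter linear model (an illustration, not code from the repo):

```python
# Gradient accumulation: process a large batch as several micro-batches,
# averaging gradients and applying one optimizer step at the end.
# Sketch for a 1-D linear model y ~ w * x with MSE loss.

def grad(w, xs, ys):
    """Mean MSE gradient d/dw of mean((w*x - y)^2) over one micro-batch."""
    return sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

def step_accumulated(w, micro_batches, lr=0.05):
    """One weight update using gradients accumulated over all micro-batches."""
    acc = 0.0
    for xs, ys in micro_batches:
        acc += grad(w, xs, ys) / len(micro_batches)  # running average
    return w - lr * acc

# Two micro-batches that together form one batch of four samples;
# the underlying data follows y = 3 * x.
data = [([1.0, 2.0], [3.0, 6.0]), ([3.0, 4.0], [9.0, 12.0])]
w = 0.0
for _ in range(100):
    w = step_accumulated(w, data)
```

Because the micro-batches here are equal-sized, the averaged accumulated gradient is exactly the gradient of the full four-sample batch, so the update sequence matches large-batch training step for step.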
- Code repository for the Korean edition of the book "Deep Learning with Python, 2nd Edition" (by the creator of Keras). (Jupyter Notebook; updated Jul 13, 2022)
- Hybrid-Precision Analysis on CG Solver (H.A.C.S.): merging single and double precision to generate a fast yet accurate CG solver
- PyTorch RNet implementation with distributed and mixed-precision training support. (Python; updated Apr 18, 2019)
- Deep learning solution for Cassava Leaf Disease Classification, a Kaggle Research Code Competition, using TensorFlow. (Jupyter Notebook; updated Apr 23, 2021)
- A Post-Training Quantizer for the Design of Mixed Low-Precision DNNs with Dynamic Fixed-Point Representation for Efficient Hardware Acceleration on Edge Devices
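Several entries above (HAQ, BitPack, the post-training quantizer) center on quantizing trained weights to low-precision integers. The common building block is symmetric linear quantization: pick a scale so the largest weight maps to the int8 limit, round, and dequantize by multiplying back. A minimal NumPy sketch of that building block (an illustration of the general idea, not the mixed-precision bit allocation those tools actually perform):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w ~ scale * q."""
    scale = np.abs(w).max() / 127.0                      # map |w|_max -> 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, np.float32(scale)

def dequantize(q, scale):
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, s = quantize_int8(w)     # int8 codes plus one float scale per tensor
w_hat = dequantize(q, s)    # reconstruction error is at most scale / 2
```

Mixed-precision quantizers generalize this by choosing a different bit width (and hence scale) per layer, trading accuracy against model size and hardware cost.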