#546 quantization.keras.quantize_model function runtime error with mobilenet v3 [bug] (opened Sep 15, 2020 by ejcgt)
#535 Script for training example models (MobileNetV1, ResNet, MobileNetV2) [feature request] (opened Sep 9, 2020 by debapriyamaji)
#530 Make TFMOT Clustering Model Files Available [feature request] [technique:clustering] (opened Sep 1, 2020 by alanchiao)
#528 What's the difference between tfmot and the 'Graph Transform Tool'? [bug] (opened Aug 24, 2020 by ZongqiangZhang)
#525 How to perform progressive quantization aware training using this framework? [bug] (opened Aug 20, 2020 by joyalbin)
#524 Mismatch in number of weights when loading quantized model (activation layer) [bug] (opened Aug 20, 2020 by Lotte1990)
#523 CONV2D, DENSE layer with default QAT can't generate correct int8 quantized nodes in TFLite model [bug] (opened Aug 19, 2020 by psunn)
#513 Quantization-Aware-Training with Sparsity and Clustering Preservation [feature request] [technique:all] (opened Aug 10, 2020 by LesBellArm)
#471 Optimizing models that use TensorFlow Addons activations, layers, etc. [feature request] [technique:all] (opened Jul 21, 2020 by willbattel)
#456 Can't deserialize PolynomialDecay pruning schedule (and potentially others) [bug] [technique:pruning] (opened Jul 12, 2020 by alessandroaimar)
#450 How to use quantization to improve inference performance on tensorflow-serving? [bug] [technique:qat] (opened Jul 2, 2020 by ZhiyiLan)
#448 Model Optimization unable to get TensorFlow version [bug] [technique:all] (opened Jul 1, 2020 by somyamohanty)
#431 Spurious Dequantize/Cast/Quantize sequence of ops at the end of a QAT TFLite model [bug] [technique:qat] (opened Jun 18, 2020 by corvoysier)
#412 Getting an error when creating the .tflite file [bug] [technique:qat] (opened May 30, 2020 by marjanemd)
#409 sparsity.prune_low_magnitude fails with mixed precision policy mixed_float16 [feature request] [technique:pruning] (opened May 28, 2020 by dd1923)
#403 Behaviour of stripped models and sequence masking [bug] [technique:pruning] (opened May 21, 2020 by captainproton1971)
#381 H5 to Pb Conversion with Fake Quantization Node Fails [bug] [technique:qat] (opened May 11, 2020 by anidh)
#377 QAT (quantization aware training): support quantizing models recursively [feature request] [technique:qat] (opened May 4, 2020 by CRosero)