Highlights
- Arctic Code Vault Contributor
1,087 contributions in the last year
Activity overview
Contribution activity
September 2020
Created a pull request in pytorch/pytorch that received 8 comments
[quant][graphmode][fx] Support quantization for standalone module
Stack from ghstack: #45292 [quant][graphmode][fx] Merge all quantization mode #45343 [quant] Use PlaceholderObserver as default dynamic quant obse…
+338 −28 • 8 comments
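This PR sits in the FX graph mode quantization stack. As context for the workflow it extends, here is a minimal sketch of post-training static quantization with the FX API, assuming the `prepare_fx`/`convert_fx` signatures from the PyTorch ~1.8 era (a `qconfig_dict` keyed by module name; later releases take a `QConfigMapping` plus `example_inputs`). `Float32Model` and `calibration_loader` are hypothetical placeholders, and standalone-module configuration itself is omitted.

```python
# Minimal sketch of FX graph mode post-training static quantization.
# Assumes the PyTorch ~1.8-era torch.quantization.quantize_fx API;
# Float32Model and calibration_loader are hypothetical placeholders.
import torch
from torch.quantization import get_default_qconfig
from torch.quantization.quantize_fx import prepare_fx, convert_fx

model = Float32Model().eval()
qconfig_dict = {"": get_default_qconfig("fbgemm")}  # one global qconfig

# Symbolically trace the model and insert observers.
prepared = prepare_fx(model, qconfig_dict)

# Calibrate with representative data so observers record activation ranges.
with torch.no_grad():
    for batch in calibration_loader:
        prepared(batch)

# Replace observed modules/ops with quantized implementations.
quantized = convert_fx(prepared)
```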
- [quant] Use PlaceholderObserver as default dynamic quant observer
- [quant] Remove unused qconfig argument in qat modules
- [quant][graphmode][fx] Merge all quantization mode
- [fx] GraphModule copy top level attributes from root
- [quant][graphmode][fx] qconfig_dict support more types of configurations
- [quant][eagermode] Custom module support
- [quant][graphmode][fx] Custom module support
- [quant][graphmode][jit] Some fixes
- [quant][graphmode][fx] Support fp16 dynamic quantization for linear
- [quant] Support clone for per channel affine quantized tensor
- [quant][graphmode][fx][fix] Remove qconfig in convert
- [quant][graphmode][fx][fix] Support None qconfig in convert
- [quant][graphmode][fx][fix] Support dictionary output
- [quant][graphmode][jit][api] Expose preserved_attrs from finalize to convert_jit
- [quant][graphmode][fx] Support quantize per channel in all cases
- [quant][eagermode][refactor] Add set/get method for quantization and fusion mappings
- [quant][graphmode][fx][api] Call fuse in prepare
- [quant][graphmode][fx] Support inplace option
- [quant][graphmode][fx] Support dynamic quantization without calibration
- [quant][graphmode][fx] Support quantization for standalone module
- SyncBN: preserve qconfig if it exists
- quant bn/gn/in: move scale and zp to buffers
- quant docs: document how to customize qconfigs in eager mode
- quant docs: add reduce_range explanation to top level doc
- Clear shape information before finalizing graph-mode quantization
- Quantization: combine previous summary with new summary
- Quantization: add API summary section
- [quant][fx][bug] Fix error in convert step for QAT
- [quant] Fix ConvTranspose mapping
- [quant] Refactoring the mappings files
- [quant][qat] Ensure fake_quant and observer can be disabled on scriptmodule
- [quant][fx] Add node name as prefix to observer module name
- [quant] creating quint4x2 dtype for quantized tensors
- [quant][qat] Ensure observers and fq modules are scriptable
- qat conv_fused.py: one more patch for forward compatibility
- [quant] Support clone for per channel affine quantized tensor
- [quant] Fixing the output shape for the linear
- removing conv filters from conv pattern matching
- fx quant: clarify state in Quantizer object
- Fix replaceAtenConvolution for BC.
- fx quant: add docblocks to _find_matches and _find_quants
- Ensure that LSTM and RNNCells run with reduced range for activations
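Several of the commits above (fp16 dynamic quantization for linear, reduced-range activations for LSTM and RNNCells) touch dynamic quantization. For context, a minimal sketch using the eager-mode `torch.quantization.quantize_dynamic` entry point; the model below is an illustrative placeholder, not code from these commits.

```python
# Minimal sketch of eager-mode dynamic quantization: weights are quantized
# ahead of time, activations are quantized on the fly at inference.
import torch
import torch.nn as nn

class TinyModel(nn.Module):  # illustrative placeholder
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(128, 256)
        self.fc = nn.Linear(256, 10)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.fc(out)

model = TinyModel().eval()

# Swap supported modules (nn.Linear, nn.LSTM, ...) for dynamically
# quantized versions with qint8 weights.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.LSTM, nn.Linear}, dtype=torch.qint8
)
print(quantized)
```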
Created an issue in pytorch/pytorch that received 2 comments
for loop can't be symbolically traced
def get_additional_output_info(
    self, encoded_feature: Dict[str, torch.Tensor]
) -> Dict[str, torch.Tensor]:
    additional_info: Dict[str, torch.Tensor…
2 comments
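The issue above concerns torch.fx symbolic tracing, which runs the model's forward with Proxy objects in place of real inputs; a Python for loop that iterates over such a Proxy cannot be captured in the traced graph. A minimal sketch of that class of failure, using a hypothetical module rather than the exact code from the issue:

```python
import torch
import torch.fx

class IteratesOverInput(torch.nn.Module):  # hypothetical reproduction
    def forward(self, encoded_feature):
        additional_info = {}
        # During symbolic tracing `encoded_feature` is a Proxy, and Proxy
        # objects cannot be iterated, so this loop raises a TraceError.
        for name in encoded_feature:
            additional_info[name] = encoded_feature[name] + 1
        return additional_info

try:
    torch.fx.symbolic_trace(IteratesOverInput())
except Exception as err:  # torch.fx.proxy.TraceError in current releases
    print(f"tracing failed: {err}")
```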