Highlights
- Arctic Code Vault Contributor
812 contributions in the last year
Activity overview
Contributed to Tencent/TurboTransformers, feifeibear/test-ci, TurboNLP/Translate-Demo, and 5 other repositories
Contribution activity
August 2020
feifeibear has no activity yet for this period.
July 2020
Created a pull request in Tencent/TurboTransformers that received 2 comments
- Jiaruifang/fix onnxrt docker
- Jiaruifang/fix onnxrt return bug
- Jiaruifang/albert example
- Jiaruifang/onnxrt gpt
- Jiaruifang/onnxrt as turbo
- Jiaruifang/cpu variable len benchmark
- Rewrite cpu dev dockerfile. Solve mkl slow problems.
- Jiaruifang/random benchmark
- Jiaruifang/onnx benchmark opt
- Jiaruifang/fix allocator bug
- uses cub
- Develop
- More information in bert_example.py
- readme dockerhub image to lastest
- Jiaruifang/gpu docker v0.3.2
- Develop
- Jiaruifang/v0.3.1 polish docker
- Develop
- Jiaruifang/multi model benchmark
- Jiaruifang/reberta tunr
- Jiaruifang/roberta
- Jiaruifang/albert model
- Jiaruifang/polish develop
- Jiaruifang/polish albert
- Jiaruifang/albert
- Fix/bert example bug
- fix bert model cpp output
- Add Qbertmodel
- Develop: Add QBertLayer
- clean up and fix typos
- Develop: Add QBertIntermediate and QBertOutput (#103)
- Fix gpu_benchmark & embedding kernel profile
- Add a preliminary test script that demonstrates a quantized bert layer
- move embedding to kernels
- add unittest for AddBias and fix some typos
- fix typo
Created an issue in Tencent/TurboTransformers that received 4 comments
ONNXRT can not be applied in Albert
/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py:738: UserWarning: ONNX export failed on ATen operator einsum because torch.onnx.symbolic…
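The export failure described in that issue can be reproduced with a minimal sketch. The module, tensor shapes, and einsum equation below are hypothetical stand-ins for ALBERT's attention; the point is that `torch.onnx.export` on a torch/opset version without a symbolic for ATen `einsum` emits the quoted `UserWarning` (newer opsets that support `Einsum` export it cleanly):

```python
# Minimal sketch of the ONNX-export-on-einsum problem reported in the issue.
# EinsumAttention is a hypothetical toy layer, not TurboTransformers code.
import warnings

import torch


class EinsumAttention(torch.nn.Module):
    """Toy layer that computes attention-style scores via torch.einsum."""

    def forward(self, q, k):
        # Batched dot-product scores: (B, L, D) x (B, M, D) -> (B, L, M).
        return torch.einsum("bld,bmd->blm", q, k)


q = torch.randn(1, 4, 8)
k = torch.randn(1, 4, 8)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    try:
        # On older torch versions / opsets this raises the UserWarning
        # "ONNX export failed on ATen operator einsum ..." and may fail.
        torch.onnx.export(EinsumAttention(), (q, k), "einsum.onnx")
    except Exception as exc:
        print("export failed:", exc)
    for w in caught:
        print(w.message)
```

A common workaround at the time was to rewrite the `einsum` call as `torch.matmul(q, k.transpose(1, 2))`, which exports on all opsets.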