Hugging Face
- Chinatown, New York
- Lysand.re
1,681 contributions in the last year
Activity overview
Contributed to huggingface/transformers, ProjectBlackFalcon/Roks, huggingface/swift-coreml-transformers, and 5 other repositories
Contribution activity
May 2020
Created a pull request in huggingface/transformers that received 2 comments
- [MarianTokenizer] implement save_vocabulary and other common methods
- [cleanup] test_tokenization_common.py
- [Community notebooks] General notebooks
- [TF T5] More coherent naming for inputs
- [T5 Conf] rename docstring to actual argument names
- [T5 fp16] Fix fp16 in T5
- [test_pipelines] Mark tests > 10s @slow, small speedups
- Adding optimizations block from ONNXRuntime.
- [TF generate] Fix issue for batch output generation of different output length.
- rerun notebook 02-transformers
- Distributed eval: SequentialDistributedSampler + gather all results
- Allow for None gradients in GradientAccumulator.
- Conversion script to export transformers models to ONNX IR.
- Fix nn.DataParallel compatibility in PyTorch 1.5
- Fix: unpin flake8 and fix cs errors
- [Docs, Notebook] Include generation pipeline
- Question Answering for TF trainer
- [Marian Fixes] prevent predicting pad_token_id before softmax, support language codes, name multilingual models
- added functionality for electra classification head
- Simplify cache vars and allow for TRANSFORMERS_CACHE env
- Allow gpt2 to be exported to valid ONNX
- [tests] make pipelines tests faster with smaller models
- Add migrating from `pytorch-transformers`
- Align sentiment-analysis' tokenizer (currently cased) to the model (uncased)
- Tokenizer.batch_decode convenience method
5 contributions in private repositories
May 6 – May 13