Popular repositories
- Forked from fastai/fastai: The fast.ai deep learning library, lessons, and tutorials
2,965 contributions in the last year
Contribution activity
January 2021
Created 51 commits in 4 repositories
Created a pull request in huggingface/transformers that received 6 comments
New run_seq2seq script
What does this PR do?
This PR adds a new version of finetune_trainer using datasets and completely self-contained (not using anything in the utils …
+650 −1 • 6 comments
Opened 30 other pull requests in 2 repositories
huggingface/transformers: 2 open, 25 merged, 1 closed
- Add `report_to` training arguments to control the integrations used
- Fixes to run_seq2seq and instructions
- Fix memory regression in Seq2Seq example
- Fix WAND_DISABLED test
- Fix model templates and use less than 119 chars
- Fix Funnel Transformer conversion script
- Add a community page to the docs
- Restrain tokenizer.model_max_length default
- Use datasets squad_v2 metric in run_qa
- Fix GPT conversion script
- Fix old Seq2SeqTrainer
- Fix imports in conversion scripts
- Fix Trainer with a parallel model
- Upstream (and rename) sortish sampler
- Switch metrics in run_ner to datasets
- Fix data parallelism in Trainer
- Use the right version of tokenizers
- Use the right version of tokenizers for torchhub
- Refactor `prepare_seq2seq_batch`
- Make doc styler behave properly on Windows
- Make doc styler detect lists on rst and better support for Windows
- Fast imports part 3
- Transformers fast import part 2
- Fast transformers import part 1
- Remove nested lxmert
Some pull requests not shown.
huggingface/datasets: 2 merged
Reviewed 96 pull requests in 4 repositories
huggingface/transformers: 92 pull requests
- Fix some TF slow tests
- Fix TF s2s models
- Fix memory regression in Seq2Seq example
- [trainer] no --deepspeed and --sharded_ddp together
- [PR/Issue templates] normalize, group, sort + add myself for deepspeed
- Fix mixed precision in TF models
- Fix TFTrainer prediction output
- Add head_mask/decoder_head_mask for TF BART models
- [deepspeed] fix the backward for deepspeed
- Fix Trainer and Args to mention AdamW, not Adam.
- Add notebook
- Add a community page to the docs
- Add separated decoder_head_mask for T5 Models
- Add t5 convert to transformers-cli
- Fix TF Flaubert and XLM
- Update integrations.py
- Renamed `nlp` variables #9455
- [WIP] Add new model docs
- Update `past_key_values` in GPT-2
- [deepspeed] --gradient_accumulation_steps fix
- Remove duplicated extras["retrieval"]
- Ignore lm_head decoder bias warning
- Fix label datatype in TF Trainer
- Add head_mask/decoder_head_mask for BART
- [TF Led] Fix wrong decoder attention mask behavior
Some pull request reviews not shown.