Highlights
- Arctic Code Vault Contributor
2,861 contributions in the last year
Activity overview
Contributed to PyTorchLightning/pytorch-lightning, nicolas-chaulet/torch-points3d, tchaton/lightning-geometric and 5 other repositories
Contribution activity
March 2021
Created 31 commits in 3 repositories
Created a pull request in PyTorchLightning/pytorch-lightning that received 9 comments
[refactor] Move save_function to accelerator 1/n [DeepSpeed]
What does this PR do? This PR refactors save_checkpoint so that responsibility flows trainer → checkpoint_connector → accelerator → training type plugin. Be…
+136 −123 • 9 comments
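The delegation chain named in the PR description can be sketched as plain Python. This is a hypothetical illustration of the pattern only; the class and method names below are illustrative, not the actual pytorch-lightning API.

```python
# Sketch of the chain: trainer -> checkpoint_connector -> accelerator
# -> training type plugin. All names here are illustrative.

class TrainingTypePlugin:
    def save_checkpoint(self, checkpoint, filepath):
        # The plugin owns the actual I/O, so a distributed strategy
        # (e.g. DeepSpeed with sharded states) can override just this step.
        self.last_saved = (checkpoint, filepath)

class Accelerator:
    def __init__(self, training_type_plugin):
        self.training_type_plugin = training_type_plugin

    def save_checkpoint(self, checkpoint, filepath):
        # Pure delegation: the accelerator forwards to its plugin.
        self.training_type_plugin.save_checkpoint(checkpoint, filepath)

class CheckpointConnector:
    def __init__(self, trainer):
        self.trainer = trainer

    def save_checkpoint(self, filepath):
        # The connector assembles the checkpoint dict from trainer state,
        # then hands it to the accelerator rather than writing it itself.
        checkpoint = {"state_dict": {}, "epoch": 0}
        self.trainer.accelerator.save_checkpoint(checkpoint, filepath)

class Trainer:
    def __init__(self):
        self.accelerator = Accelerator(TrainingTypePlugin())
        self.checkpoint_connector = CheckpointConnector(self)

    def save_checkpoint(self, filepath):
        self.checkpoint_connector.save_checkpoint(filepath)

trainer = Trainer()
trainer.save_checkpoint("model.ckpt")
```

The point of the layering is that only the innermost plugin touches storage, so strategy-specific checkpoint formats need to override a single method.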
Opened 23 other pull requests in 2 repositories
PyTorchLightning/pytorch-lightning: 18 merged, 1 open, 3 closed
- [bugfix] Resolve schedule step bug for PyTorch Profiler
- [refactor] Add setup to profilers + _run_stage_setup to trainer 2/5
- Add PyTorch 1.8 Profiler 5/5
- [doc] Update Dict Train Loader doc.
- Flash predict step
- [doc] Add Zero Grad set_to_none=True trick
- Feat/ds update
- [changelog] Update Changelog on release v1.2.3
- [release] Weekly Patch Release v.1.2.3 [full merge, no squash]
- [release] Weekly Patch Release v.1.2.3 [full merge, no squash]
- [warning] Add warning when values are not being reduced
- [bug] All_gather support tensor on cpu
- [bug] Update broadcast + reduce decision [ModelCheckpoint]
- [bugfix] Resolve hanging in DDP
- [bugfix] Resolve memory leak for evaluation
- [bugfix] Perform reduction for dict in training_step and DP
- [bugfix] Check LightningOptimizer doesn't delete optimizer hooks
- [bugfix] TPU + all_gather + SingleTPU shouldn't call xm.all_gather
- [doc] Improve Manual Optimization Example
- [CI] Pytorch 1.8 testing
- [bugfix] TPU test hangs to barrier on 1 process
- [bug] Fix Pytorch profiler with emit_nvtx
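One of the items above documents the zero-grad set_to_none=True trick. As a minimal sketch of what that PyTorch optimizer option does (the model and optimizer below are arbitrary examples): instead of filling the .grad tensors with zeros, it drops them entirely, skipping a memset and letting the next backward() allocate fresh gradients.

```python
import torch

model = torch.nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

loss = model(torch.randn(8, 4)).sum()
loss.backward()  # populates .grad tensors
assert all(p.grad is not None for p in model.parameters())

# Free the gradients instead of zeroing them in place.
opt.zero_grad(set_to_none=True)
assert all(p.grad is None for p in model.parameters())
```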
PyTorchLightning/lightning-flash: 1 merged
Reviewed 149 pull requests in 5 repositories
PyTorchLightning/pytorch-lightning 127 pull requests
- Do not add return dict items to callback_metrics
- Profilers - do not write when there's no summary
- Add model parallel setup hook
- [bugfix] Resolve schedule step bug for PyTorch Profiler
- Fix checkpoint callback & Trainer.test(_) issue for TPUs
- add copyr
- fix: update example autoencoder.py to reflect args
- MetricsHolder clean-up + typing
- Weekly Patch Release v.1.2.5 [full merge, no squash]
- Add PyTorch 1.8 Profiler 5/5
- Docs/robots
- Fix disabled grads after call to predict
- Refactor PyTorch profiler 4/5
- Flash predict step
- Refactor base profilers 3/5
- [refactor] Add setup to profilers + _run_stage_setup to trainer 2/5
- Simplify deprecations
- hotfix: mock examples
- Allow training type plugin to delay optimizer creation (FSDP 2/n)
- Fix csv extension check
- [Horovod] Fix Reduce for Horovod
- Don't update checkpoint in case of using Hydra
- Feature/double precision
- [1/N] Define dataclasses for progress tracking
- Clean utilities/argparse and add missing tests
- Some pull request reviews not shown.
PyTorchLightning/metrics 11 pull requests
PyTorchLightning/lightning-flash 9 pull requests
- DataPipeline PoC
- refactoring setup
- [refactor] DataPipeline 1/n
- expose scheduler to Task
- Prototype modifications made for RANZCR kaggle challenge
- switch to torchmetrics
- add multilabel option
- Add download support for tar.gz & don't download data if exists
- Update new models from new torchvision version
facebookresearch/fairscale 1 pull request
PyTorchLightning/lightning-bolts 1 pull request
Created an issue in PyTorchLightning/pytorch-lightning that received 4 comments
trainer.training_type_plugin.broadcast doesn't seem to work properly
Bugs with code …
4 comments
Opened 5 other issues in 3 repositories
PyTorchLightning/pytorch-lightning: 3 open
pytorch/xla: 1 open
microsoft/DeepSpeed: 1 closed
3 contributions in private repositories
Mar 23 – Mar 25