Pinned
- pytorch/pytorch (Public): Tensors and Dynamic neural networks in Python with strong GPU acceleration
717 contributions in the last year
Activity overview
Contribution activity
February 2023
Created 9 commits in 2 repositories
Created a pull request in pytorch/pytorch that received 12 comments
Opened 11 other pull requests in 2 repositories
pytorch/pytorch (5 open, 5 closed)
- [not for land] allgather experiment
- [dtensor] support creating DTensor in submesh
- [dtensor][BE] remove redundant tests
- [tp] minor update to TP docs
- [dtensor] add checkpointing example
- [dtensor] remove typing hack of DeviceMesh
- [reland] add numpy typing plugin to mypy config
- [dtensor] group public APIs together
- [dtensor] update readme for prototype release
- [dtensor] add split_with_sizes op
pytorch/tau (1 merged)
Reviewed 19 pull requests in 2 repositories
pytorch/pytorch (17 pull requests)
- [WIP][Functional Collectives] Migrate DeviceMesh::all_reduce to use functional all_reduce.
- [PT-D][Sequence Parallelism] Enable DTensor based Naive sequence parallelism
- Fix c10d regression during cleanup.
- Inductor support for aten::all_reduce
- [BE][1/N] Add deprecate msg to Sharded Partial and Replicate Tensor
- [PTD][DCP] Add 1D DTensor based DCP
- [SPMD] Pull the minimal working distribute API and SPMD module to PyTorch
- [PTD] Introduce tracing friendly collectives.
- [PT-D][BE] Update 2D parallelism API name and docs
- [PT-D][Tensor parallelism] Add documentations for TP
- [WIP] all-to-all reshuffling in DTensor redistribute
- [DDP] Move gradient setting to None to python
- [small] multithreaded-pg guard attr
- add is_complex op for ShardedTensor
- [DTensor] implement dist_split as a sharding prop rule
- [DTensor][fix] MultiThreadedTestCase misses _tls object and it won't reflect in CI
- [DTensor] fix DTensorSpec dim_map description