
Commit

Updating MPU docs (#92)
Shaden Smith authored Feb 20, 2020
1 parent bca2305 commit 2abef1e
Showing 2 changed files with 3 additions and 2 deletions.
deepspeed/__init__.py (2 changes: 1 addition & 1 deletion)
@@ -54,7 +54,7 @@ def initialize(args,
             step(), state_dict(), and load_state_dict() methods
     mpu: Optional: A model parallelism unit object that implements
-        get_model/data_parallel_group/rank/size()
+        get_{model,data}_parallel_{rank,group,world_size}()
    dist_init_required: Optional: Initializes torch.distributed
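For readers wiring up their own model parallelism, a minimal sketch of an object satisfying this interface is shown below. It assumes torch.distributed is already initialized and that the model- and data-parallel process groups are created by the caller; the class name SimpleMPU and its constructor are hypothetical, not part of DeepSpeed.

```python
import torch.distributed as dist


class SimpleMPU:
    """Hypothetical mpu object exposing the accessors deepspeed.initialize() expects."""

    def __init__(self, model_parallel_group, data_parallel_group):
        # The process groups are assumed to be built by the caller,
        # e.g. with dist.new_group(ranks=...).
        self._mp_group = model_parallel_group
        self._dp_group = data_parallel_group

    # Model-parallel accessors
    def get_model_parallel_rank(self):
        return dist.get_rank(group=self._mp_group)

    def get_model_parallel_group(self):
        return self._mp_group

    def get_model_parallel_world_size(self):
        return dist.get_world_size(group=self._mp_group)

    # Data-parallel accessors
    def get_data_parallel_rank(self):
        return dist.get_rank(group=self._dp_group)

    def get_data_parallel_group(self):
        return self._dp_group

    def get_data_parallel_world_size(self):
        return dist.get_world_size(group=self._dp_group)
```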
docs/features.md (3 changes: 2 additions & 1 deletion)
@@ -68,10 +68,11 @@ mpu.get_model_parallel_rank()
 mpu.get_model_parallel_group()
 mpu.get_model_parallel_world_size()

-mpu.get_data_parallel_rank/group/world_size()
+mpu.get_data_parallel_rank()
+mpu.get_data_parallel_group()
+mpu.get_data_parallel_world_size()
 ```

### Integration with Megatron-LM
DeepSpeed is fully compatible with [Megatron](https://github.com/NVIDIA/Megatron-LM).
Please see the [Megatron-LM tutorial](tutorials/MegatronGPT2Tutorial.md) for details.
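As a usage sketch under the same assumptions (`args`, `net`, and the two process groups come from the surrounding training script; `SimpleMPU` is the hypothetical object above), the mpu object is handed to `deepspeed.initialize()` through its `mpu` argument; with Megatron-LM, the framework's own `mpu` module exposes the same accessors and can be passed directly.

```python
import deepspeed

# Hypothetical setup: `args`, `net`, `model_parallel_group`, and
# `data_parallel_group` are assumed to be defined by the training script.
mpu = SimpleMPU(model_parallel_group, data_parallel_group)

model_engine, optimizer, _, _ = deepspeed.initialize(
    args=args,                          # argument namespace, including the DeepSpeed config
    model=net,
    model_parameters=net.parameters(),
    mpu=mpu,                            # DeepSpeed takes its data-parallel group from this object
)
```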
