
Commit 2abef1e

Author: Shaden Smith

Updating MPU docs (deepspeedai#92)

1 parent: bca2305

File tree: 2 files changed (+3, -2 lines)


deepspeed/__init__.py (1 addition, 1 deletion)

@@ -54,7 +54,7 @@ def initialize(args,
         step(), state_dict(), and load_state_dict() methods
 
     mpu: Optional: A model parallelism unit object that implements
-        get_model/data_parallel_group/rank/size()
+        get_{model,data}_parallel_{rank,group,world_size}()
 
     dist_init_required: Optional: Initializes torch.distributed
 
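
The renamed accessors in the docstring above spell out the full interface an `mpu` object is expected to provide. Below is a minimal, hypothetical stub (not part of this commit) that backs those six methods with `torch.distributed` process groups, assuming a plain data-parallel job where each rank forms its own trivial model-parallel group:

```python
import torch.distributed as dist


class TrivialMPU:
    """Hypothetical minimal 'model parallelism unit' sketch: every rank is its
    own model-parallel group, and all ranks together form the data-parallel
    group. Assumes dist.init_process_group() has already been called."""

    def __init__(self):
        rank = dist.get_rank()
        # new_group() must be entered by every process for every group created,
        # so build one single-rank model-parallel group per rank and keep ours.
        for r in range(dist.get_world_size()):
            group = dist.new_group(ranks=[r])
            if r == rank:
                self._model_group = group
        # The default (world) group doubles as the data-parallel group here.
        self._data_group = dist.group.WORLD

    # Model-parallel accessors named in the docstring above.
    def get_model_parallel_rank(self):
        return 0  # only one rank in each model-parallel group

    def get_model_parallel_group(self):
        return self._model_group

    def get_model_parallel_world_size(self):
        return 1

    # Data-parallel accessors.
    def get_data_parallel_rank(self):
        return dist.get_rank()

    def get_data_parallel_group(self):
        return self._data_group

    def get_data_parallel_world_size(self):
        return dist.get_world_size()
```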

docs/features.md (2 additions, 1 deletion)

@@ -68,10 +68,11 @@ mpu.get_model_parallel_rank()
 mpu.get_model_parallel_group()
 mpu.get_model_parallel_world_size()
 
-mpu.get_data_parallel_rank/group/world_size()
+mpu.get_data_parallel_rank()
 mpu.get_data_parallel_group()
 mpu.get_data_parallel_world_size()
 ```
+
 ### Integration with Megatron-LM
 DeepSpeed is fully compatible with [Megatron](https://github.com/NVIDIA/Megatron-LM).
 Please see the [Megatron-LM tutorial](tutorials/MegatronGPT2Tutorial.md) for details.
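
A hedged usage sketch of the same interface as listed in features.md: query the accessors and hand the object to deepspeed.initialize, whose `mpu` argument is documented in the first diff above. Here `net`, `cmd_args`, and `TrivialMPU` are illustrative placeholders rather than anything defined by this commit:

```python
import deepspeed

# Illustrative placeholders: `net` is a torch.nn.Module and `cmd_args` carries
# the usual DeepSpeed arguments (e.g. a deepspeed_config path); TrivialMPU is
# the hypothetical stub sketched earlier.
mpu = TrivialMPU()
print(mpu.get_model_parallel_rank(), mpu.get_model_parallel_world_size())
print(mpu.get_data_parallel_rank(), mpu.get_data_parallel_world_size())

model_engine, optimizer, _, _ = deepspeed.initialize(
    args=cmd_args,
    model=net,
    model_parameters=net.parameters(),
    mpu=mpu,  # DeepSpeed derives its parallel layout from these accessors
)
```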
