
add nccl logical 1d P to S(i) #8361

Merged: 10 commits into master from dev_add_reduce_scatter_noncontiguous on Jun 4, 2022

Conversation

@guo-ran (Contributor) commented Jun 2, 2022

} else if (CanSplitAtDim(0)
&& (src_sbp.has_partial_sum_parallel() && dst_sbp.has_split_parallel())
&& (dst_sbp.split_parallel().axis() > 0)) {
// P->S(0) : ReduceScatter Noncontinuous
Contributor commented:

P->S(1) (the code comment above still says P->S(0), but this branch requires the dst split axis to be > 0)
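
For context, the general recipe behind a "noncontinuous" ReduceScatter (P->S(i) with i > 0) is to view the destination split axis as (num_ranks, dim / num_ranks) and transpose the num_ranks factor to the front, so that block r of the send buffer is exactly rank r's output slice; a plain ncclReduceScatter then writes each rank's slice directly. The helper below is only an illustrative sketch with assumed names (BuildReduceScatterNoncontinuousPerm, in_dims, num_ranks), not the PR's kernel code.

#include <cstdint>
#include <vector>

// Sketch: build the reshaped dims and permutation that make each rank's destination
// slice a contiguous block of the send buffer for ReduceScatter into S(axis), axis > 0.
void BuildReduceScatterNoncontinuousPerm(const std::vector<int64_t>& in_dims, int64_t axis,
                                         int64_t num_ranks,
                                         std::vector<int64_t>* transpose_dims,
                                         std::vector<int32_t>* perm) {
  *transpose_dims = in_dims;
  transpose_dims->at(axis) = in_dims.at(axis) / num_ranks;            // d_axis -> d_axis / n
  transpose_dims->insert(transpose_dims->begin() + axis, num_ranks);  // ... (n, d_axis / n) ...
  perm->clear();
  perm->push_back(static_cast<int32_t>(axis));                        // move the n factor to the front
  for (int32_t i = 0; i < static_cast<int32_t>(transpose_dims->size()); ++i) {
    if (i != axis) { perm->push_back(i); }
  }
  // Transposing in -> tmp_buffer with (transpose_dims, perm) makes block r of tmp_buffer
  // rank r's slice along `axis` in the original dim order, ready for ncclReduceScatter.
}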

REGISTER_REDUCE_SCATTER_NONCONTINUOUS_KERNEL(int8_t)
REGISTER_REDUCE_SCATTER_NONCONTINUOUS_KERNEL(int32_t)
REGISTER_REDUCE_SCATTER_NONCONTINUOUS_KERNEL(int64_t)
REGISTER_REDUCE_SCATTER_NONCONTINUOUS_KERNEL(float)
Contributor commented:

The bool type also needs to be supported.
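
A one-line follow-up would presumably cover this, assuming the macro and the NCCL dtype mapping accept bool (sketch, not verified against the final commit):

REGISTER_REDUCE_SCATTER_NONCONTINUOUS_KERNEL(bool)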

transpose_in_dim_vec.data(), in->dptr(), perm.data(), tmp_buffer->mut_dptr());

OF_NCCL_CHECK(ncclReduceScatter(tmp_buffer->dptr(), out->mut_dptr(), out->shape().elem_cnt(),
GetNcclDataType(in->data_type()), ncclRedOp_t::ncclSum,
Contributor commented:

When the data type is bool, use ncclMax.
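
The reasoning: NCCL has no dedicated bool type, so bool buffers are generally exchanged as 8-bit integers, and summing them across ranks can leave values outside {0, 1}; ncclMax acts as a logical OR and keeps the result well-formed. A minimal sketch of the op selection, with an assumed helper name (ReduceOpFor and is_bool_dtype are not from the PR):

#include <nccl.h>

// Sketch: pick the NCCL reduction op by element type; for bool, max == logical OR.
inline ncclRedOp_t ReduceOpFor(bool is_bool_dtype) {
  return is_bool_dtype ? ncclRedOp_t::ncclMax : ncclRedOp_t::ncclSum;
}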

@chengtbf added the "graph" (graph mode) label on Jun 2, 2022
github-actions bot commented Jun 2, 2022

Speed stats:

github-actions bot commented Jun 3, 2022

View latest API docs preview at: https://staging.oneflow.info/docs/Oneflow-Inc/oneflow/pr/8361/

@@ -218,6 +218,20 @@ bool TryBuildNcclBy1DHierarchy(OperatorConf* ret, const SbpParallel& src_sbp,
.Build()
.op_conf();
return true;
} else if (CanSplitAtDim(0)
Contributor commented:

Isn't this written wrong? It should be:

CanSplitAtDim(dst_sbp.split_parallel().axis()) 

Author (guo-ran) replied:

Oh, yes, I'll fix it.
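
After the fix, the branch condition would presumably read as follows (sketch based on the review thread; the surrounding helpers come from TryBuildNcclBy1DHierarchy):

} else if (CanSplitAtDim(dst_sbp.split_parallel().axis())
           && (src_sbp.has_partial_sum_parallel() && dst_sbp.has_split_parallel())
           && (dst_sbp.split_parallel().axis() > 0)) {
  // P->S(i), i > 0 : ReduceScatter Noncontinuous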

github-actions bot commented Jun 3, 2022

Speed stats:
GPU Name: NVIDIA GeForce GTX 1080 

❌ OneFlow resnet50 time: 130.3ms (= 13030.5ms / 100, input_shape=[16, 3, 224, 224])
PyTorch resnet50 time: 145.2ms (= 14523.5ms / 100, input_shape=[16, 3, 224, 224])
✔️ Relative speed: 1.11 (= 145.2ms / 130.3ms)

OneFlow resnet50 time: 76.6ms (= 7664.7ms / 100, input_shape=[8, 3, 224, 224])
PyTorch resnet50 time: 83.5ms (= 8347.0ms / 100, input_shape=[8, 3, 224, 224])
✔️ Relative speed: 1.09 (= 83.5ms / 76.6ms)

OneFlow resnet50 time: 53.8ms (= 10750.2ms / 200, input_shape=[4, 3, 224, 224])
PyTorch resnet50 time: 60.6ms (= 12129.3ms / 200, input_shape=[4, 3, 224, 224])
✔️ Relative speed: 1.13 (= 60.6ms / 53.8ms)

OneFlow resnet50 time: 42.4ms (= 8482.3ms / 200, input_shape=[2, 3, 224, 224])
PyTorch resnet50 time: 41.0ms (= 8197.6ms / 200, input_shape=[2, 3, 224, 224])
❌ Relative speed: 0.97 (= 41.0ms / 42.4ms)

OneFlow resnet50 time: 37.9ms (= 7583.0ms / 200, input_shape=[1, 3, 224, 224])
PyTorch resnet50 time: 39.7ms (= 7946.5ms / 200, input_shape=[1, 3, 224, 224])
✔️ Relative speed: 1.05 (= 39.7ms / 37.9ms)

OneFlow swin dataloader time: 0.251s (= 50.143s / 200, num_workers=1)
PyTorch swin dataloader time: 0.150s (= 29.966s / 200, num_workers=1)
Relative speed: 0.598 (= 0.150s / 0.251s)

OneFlow swin dataloader time: 0.104s (= 20.815s / 200, num_workers=4)
PyTorch swin dataloader time: 0.042s (= 8.379s / 200, num_workers=4)
Relative speed: 0.403 (= 0.042s / 0.104s)

OneFlow swin dataloader time: 0.035s (= 7.056s / 200, num_workers=8)
PyTorch swin dataloader time: 0.022s (= 4.460s / 200, num_workers=8)
Relative speed: 0.632 (= 0.022s / 0.035s)

❌ OneFlow resnet50 time: 146.0ms (= 14596.9ms / 100, input_shape=[16, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 171.3ms (= 17130.8ms / 100, input_shape=[16, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.17 (= 171.3ms / 146.0ms)

OneFlow resnet50 time: 96.1ms (= 9611.7ms / 100, input_shape=[8, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 113.3ms (= 11333.1ms / 100, input_shape=[8, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.18 (= 113.3ms / 96.1ms)

OneFlow resnet50 time: 70.7ms (= 14149.5ms / 200, input_shape=[4, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 84.5ms (= 16903.4ms / 200, input_shape=[4, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.19 (= 84.5ms / 70.7ms)

OneFlow resnet50 time: 60.5ms (= 12106.9ms / 200, input_shape=[2, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 79.2ms (= 15845.4ms / 200, input_shape=[2, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.31 (= 79.2ms / 60.5ms)

OneFlow resnet50 time: 55.9ms (= 11187.3ms / 200, input_shape=[1, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 71.1ms (= 14210.2ms / 200, input_shape=[1, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.27 (= 71.1ms / 55.9ms)

@guo-ran merged commit 210d23f into master on Jun 4, 2022
@guo-ran deleted the dev_add_reduce_scatter_noncontiguous branch on Jun 4, 2022 at 03:59