
【Auto-Parallel】support sharding overlap in PIR #69390


Merged

liym27 merged 6 commits into PaddlePaddle:develop on Nov 18, 2024

Conversation

liym27 (Contributor) commented Nov 14, 2024

PR Category

Performance Optimization

PR Types

Performance

Description

Support sharding overlap in PIR.

llama7b, auto parallel static:

    overlap configuration                speedup
    sharding_stage1 (no overlap)         baseline
    all_gather overlap                   +4.95%
    reduce_scatter overlap               +11.86%
    all_gather + reduce_scatter overlap  +18.34%

PCard-86802

TODO: shard_param and slice_param share the same underlying variable (share_var), so the dependency between them should be constructed automatically.
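
For illustration, below is a minimal conceptual sketch of the overlap idea, using plain Python threads to stand in for the two execution streams. The helper names (run_all_gather, run_compute, overlapped_step) and the timing are hypothetical and are not the PaddlePaddle pass implemented in this PR; the point it shows is that the sharded-parameter all_gather is launched on a side "stream" while computation keeps running on the main one, and synchronization happens only where the full parameter is actually needed.

import threading
import time

def run_all_gather(param_shard, degree):
    # Stand-in for the collective that reassembles the full parameter
    # from the shards held by each rank (hypothetical helper).
    time.sleep(0.01)
    return [param_shard] * degree

def run_compute(step):
    # Stand-in for computation that does not need the full parameter yet.
    time.sleep(0.01)
    return step * 2

def overlapped_step(param_shard, step, degree=4):
    gathered = {}

    # "Sharding stream": issue the all_gather asynchronously.
    comm = threading.Thread(
        target=lambda: gathered.update(full=run_all_gather(param_shard, degree))
    )
    comm.start()

    # "Default stream": computation proceeds without waiting for the gather.
    out = run_compute(step)

    # Synchronize only when the full parameter is actually required.
    comm.join()
    return out, gathered["full"]

if __name__ == "__main__":
    print(overlapped_step(param_shard=1.0, step=3))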

paddle-bot commented Nov 14, 2024

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@liym27 liym27 changed the title from "reduce_scatter overlap" to "【Auto-Parallel】support sharding overlap in PIR" on Nov 14, 2024
@liym27 liym27 force-pushed the sharding_overlap_1108 branch 2 times, most recently from 83dd5d8 to 95d774f on November 15, 2024 19:10
@liym27 liym27 force-pushed the sharding_overlap_1108 branch from 95d774f to 4ded8d8 on November 15, 2024 19:12
zhiqiu previously approved these changes Nov 18, 2024

@zhiqiu zhiqiu (Contributor) left a comment:

LGTM, to be refined in the future.

Comment on lines 385 to 395

# The nop consumes the optimizer op's result and runs on the sharding stream,
# creating an explicit dependency on the optimizer update.
tmp = paddle._C_ops.nop(opt_op.results()[0])
tmp.get_defining_op().set_execution_stream(
    AutoParallelStreamType.SHARDING_STREAM.value
)

# The parameter all_gather is also dispatched on the dedicated sharding stream
# so the communication can overlap with computation on the default stream.
allgather_value = paddle._C_ops.all_gather(
    shard_param, self._sharding_group.id, self._sharding_degree
)
allgather_value.get_defining_op().set_execution_stream(
    AutoParallelStreamType.SHARDING_STREAM.value
)
A reviewer (Contributor) commented:

shard_param and slice_param share the same underlying variable (share_var), so the dependency between them should be constructed automatically.

liym27 (Contributor, Author) replied:

OK, we will discuss it offline.

@liym27 liym27 merged commit d774689 into PaddlePaddle:develop Nov 18, 2024
27 of 28 checks passed