
[autoparallel] distinguish different parallel strategies #2699


Conversation

@YuliangLiu0306 (Contributor) commented Feb 14, 2023

📌 Checklist before creating the PR

  • I have created an issue for this PR for traceability
  • The title follows the standard format: [doc/gemini/tensor/...]: A concise description
  • I have added relevant tags if possible for us to better distinguish different PRs

🚨 Issue number

Link this PR to your issue with words like fixed to automatically close the linked issue upon merge

e.g. fixed #1234, closed #1234, resolved #1234

📝 What does this PR do?

Distinguish different parallel strategies in strategy_generator. For example, we generate DP-only strategies if solver_preference is 'dp', TP-only strategies if it is 'tp', and mixed strategies if it is 'standard'.
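
Conceptually, the generator dispatches on solver_preference before emitting strategies. Below is a minimal, self-contained sketch of that idea; collect_strategies and the strategy strings are hypothetical placeholders, not the actual Colossal-AI API:

```python
# A minimal sketch of the solver_preference dispatch described above.
# Every name here (collect_strategies, the strategy strings) is
# hypothetical, not the actual Colossal-AI API.

def collect_strategies(solver_preference: str) -> list:
    """Return sharding strategies filtered by the solver preference."""
    dp_strategies = ["R = R x R (data parallel)"]   # shard the batch dim only
    tp_strategies = ["S1 = R x S1", "S0 = S0 x R"]  # shard weights/outputs only

    if solver_preference == "dp":
        return dp_strategies                  # DP-only strategies
    if solver_preference == "tp":
        return tp_strategies                  # TP-only strategies
    if solver_preference == "standard":
        return dp_strategies + tp_strategies  # mixed: consider both families
    raise ValueError(f"unknown solver_preference: {solver_preference!r}")
```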

Because the order of the generated strategies has changed, the unit tests have been adapted to the latest linear generator. Why not just remove the index checks? To make sure the subsequent generators are correct, we have to verify the strategies one by one.
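
To illustrate why the index-based checks are kept rather than removed, a hypothetical position-by-position check might look like the following (check_strategy_order and the expected names are illustrative only, not the actual test code):

```python
# Hypothetical index-by-index check; the expected names are illustrative.

def check_strategy_order(strategies: list) -> None:
    expected = [
        "R = R x R (data parallel)",  # DP strategy comes first
        "S1 = R x S1",                # then the TP strategies
        "S0 = S0 x R",
    ]
    assert len(strategies) == len(expected)
    for index, name in enumerate(expected):
        # Compare position by position so that an upstream reordering fails
        # loudly instead of silently shifting what later generators receive.
        assert strategies[index] == name, f"strategy at index {index} changed"
```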

💥 Checklist before requesting a review

  • I have linked my PR to an issue (instruction)
  • My issue clearly describes the problem/feature/proposal, with diagrams/charts/table/code if possible
  • I have performed a self-review of my code
  • I have added thorough tests
  • I have added docstrings for all the functions/methods I implemented

⭐️ Do you enjoy contributing to Colossal-AI?

  • 🌝 Yes, I do.
  • 🌚 No, I don't.

Tell us more if you don't enjoy contributing to Colossal-AI.

@YuliangLiu0306 force-pushed the feature/add_shard_option_for_linear_handler branch from 57e6024 to 79a451b on February 15, 2023 02:44
@YuliangLiu0306 force-pushed the feature/add_shard_option_for_linear_handler branch from 79a451b to 7a1914a on February 15, 2023 05:55
@github-actions commented

The code coverage for the changed files is 18%.

Complete report:
Name                                                                                                 Stmts   Miss  Cover
------------------------------------------------------------------------------------------------------------------------
colossalai/auto_parallel/tensor_shard/node_handler/linear_handler.py                                   117     94    20%
colossalai/auto_parallel/tensor_shard/node_handler/strategy/matmul_strategy_generator.py               390    321    18%
tests/test_auto_parallel/test_tensor_shard/test_gpt/test_runtime_with_gpt_modules.py                   133     94    29%
tests/test_auto_parallel/test_tensor_shard/test_node_handler/test_permute_and_transpose_handler.py     245    213    13%
tests/test_auto_parallel/test_tensor_shard/test_node_handler/test_softmax_handler.py                   126     98    22%
tests/test_auto_parallel/test_tensor_shard/test_node_handler/test_split_handler.py                     186    154    17%
tests/test_auto_parallel/test_tensor_shard/test_node_handler/test_view_handler.py                      185    154    17%
------------------------------------------------------------------------------------------------------------------------
TOTAL                                                                                                 1382   1128    18%

@FrankLeeeee (Contributor) commented

Hi @YuliangLiu0306, remember to create an issue first and link your pull request to it for traceability. This helps others understand your PR and follow up on the feature's progress.

@YuliangLiu0306 YuliangLiu0306 added the auto-parallel related to the auto-parallel feature label Feb 15, 2023
@YuliangLiu0306 YuliangLiu0306 linked an issue Feb 15, 2023 that may be closed by this pull request
@FrankLeeeee FrankLeeeee merged commit 1dc003c into hpcaitech:main Feb 15, 2023
Labels: auto-parallel (related to the auto-parallel feature)

Successfully merging this pull request may close: [FEATURE]: distinguish different parallel strategies