[AutoParallel] Add Sequence Parallel for Static LLaMA #7746
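The PR title refers to sequence parallelism. As a rough illustration of the idea (this is a framework-free NumPy sketch, not the PR's actual Paddle implementation; the helper names `split_seq` and `allgather_seq` are hypothetical): activations are split along the sequence dimension across model-parallel ranks for element-wise work such as LayerNorm and dropout, then gathered back to the full sequence before attention, which needs every token visible.

```python
import numpy as np

def split_seq(x, num_ranks):
    # Scatter: each "rank" keeps a contiguous slice of the sequence dimension.
    return np.split(x, num_ranks, axis=0)

def allgather_seq(shards):
    # All-gather: reassemble the full sequence before attention.
    return np.concatenate(shards, axis=0)

seq_len, batch, hidden, num_ranks = 8, 2, 4, 4
x = np.arange(seq_len * batch * hidden, dtype=np.float32).reshape(seq_len, batch, hidden)

# Each shard holds seq_len // num_ranks tokens; element-wise work (here a toy
# mean-centering standing in for LayerNorm) runs independently per shard.
shards = split_seq(x, num_ranks)
normed = [s - s.mean(axis=-1, keepdims=True) for s in shards]

# Full sequence restored before attention would run.
full = allgather_seq(normed)
```

The payoff in a real implementation is that activation memory and element-wise compute for these ops shrink by the model-parallel degree, at the cost of the extra scatter/all-gather communication.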
Changes from all commits: 404a6d0, df241fc, 70b2b9d, cb948da, 05762a3
Codecov / codecov/patch warnings in paddlenlp/transformers/llama/modeling_auto.py (added lines not covered by tests): #L367-L370, #L376, #L401-L404, #L483-L487, #L493, #L606-L609, #L615, #L623-L626, #L632, #L901, #L903-L904, #L906-L908, #L914, #L927-L928, #L930, #L1201-L1204, #L1210, #L1215