
fix npu sft ckpt load bug and no FA bug #8438

Merged (1 commit) on May 15, 2024

Conversation

@NINGBENZHE (Contributor) commented on May 14, 2024

PR types

Bug fixes

PR changes

Others

Description

fix npu sft ckpt load bug and no FA bug
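The "no FA bug" in the title refers to FlashAttention being unavailable on NPU devices. A minimal sketch of the kind of guard involved, with a hypothetical helper name (`select_attention_impl` is illustrative, not PaddleNLP's actual API): pick the attention implementation from device capability instead of assuming FlashAttention exists.

```python
# Hypothetical sketch: fall back to the standard (eager) attention path on
# devices without FlashAttention support (e.g. NPU) instead of crashing.

def select_attention_impl(device_type: str, flash_attn_available: bool) -> str:
    """Pick an attention implementation based on device capability."""
    if flash_attn_available and device_type == "gpu":
        return "flash_attention"
    # NPU, or any device where FlashAttention is not available, uses the
    # plain attention implementation.
    return "eager"
```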

paddle-bot bot commented May 14, 2024

Thanks for your contribution!

codecov bot commented May 14, 2024

Codecov Report

Attention: Patch coverage is 18.18182%, with 9 lines in your changes missing coverage. Please review.

Project coverage is 55.42%. Comparing base (05acad5) to head (1f0c170).
Report is 3 commits behind head on develop.

Files Patch % Lines
paddlenlp/transformers/llama/modeling_pp.py 16.66% 5 Missing ⚠️
paddlenlp/transformers/llama/modeling.py 20.00% 4 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #8438      +/-   ##
===========================================
- Coverage    55.42%   55.42%   -0.01%     
===========================================
  Files          617      617              
  Lines        96281    96286       +5     
===========================================
+ Hits         53366    53367       +1     
- Misses       42915    42919       +4     

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

@SylarTiaNII (Contributor) left a comment:

LGTM

model_config.attention_probs_dropout_prob = model_args.attention_probs_dropout_prob

model_config.sep_parallel_degree = training_args.sep_parallel_degree
model_config.tensor_parallel_output = True
@ZHUI (Collaborator) commented on May 15, 2024:

I think we already added a switch for this.

@NINGBENZHE (Contributor, Author) replied:

Fixed.
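The review exchange above is about not hard-coding `model_config.tensor_parallel_output = True` but gating it behind a switch. A minimal sketch of that pattern, assuming illustrative attribute names on `model_config` and `training_args` (the actual PaddleNLP flag may differ):

```python
from types import SimpleNamespace


def configure_parallel_output(model_config, training_args):
    """Gate tensor_parallel_output behind an explicit training flag
    instead of unconditionally forcing it to True."""
    model_config.sep_parallel_degree = training_args.sep_parallel_degree
    # Only override when the user opted in via the training arguments.
    if getattr(training_args, "tensor_parallel_output", False):
        model_config.tensor_parallel_output = True
    return model_config
```

With the switch off, the model config keeps whatever value it already had, which is the behavior the reviewer asked for.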

ZHUI
ZHUI previously approved these changes May 15, 2024
@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.


NINGBENZHE seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.

@ZHUI (Collaborator) left a comment:

LGTM

@wawltor wawltor merged commit 5170664 into PaddlePaddle:develop May 15, 2024
4 of 11 checks passed
6 participants