fix bug for fp16 + delay_scale_loss_scale + sharding_stage1_overlap #8314

Merged: 1 commit, Apr 24, 2024

1 change: 1 addition & 0 deletions paddlenlp/trainer/trainer.py

@@ -1013,6 +1013,7 @@
    self.timers and self.timers("optimizer-step").start()

    if self.args.gradient_accumulation_steps > 1 and self._enable_delay_scale_loss():
+       paddle.device.synchronize()

Check warning on line 1016 in paddlenlp/trainer/trainer.py (Codecov / codecov/patch): Added line #L1016 was not covered by tests.
Collaborator: Won't this affect performance?

Contributor (author): No. There must be no communication overlap at this point; a synchronization is needed here, otherwise a gradient might be scaled before its communication has finished.

        for p in model._layers.parameters():
            with paddle.no_grad():
                if hasattr(p, "main_grad") and p.main_grad is not None:
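
Below is a minimal, self-contained sketch of the pattern this fix relies on, outside the PaddleNLP trainer: synchronize the device so any in-flight gradient communication (for example the reduce-scatter launched by sharding stage 1 overlap) has completed before the accumulated gradients are scaled. The helper name scale_accumulated_grads, the CPU guard, and the fallback to p.grad are illustrative assumptions; main_grad is the fp32 master-gradient buffer that Paddle's mixed-precision and sharding utilities attach to fp16 parameters.

    import paddle
    import paddle.nn as nn

    def scale_accumulated_grads(model: nn.Layer, accumulation_steps: int) -> None:
        """Divide accumulated gradients by the number of micro-steps.

        The synchronize() call mirrors the fix in this PR: when gradient
        communication overlaps with computation, the gradient buffers may
        still be in flight, and scaling them too early corrupts the result.
        """
        # Skip the barrier on CPU, where there is no device stream to wait on.
        if paddle.device.get_device() != "cpu":
            paddle.device.synchronize()
        with paddle.no_grad():
            for p in model.parameters():
                # `main_grad` is the fp32 master gradient kept for fp16 params;
                # plain fp32 training only has `p.grad`.
                grad = getattr(p, "main_grad", None)
                if grad is None:
                    grad = p.grad
                if grad is not None:
                    grad.scale_(1.0 / accumulation_steps)

    if __name__ == "__main__":
        layer = nn.Linear(4, 4)
        x = paddle.randn([2, 4])
        for _ in range(4):                # pretend 4 accumulation micro-steps
            layer(x).mean().backward()    # grads accumulate across backward calls
        scale_accumulated_grads(layer, accumulation_steps=4)
        print(layer.weight.grad.abs().mean().item())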