feat: support DynamicLossScale for TrainStep #678

Merged · 1 commit into mindspore-lab:main on Jun 9, 2023

Conversation

@geniuspatrick (Collaborator) commented on Jun 8, 2023

Thank you for your contribution to the MindCV repo.
Before submitting this PR, please make sure:

Motivation

  • GradientAccumulation now inherits from mindspore.boost.GradientAccumulation (see the first sketch below).
  • Support all combinations of loss_scale_type and drop_overflow_update (see the second sketch below):
    1. loss_scale_type="fixed", drop_overflow_update=False
       --> update_cell=None, TrainStep=TrainOneStepCell(scale_sense=loss_scale)
    2. loss_scale_type="fixed", drop_overflow_update=True
       --> update_cell=FixedLossScaleUpdateCell, TrainStep=TrainOneStepWithLossScaleCell(scale_sense=update_cell)
    3. loss_scale_type="dynamic", drop_overflow_update=True
       --> update_cell=DynamicLossScaleUpdateCell, TrainStep=TrainOneStepWithLossScaleCell(scale_sense=update_cell)
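
For readers unfamiliar with the idea, here is a minimal, hand-rolled sketch of gradient accumulation in MindSpore. It is not the GradientAccumulation cell added by this PR (which reuses mindspore.boost.GradientAccumulation); the class name GradAccumTrainStep, the accum_steps argument, and all other identifiers are illustrative only:

```python
import mindspore as ms
from mindspore import nn, ops

class GradAccumTrainStep(nn.Cell):
    """Accumulate gradients over `accum_steps` mini-batches, then apply the
    optimizer once and clear the buffers (PyNative-style illustration)."""

    def __init__(self, network_with_loss, optimizer, accum_steps=4):
        super().__init__()
        self.network_with_loss = network_with_loss
        self.optimizer = optimizer
        self.weights = optimizer.parameters
        self.accum_steps = accum_steps
        # Persistent buffers holding the running gradient sums.
        self.accum_grads = self.weights.clone(prefix="accum_grads", init="zeros")
        self.counter = ms.Parameter(ms.Tensor(0, ms.int32), name="accum_counter",
                                    requires_grad=False)
        self.grad_fn = ms.value_and_grad(network_with_loss, grad_position=None,
                                         weights=self.weights)

    def construct(self, *inputs):
        loss, grads = self.grad_fn(*inputs)
        # Average each mini-batch's contribution into the buffers.
        for buf, grad in zip(self.accum_grads, grads):
            ops.assign_add(buf, grad / self.accum_steps)
        ops.assign_add(self.counter, ms.Tensor(1, ms.int32))
        if self.counter % self.accum_steps == 0:
            # Apply the accumulated gradients once, then reset the buffers.
            self.optimizer(self.accum_grads)
            for buf in self.accum_grads:
                ops.assign(buf, ops.zeros_like(buf))
        return loss
```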

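Purely as an illustration of how the three combinations above could be wired up with stock MindSpore cells (this is not MindCV's actual TrainStep/trainer code; the function name build_train_step and its default hyperparameters are hypothetical):

```python
from mindspore import nn

def build_train_step(network_with_loss, optimizer, loss_scale_type="fixed",
                     drop_overflow_update=False, loss_scale=1024.0,
                     scale_factor=2, scale_window=2000):
    if loss_scale_type == "fixed" and not drop_overflow_update:
        # 1. Plain train step: a fixed scale is applied via `sens`,
        #    and updates are never skipped on overflow.
        return nn.TrainOneStepCell(network_with_loss, optimizer, sens=loss_scale)
    if loss_scale_type == "fixed" and drop_overflow_update:
        # 2. Fixed scale, but steps that overflow are dropped.
        update_cell = nn.FixedLossScaleUpdateCell(loss_scale_value=loss_scale)
        return nn.TrainOneStepWithLossScaleCell(network_with_loss, optimizer,
                                                scale_sense=update_cell)
    if loss_scale_type == "dynamic" and drop_overflow_update:
        # 3. Dynamic scale: grows by `scale_factor` after `scale_window`
        #    clean steps, shrinks on overflow; overflowing steps are dropped.
        update_cell = nn.DynamicLossScaleUpdateCell(loss_scale_value=loss_scale,
                                                    scale_factor=scale_factor,
                                                    scale_window=scale_window)
        return nn.TrainOneStepWithLossScaleCell(network_with_loss, optimizer,
                                                scale_sense=update_cell)
    raise ValueError("dynamic loss scale requires drop_overflow_update=True")
```

Note that overflow handling (skipping the parameter update when gradients overflow) is what distinguishes TrainOneStepWithLossScaleCell from the plain TrainOneStepCell.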
Test Plan

(How should this PR be tested? Do you require special setup to run the test or repro the fixed bug?)

Related Issues and PRs

#256
#469
#483
#489
#569
#609
fixes #604 (Gradients will be scaled two times when EMA is enabled and loss scaler type is static)

@geniuspatrick geniuspatrick merged commit e4e7cf0 into mindspore-lab:main Jun 9, 2023
@geniuspatrick geniuspatrick deleted the trainer branch June 9, 2023 07:53