Update CODEOWNERS and minor docstring fix (#5002)
This PR includes:

* The previous CODEOWNERS entries covered more files than just the training files
* The polynomial warmup LR scheduler (PolyWarmupLRScheduler) was missing part of its docstring
Thiago Crepaldi authored Sep 3, 2020
1 parent 546965c commit 9d1bdef
Showing 2 changed files with 9 additions and 3 deletions.
CODEOWNERS: 8 changes (5 additions, 3 deletions)
@@ -5,7 +5,9 @@ orttraining/*.py @thiagocrepaldi @spandantiwari @BowenBao @liqunfu
 orttraining/orttraining/python/** @thiagocrepaldi @spandantiwari @BowenBao @liqunfu
 orttraining/orttraining/test/python/** @thiagocrepaldi @spandantiwari @BowenBao @liqunfu
 orttraining/pytorch_frontend_examples/** @thiagocrepaldi @spandantiwari @BowenBao @liqunfu
-onnxruntime/*.py @thiagocrepaldi @spandantiwari @BowenBao @liqunfu
-onnxruntime/python/** @thiagocrepaldi @spandantiwari @BowenBao @liqunfu
-onnxruntime/test/python/** @thiagocrepaldi @spandantiwari @BowenBao @liqunfu
+onnxruntime/python/training/** @thiagocrepaldi @spandantiwari @BowenBao @liqunfu
+onnxruntime/test/python/onnxruntime_test_ort_trainer.py @thiagocrepaldi @spandantiwari @BowenBao @liqunfu
+onnxruntime/test/python/onnxruntime_test_ort_trainer_with_mixed_precision.py @thiagocrepaldi @spandantiwari @BowenBao @liqunfu
+onnxruntime/test/python/onnxruntime_test_training_unit_tests.py @thiagocrepaldi @spandantiwari @BowenBao @liqunfu
+onnxruntime/test/python/onnxruntime_test_training_unittest_utils.py @thiagocrepaldi @spandantiwari @BowenBao @liqunfu
 samples/python/** @thiagocrepaldi @spandantiwari @BowenBao @liqunfu
@@ -232,7 +232,11 @@ class PolyWarmupLRScheduler(_LRScheduler):
 Learning rate update strategy:
     When current_step < warmup
         lr = base_lr * (current_step / max(1, num_warmup_steps))
     When current_step > total_steps
         lr = lr_end / lr
     Otherwise
         lr = decay / lr, where decay is
             (lr - lr_end) * (1 - (current_step - num_warmup_steps) / (total_steps - num_warmup_steps)) ** power + lr_end
 Args:
     total_steps (int): total training steps for learning.
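For reference, the update strategy described in the docstring can be written out as a small standalone function. This is only an illustrative sketch, not the ONNX Runtime PolyWarmupLRScheduler itself; the function name poly_warmup_lr and the default lr_end and power values are assumptions made for the example.

```python
# Hypothetical sketch of the docstring's update strategy; not ONNX Runtime code.
def poly_warmup_lr(current_step, base_lr, num_warmup_steps, total_steps,
                   lr_end=1e-7, power=1.0):
    """Learning rate at `current_step` under linear warmup + polynomial decay."""
    if current_step < num_warmup_steps:
        # Warmup: ramp linearly from 0 up to base_lr.
        return base_lr * (current_step / max(1, num_warmup_steps))
    if current_step > total_steps:
        # Past the end of the schedule: hold the final learning rate.
        return lr_end
    # Polynomial decay from base_lr down to lr_end.
    progress = (current_step - num_warmup_steps) / (total_steps - num_warmup_steps)
    return (base_lr - lr_end) * (1 - progress) ** power + lr_end

# Example: base LR 1e-3, 10 warmup steps, 100 total steps.
for step in (0, 5, 10, 55, 100, 120):
    print(step, poly_warmup_lr(step, base_lr=1e-3, num_warmup_steps=10, total_steps=100))
```

The divisions by lr in the docstring suggest the scheduler expresses each value as a multiplier on the base learning rate; the sketch above returns the resulting absolute learning rate instead.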
