
[Docathon][Fix COPY-FROM No.11-15] #6310

Closed
wants to merge 12 commits
Update AdamW_cn.rst
Change to the COPY-FROM form
Turingg authored Nov 14, 2023
commit 036f604f57cadfa0da2c525bdd052cb404338b14
19 changes: 1 addition & 18 deletions docs/api/paddle/optimizer/AdamW_cn.rst
@@ -142,24 +142,7 @@ set_lr_scheduler(scheduler)

**代码示例**

-.. code-block:: python
-
-    import paddle
-    linear = paddle.nn.Linear(10, 10)
-    adam = paddle.optimizer.AdamW(weight_decay=0.01,
-            learning_rate=0.1, parameters=linear.parameters())
-    # set learning rate manually by class LRScheduler
-    scheduler = paddle.optimizer.lr.MultiStepDecay(learning_rate=0.5, milestones=[2,4,6], gamma=0.8)
-    adam.set_lr_scheduler(scheduler)
-    lr = adam.get_lr()
-    print("current lr is {}".format(lr))
-    # current lr is 0.5
-    # set learning rate manually by another LRScheduler
-    scheduler = paddle.optimizer.lr.StepDecay(learning_rate=0.1, step_size=5, gamma=0.6)
-    adam.set_lr_scheduler(scheduler)
-    lr = adam.get_lr()
-    print("current lr is {}".format(lr))
-    # current lr is 0.1
+COPY-FROM: paddle.optimizer.set_lr_scheduler
Collaborator


Suggested change
- COPY-FROM: paddle.optimizer.set_lr_scheduler
+ COPY-FROM: paddle.optimizer.AdamW.set_lr_scheduler
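
The suggested path adds the owning class, since set_lr_scheduler is documented here as a method of AdamW rather than a module-level function, and the COPY-FROM directive presumably resolves the example from the English docstring of the exact API path it names. A minimal sketch (assuming a local Paddle install; not the docstring example itself) of checking that the corrected target resolves and of the equivalent usage:

```python
import paddle

# The attribute named by the suggested COPY-FROM path must exist;
# set_lr_scheduler is inherited by AdamW from the optimizer base class.
assert hasattr(paddle.optimizer.AdamW, "set_lr_scheduler")

# Usage equivalent to the removed inline example above.
linear = paddle.nn.Linear(10, 10)
adam = paddle.optimizer.AdamW(weight_decay=0.01,
                              learning_rate=0.1,
                              parameters=linear.parameters())
scheduler = paddle.optimizer.lr.MultiStepDecay(learning_rate=0.5,
                                               milestones=[2, 4, 6],
                                               gamma=0.8)
adam.set_lr_scheduler(scheduler)
print(adam.get_lr())  # 0.5
```

Either way the rendered page shows the same example; the point of the COPY-FROM migration appears to be keeping the Chinese page in sync with the English docstring instead of duplicating the code inline.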


get_lr()
'''''''''