Commit 930dacf

Use consistent markdown formatting for the AdamW paper (#2722)
1 parent 66a81f9 commit 930dacf

1 file changed (+6, -8 lines changed)


tensorflow_addons/optimizers/weight_decay_optimizers.py

Lines changed: 6 additions & 8 deletions
@@ -26,7 +26,7 @@
 class DecoupledWeightDecayExtension:
 """This class allows to extend optimizers with decoupled weight decay.

-It implements the decoupled weight decay described by Loshchilov & Hutter
+It implements the decoupled weight decay described by [Loshchilov & Hutter]
 (https://arxiv.org/pdf/1711.05101.pdf), in which the weight decay is
 decoupled from the optimization steps w.r.t. to the loss function.
 For SGD variants, this simplifies hyperparameter search since it decouples
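As a rough illustration of the decoupled weight decay this docstring refers to (a hypothetical sketch with made-up names, not this module's implementation), one SGD-style update applies the decay directly to the variable instead of adding an L2 term to the loss:

```python
# Hypothetical sketch of one update with decoupled weight decay,
# after Loshchilov & Hutter (https://arxiv.org/pdf/1711.05101.pdf).
def decoupled_sgd_step(w, grad, lr=0.01, wd=1e-4):
    w = w - lr * grad  # gradient step w.r.t. the loss only
    w = w - wd * w     # decay applied directly to the variable,
                       # not through an L2 penalty in the loss
    return w
```

With classic L2 regularization the penalty's gradient (`wd * w`) is added to `grad` and then rescaled by adaptive methods such as Adam; decoupling removes that interaction, which is the point of the paper.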
@@ -343,7 +343,7 @@ class OptimizerWithDecoupledWeightDecay(
 This class computes the update step of `base_optimizer` and
 additionally decays the variable with the weight decay being
 decoupled from the optimization steps w.r.t. to the loss
-function, as described by Loshchilov & Hutter
+function, as described by [Loshchilov & Hutter]
 (https://arxiv.org/pdf/1711.05101.pdf). For SGD variants, this
 simplifies hyperparameter search since it decouples the settings
 of weight decay and learning rate. For adaptive gradient
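`OptimizerWithDecoupledWeightDecay` wraps a given `base_optimizer` with the extension above. A minimal usage sketch, assuming the module's factory is exposed as `tfa.optimizers.extend_with_decoupled_weight_decay` and that the resulting class accepts `weight_decay` and `learning_rate` keyword arguments (neither is shown in this diff):

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Wrap a Keras optimizer so that it also applies decoupled weight decay.
AdagradW = tfa.optimizers.extend_with_decoupled_weight_decay(
    tf.keras.optimizers.Adagrad
)
optimizer = AdagradW(weight_decay=1e-4, learning_rate=0.01)
```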
@@ -367,9 +367,8 @@ class SGDW(DecoupledWeightDecayExtension, tf.keras.optimizers.SGD):
 """Optimizer that implements the Momentum algorithm with weight_decay.

 This is an implementation of the SGDW optimizer described in "Decoupled
-Weight Decay Regularization" by Loshchilov & Hutter
-(https://arxiv.org/abs/1711.05101)
-([pdf])(https://arxiv.org/pdf/1711.05101.pdf).
+Weight Decay Regularization" by [Loshchilov & Hutter]
+(https://arxiv.org/pdf/1711.05101.pdf).
 It computes the update step of `tf.keras.optimizers.SGD` and additionally
 decays the variable. Note that this is different from adding
 L2 regularization on the variables to the loss. Decoupling the weight decay
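A minimal usage sketch for `SGDW` (the constructor arguments `weight_decay`, `learning_rate`, and `momentum` are assumed from common tfa usage; they are not shown in this diff):

```python
import tensorflow_addons as tfa

# SGD with momentum plus decoupled weight decay on every variable update.
optimizer = tfa.optimizers.SGDW(
    weight_decay=1e-4, learning_rate=0.01, momentum=0.9
)
```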
@@ -447,9 +446,8 @@ class AdamW(DecoupledWeightDecayExtension, tf.keras.optimizers.Adam):
 """Optimizer that implements the Adam algorithm with weight decay.

 This is an implementation of the AdamW optimizer described in "Decoupled
-Weight Decay Regularization" by Loshch ilov & Hutter
-(https://arxiv.org/abs/1711.05101)
-([pdf])(https://arxiv.org/pdf/1711.05101.pdf).
+Weight Decay Regularization" by [Loshchilov & Hutter]
+(https://arxiv.org/pdf/1711.05101.pdf).

 It computes the update step of `tf.keras.optimizers.Adam` and additionally
 decays the variable. Note that this is different from adding L2
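And likewise for `AdamW`, again a sketch with assumed constructor arguments rather than anything taken from this diff:

```python
import tensorflow_addons as tfa

# Adam update plus decoupled weight decay applied directly to the variables.
optimizer = tfa.optimizers.AdamW(weight_decay=1e-4, learning_rate=1e-3)
```

Either optimizer can then be passed to `model.compile(optimizer=...)` like any other Keras optimizer.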
