change num_train_epochs default value
DesmonDay committed Mar 13, 2024
1 parent e145bfc commit 604e75e
Showing 1 changed file with 2 additions and 2 deletions.
paddlenlp/trainer/training_args.py (2 additions, 2 deletions)
@@ -127,7 +127,7 @@ class TrainingArguments:
The epsilon hyperparameter for the [`AdamW`] optimizer.
max_grad_norm (`float`, *optional*, defaults to 1.0):
Maximum gradient norm (for gradient clipping).
- num_train_epochs(`float`, *optional*, defaults to 3.0):
+ num_train_epochs(`float`, *optional*, defaults to 1.0):
Total number of training epochs to perform (if not an integer, will perform the decimal part percents of
the last epoch before stopping training).
max_steps (`int`, *optional*, defaults to -1):
@@ -391,7 +391,7 @@ class TrainingArguments:
adam_epsilon: float = field(default=1e-8, metadata={"help": "Epsilon for AdamW optimizer."})
max_grad_norm: float = field(default=1.0, metadata={"help": "Max gradient norm."})

- num_train_epochs: float = field(default=3.0, metadata={"help": "Total number of training epochs to perform."})
+ num_train_epochs: float = field(default=1.0, metadata={"help": "Total number of training epochs to perform."})
max_steps: int = field(
default=-1,
metadata={"help": "If > 0: set total number of training steps to perform. Override num_train_epochs."},
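The patch only changes the dataclass default, so the user-visible effect is what an unset `num_train_epochs` resolves to. Below is a minimal sketch of that effect, assuming the usual `from paddlenlp.trainer import TrainingArguments` import path and an HF-style constructor that requires `output_dir`; the `./checkpoints` directory is just a placeholder.

```python
# Minimal sketch (assumptions: import path and required output_dir as noted above).
from paddlenlp.trainer import TrainingArguments

# With no explicit value, num_train_epochs now defaults to 1.0 (previously 3.0).
args_default = TrainingArguments(output_dir="./checkpoints")
print(args_default.num_train_epochs)  # 1.0 after this commit

# Pass the value explicitly to keep the old three-epoch behavior, or use a
# fractional value: per the docstring, 2.5 trains two full epochs plus half
# of the third before stopping.
args_explicit = TrainingArguments(output_dir="./checkpoints", num_train_epochs=2.5)
print(args_explicit.num_train_epochs)  # 2.5
```

Note that `max_steps` still takes precedence: if it is set to a value greater than 0, it overrides `num_train_epochs` entirely, as stated in its help text above.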
