How to use scheduler correctly? #7245
-
I have a huge dataset, so I have set the val_check_interval parameter to 0.25, which gives four validation checks per epoch. I want the scheduler to be stepped after each validation, i.e. updated based on these four intermediate values: for example, if the validation accuracy doesn't increase for two consecutive checks, the learning rate should be halved. I was hoping that the following code would be enough to accomplish this, but it doesn't seem to be working. Any help is appreciated. Thanks.
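(The snippet referred to above is not shown here; a rough sketch of the kind of naive configuration being described might look like the one below. The optimizer choice and the `patience=1` mapping of "halved after two consecutive flat checks" are assumptions, not from the post. By default Lightning steps a monitored scheduler with `interval='epoch'` and `frequency=1`, i.e. once per epoch, which is why four-times-per-epoch stepping needs the `frequency` trick shown in the replies.)

```python
import torch
import pytorch_lightning as pl

class MyModel(pl.LightningModule):
    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        # mode='max' because we monitor accuracy; factor=0.5 halves the LR.
        # With patience=1, the first non-improving check is tolerated and the
        # LR drops on the second consecutive non-improving check.
        scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
            optimizer, mode='max', factor=0.5, patience=1)
        return {
            'optimizer': optimizer,
            # Without 'interval'/'frequency', this steps only once per epoch.
            'lr_scheduler': {'scheduler': scheduler, 'monitor': 'val_acc'},
        }
```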
-
To me it looks like you just need to set the `frequency` argument to 1/4 of the size of your training data:

```python
scheduler = {
    'scheduler': torch.optim.lr_scheduler.ReduceLROnPlateau(
        optim, mode='max', factor=0.75, patience=2, verbose=True),
    'interval': 'step',
    'frequency': int(len(self.train_dataloader()) * 0.25),
    'strict': True,
    'monitor': 'val_acc_epoch',
}
```
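For the monitor to find `'val_acc_epoch'`, that metric has to be logged during validation. A minimal `validation_step` sketch (assumed, not part of the original answer) that produces that metric name:

```python
def validation_step(self, batch, batch_idx):
    x, y = batch
    acc = (self(x).argmax(dim=1) == y).float().mean()
    # With both flags True, Lightning logs 'val_acc_step' and
    # 'val_acc_epoch'; the '_epoch' version is what the scheduler monitors.
    self.log('val_acc', acc, on_step=True, on_epoch=True)
```

With `'interval': 'step'` and a frequency of a quarter of the dataloader length, `scheduler.step(metric)` fires at roughly the same optimizer steps where `val_check_interval=0.25` triggers validation (assuming no gradient accumulation).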
-
Here is how I got it working in PTL 2.5.1. I passed val_check_interval into my model and saved it as an hparam, and called self.trainer.fit_loop.setup_data() inside configure_optimizers so that the train dataloader (and hence its length) is available there:

```python
def configure_optimizers(self):
    # Placeholder optimizer/scheduler definitions (not shown in the thread);
    # any optimizer and plateau scheduler work the same way here.
    optimizer = torch.optim.Adam(self.parameters())
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min')

    # Make sure the train dataloader is set up so its length is known.
    self.trainer.fit_loop.setup_data()
    frequency = int(len(self.trainer.train_dataloader) * self.hparams.val_check_interval)
    print(f"ReduceLROnPlateau: frequency: {frequency}")
    return {
        'optimizer': optimizer,
        'lr_scheduler': {
            'scheduler': scheduler,
            'monitor': 'val_loss',  # which metric to watch
            'interval': 'step',
            'frequency': frequency,
            'strict': True,  # raise an error if val_loss is not logged
        },
    }
```
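A sketch of the wiring this relies on (the `MyModel` name and Trainer call are assumptions, not from the thread): the same fraction has to reach both the model, for the frequency computation, and the Trainer, for scheduling the validation runs.

```python
import pytorch_lightning as pl

# Hypothetical usage: MyModel stores val_check_interval as an hparam,
# e.g. via self.save_hyperparameters() in its __init__.
model = MyModel(val_check_interval=0.25)
trainer = pl.Trainer(
    max_epochs=10,
    val_check_interval=model.hparams.val_check_interval,  # keep the two in sync
)
trainer.fit(model)
```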