To me it looks like you just need to set the `frequency` argument to 1/4 of the number of batches in your training dataloader, so the scheduler check runs four times per epoch:

    scheduler = {
        # Cut the LR by 25% when the monitored metric stops improving
        # for more than `patience` consecutive checks.
        'scheduler': torch.optim.lr_scheduler.ReduceLROnPlateau(
            optim, mode='max', factor=0.75, patience=2, verbose=True),
        'interval': 'step',
        # Check every quarter of an epoch (number of training batches / 4).
        'frequency': int(len(self.train_dataloader()) * 0.25),
        # Raise an error if the monitored metric is not found.
        'strict': True,
        'monitor': 'val_acc_epoch',
    }
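To see what Lightning does with that dict, here is a minimal sketch in plain PyTorch (Lightning calls `scheduler.step(metric)` for you at the configured interval/frequency). With `mode='max'`, `factor=0.75`, and `patience=2`, the LR is multiplied by 0.75 once the monitored metric fails to improve for more than 2 consecutive checks. The metric values below are made up for illustration:

```python
import torch

model = torch.nn.Linear(4, 1)
optim = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optim, mode='max', factor=0.75, patience=2)

# Simulate a validation accuracy that improves, then plateaus.
for acc in [0.50, 0.60, 0.70, 0.70, 0.70, 0.70]:
    scheduler.step(acc)

# After the 3rd non-improving check (patience=2 exhausted),
# the LR drops from 0.1 to 0.1 * 0.75.
print(optim.param_groups[0]['lr'])
```

Note that since `'monitor'` is a validation metric, the value only changes when validation actually runs, so checking more often than `val_check_interval` has no effect.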

Answer selected by sidml