#### Bug description
Given the current hook order, the `BaseFinetuning` callback is incompatible with any `LightningModule` that implements the `configure_model` hook. The current callback sequence is `Callback.setup` -> `LightningModule.configure_model` -> `LightningModule.configure_optimizers` -> `Callback.on_fit_start`. However, `BaseFinetuning` calls `freeze_before_training` in `setup`, at which point the modules created in `configure_model` have not been instantiated yet.
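The ordering problem can be sketched without Lightning at all. The classes below are simplified, hypothetical stand-ins for `LightningModule` and `BaseFinetuning` (not the real implementations), and `fit` mirrors the hook sequence described above:

```python
# A minimal, dependency-free sketch of the hook order described above.
# FakeModule / FakeFinetuning are simplified stand-ins, not Lightning classes.

calls = []

class FakeModule:
    backbone = None

    def configure_model(self):
        calls.append("configure_model")
        self.backbone = object()  # stands in for models.resnet18()

class FakeFinetuning:
    def setup(self, module, stage="fit"):
        calls.append("setup")
        # BaseFinetuning calls freeze_before_training here, but the
        # backbone has not been created yet:
        assert module.backbone is None

    def on_fit_start(self, module):
        calls.append("on_fit_start")
        # By this hook, configure_model has run, so freezing would work.
        assert module.backbone is not None

def fit(module, callback):
    # Mirrors the Trainer's hook sequence from the bug description.
    callback.setup(module)
    module.configure_model()
    callback.on_fit_start(module)

fit(FakeModule(), FakeFinetuning())
```

The asserts show why any freezing logic placed in `setup` cannot see lazily created submodules, while the same logic in `on_fit_start` can.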
#### What version are you seeing the problem on?
v2.1
#### How to reproduce the bug
```python
import torch
from lightning import LightningModule
from torchvision import models


class MyModel(LightningModule):
    def configure_model(self):
        # backbone is created here, after Callback.setup has already run
        self.backbone = models.resnet18()

    def configure_optimizers(self):
        # SGD requires the parameters to optimize as its first argument
        return torch.optim.SGD(self.parameters(), lr=1e-3)
```
#### Error messages and logs

```
# Error messages and logs here please
```
#### Environment

Current environment:

```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0):
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
```
#### More info
No response