
configure_model is incompatible with the BaseFinetuning behavior when fitting #19658

Open
@GdoongMathew

Description


Bug description

Based on the current callback order, the BaseFinetuning callback will always be incompatible with any LightningModule that uses the configure_model method. The current hook sequence is Callback.setup -> LightningModule.configure_model -> LightningModule.configure_optimizers -> Callback.on_fit_start. However, BaseFinetuning calls freeze_before_training in setup, at which point the modules created in configure_model have not been instantiated yet.

What version are you seeing the problem on?

v2.1

How to reproduce the bug

import torch
from torchvision import models
from lightning import LightningModule

class MyModel(LightningModule):
    def configure_model(self):
        # The backbone is created lazily here, not in __init__, so it does
        # not exist yet when Callback.setup runs.
        self.backbone = models.resnet18()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=1e-3)

Error messages and logs

# Error messages and logs here please

Environment

Current environment
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0):
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):

More info

No response
