LightningModule self.log with add_dataloader_idx doesn't properly reduce the metric across dataloaders #11126

Description

@tchaton

🐛 Bug

To Reproduce

The current behavior of add_dataloader_idx seems quite confusing to me. As a user, I wouldn't necessarily expect a value logged with add_dataloader_idx=False to be reduced across all dataloaders and then added to both results objects, which is what happens in the test below.

from pytorch_lightning import Trainer
# BoringModel lives in the Lightning repo's test helpers; the exact import
# path may differ between versions.
from tests.helpers import BoringModel


def test_multiple_dataloaders_logging(tmpdir):
    class TestModel(BoringModel):
        def validation_step(self, batch, batch_idx, dataloader_idx):
            # Without the dataloader suffix, the value is reduced across dataloaders.
            self.log("value_1", dataloader_idx, add_dataloader_idx=False)
            # With the suffix, each dataloader keeps its own entry.
            self.log("value_2", dataloader_idx, add_dataloader_idx=True)

        def val_dataloader(self):
            return [self.train_dataloader(), self.train_dataloader()]

    model = TestModel()
    model.validation_epoch_end = None  # skip the epoch-end hook
    trainer = Trainer(default_root_dir=tmpdir)
    results = trainer.validate(model)
    # "value_1" is averaged across both dataloaders ((0 + 1) / 2 = 0.5) and
    # duplicated in both result dicts; "value_2" stays per-dataloader.
    assert results == [
        {"value_2/dataloader_idx_0": 0.0, "value_1": 0.5},
        {"value_2/dataloader_idx_1": 1.0, "value_1": 0.5},
    ]
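
For reference, a minimal sketch of the workaround this behavior implies: with add_dataloader_idx=False, the user has to embed the dataloader index in the metric name themselves to keep values from being mixed across dataloaders. The "value/dl_{idx}" naming scheme below is made up for illustration, not Lightning API.

class WorkaroundModel(BoringModel):
    def validation_step(self, batch, batch_idx, dataloader_idx):
        # A unique name per dataloader, so nothing is reduced across them.
        self.log(f"value/dl_{dataloader_idx}", dataloader_idx, add_dataloader_idx=False)

    def val_dataloader(self):
        return [self.train_dataloader(), self.train_dataloader()]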

Expected behavior

Environment

  • PyTorch Lightning Version (e.g., 1.5.0):
  • PyTorch Version (e.g., 1.10):
  • Python version (e.g., 3.9):
  • OS (e.g., Linux):
  • CUDA/cuDNN version:
  • GPU models and configuration:
  • How you installed PyTorch (conda, pip, source):
  • If compiling from source, the output of torch.__config__.show():
  • Any other relevant information:
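
Most of the PyTorch-related fields above can be collected automatically; a quick sketch using the collect_env utility that ships with PyTorch:

# Prints PyTorch, CUDA/cuDNN, OS, and Python details for pasting above.
import torch.utils.collect_env

torch.utils.collect_env.main()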

Additional context

cc @tchaton

    Labels

    bug (Something isn't working), priority: 1 (Medium priority task)
