Status: Open
Labels: bug (Something isn't working), priority: 1 (Medium priority task)
Description
🐛 Bug
To Reproduce
The current behavior of `add_dataloader_idx` seems quite confusing to me. As a user, I would not expect a value logged with `add_dataloader_idx=False` to be reduced across all dataloaders and added to both results objects.
```python
from pytorch_lightning import Trainer
# BoringModel lives in the Lightning test helpers
from tests.helpers.boring_model import BoringModel


def test_multiple_dataloaders_logging(tmpdir):
    class TestModel(BoringModel):
        def validation_step(self, batch, batch_idx, dataloader_idx):
            self.log("value_1", dataloader_idx, add_dataloader_idx=False)
            self.log("value_2", dataloader_idx, add_dataloader_idx=True)

        def val_dataloader(self):
            return [self.train_dataloader(), self.train_dataloader()]

    model = TestModel()
    model.validation_epoch_end = None
    trainer = Trainer(default_root_dir=tmpdir)
    results = trainer.validate(model)
    assert results == [
        {"value_2/dataloader_idx_0": 0.0, "value_1": 0.5},
        {"value_2/dataloader_idx_1": 1.0, "value_1": 0.5},
    ]
```
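To make the surprising part concrete, here is a minimal pure-Python sketch of the *observed* behavior (this models what the assertion above shows, not Lightning's actual implementation): keys logged with `add_dataloader_idx=True` get a per-dataloader suffix and keep their per-dataloader value, while a key logged with `add_dataloader_idx=False` is mean-reduced across all dataloaders and that single reduced value is copied into every results dict.

```python
from collections import defaultdict


def simulate_logging(num_dataloaders):
    """Illustrative model of the observed key/reduction behavior."""
    logged = defaultdict(list)
    for idx in range(num_dataloaders):
        # add_dataloader_idx=True: key is suffixed, value stays per-dataloader
        logged[f"value_2/dataloader_idx_{idx}"].append(float(idx))
        # add_dataloader_idx=False: one shared key collects every dataloader's value
        logged["value_1"].append(float(idx))

    # The shared key is mean-reduced across dataloaders, and the same
    # reduced value appears in every per-dataloader results dict.
    results = []
    for idx in range(num_dataloaders):
        entry = {
            f"value_2/dataloader_idx_{idx}": logged[f"value_2/dataloader_idx_{idx}"][0],
            "value_1": sum(logged["value_1"]) / len(logged["value_1"]),
        }
        results.append(entry)
    return results


print(simulate_logging(2))
# [{'value_2/dataloader_idx_0': 0.0, 'value_1': 0.5},
#  {'value_2/dataloader_idx_1': 1.0, 'value_1': 0.5}]
```

This is why `value_1` is `0.5` in both results dicts above: `(0 + 1) / 2`, rather than the per-dataloader value one might expect.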
Expected behavior
Environment
- PyTorch Lightning Version (e.g., 1.5.0):
- PyTorch Version (e.g., 1.10):
- Python version (e.g., 3.9):
- OS (e.g., Linux):
- CUDA/cuDNN version:
- GPU models and configuration:
- How you installed PyTorch (conda, pip, source):
- If compiling from source, the output of `torch.__config__.show()`:
- Any other relevant information:
Additional context
cc @tchaton