
Commit 4d030c4

Fix ModelCheckpoint callback for no loggers case (#18867)
1 parent 0843041 commit 4d030c4

File tree

2 files changed: +3 lines added, 0 lines deleted


src/lightning/pytorch/CHANGELOG.md

Lines changed: 2 additions & 0 deletions
@@ -36,6 +36,8 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Fixed an issue when `BatchSizeFinder` `steps_per_trial` parameter ends up defining how many validation batches to run during the entire training ([#18394](https://github.com/Lightning-AI/lightning/issues/18394))


+- Fixed an issue saving the `last.ckpt` file when using `ModelCheckpoint` on a remote filesystem and no logger is used ([#18867](https://github.com/Lightning-AI/lightning/issues/18867))
+

 ## [2.1.0] - 2023-10-11

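For context, a configuration along the lines of the sketch below is the kind of setup the changelog entry describes: checkpoints resolved onto a remote, fsspec-backed path while no logger is attached. This sketch is not part of the commit; the bucket name is a placeholder and the exact reproduction details are in issue #18867.

import lightning.pytorch as pl
from lightning.pytorch.callbacks import ModelCheckpoint

# Hypothetical setup: no logger and no explicit `dirpath`, so the checkpoint
# directory is only resolved from the trainer's (remote) root dir at setup time.
trainer = pl.Trainer(
    default_root_dir="s3://my-bucket/runs",   # placeholder remote location
    logger=False,
    callbacks=[ModelCheckpoint(save_last=True)],
    max_epochs=1,
)
# trainer.fit(model)  # `model` would be a user-defined LightningModule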

src/lightning/pytorch/callbacks/model_checkpoint.py

Lines changed: 1 addition & 0 deletions
@@ -266,6 +266,7 @@ def setup(self, trainer: "pl.Trainer", pl_module: "pl.LightningModule", stage: s
         dirpath = self.__resolve_ckpt_dir(trainer)
         dirpath = trainer.strategy.broadcast(dirpath)
         self.dirpath = dirpath
+        self._fs = get_filesystem(self.dirpath or "")
         if trainer.is_global_zero and stage == "fit":
             self.__warn_if_dir_not_empty(self.dirpath)
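The single added line refreshes the callback's filesystem handle from the fully resolved `dirpath`: `__resolve_ckpt_dir` may only settle on the final checkpoint directory inside `setup()` (for example when no logger supplies one), so re-deriving `self._fs` here ensures later writes such as `last.ckpt` go through the matching fsspec filesystem. A minimal sketch of what `get_filesystem` resolves for different paths follows; it assumes the `lightning.fabric.utilities.cloud_io` helper already used by this file, an installed s3fs for the remote case, and a hypothetical bucket name.

from lightning.fabric.utilities.cloud_io import get_filesystem

# `get_filesystem` maps a path or URL to an fsspec filesystem implementation.
local_fs = get_filesystem("")                        # local filesystem for plain paths
remote_fs = get_filesystem("s3://my-bucket/ckpts")   # S3-backed filesystem (requires s3fs)

# A handle created before `dirpath` is resolved in `setup()` can therefore be
# the local one even though checkpoints end up on S3; re-deriving it from the
# resolved `dirpath`, as the added line does, keeps the two in sync.
print(type(local_fs).__name__, type(remote_fs).__name__)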
