instantiate_class should be recursive #13279
Comments
I think this is fair. Thoughts @mauvilsa? Would you like to work on this?
Thanks a lot for your reply! I realized a really basic recursive implementation of it:

```python
from pytorch_lightning.utilities.cli import instantiate_class


def rec_instantiate_class(config):
    # Recurse into lists and dicts, instantiating every dict that carries a class_path.
    if isinstance(config, list):
        return [rec_instantiate_class(cfg) for cfg in config]
    if not isinstance(config, dict):
        return config
    for k, v in config.items():
        config[k] = rec_instantiate_class(v)
    if config.get("class_path") is None:
        return config
    return instantiate_class((), config)
```
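For example, a nested config along these lines could then be instantiated in one call with the helper above (the class paths here are arbitrary placeholders, not from the original comment):

```python
config = {
    "blocks": [
        {"class_path": "torch.nn.Linear", "init_args": {"in_features": 8, "out_features": 4}},
        {"class_path": "torch.nn.ReLU"},
    ],
    "dropout": 0.1,  # plain values pass through unchanged
}

result = rec_instantiate_class(config)
# result["blocks"][0] is now an nn.Linear instance; result["dropout"] is still 0.1.
```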
The […] Do note that it would be up to the developer to know when it is okay to use this function. For example, someone might want to […]. Having said this, making […]
Oh, I was expecting this feature. It would be highly valuable for my data generator. @mauvilsa do you have any suggestions to establish training config? I know Hydra and OmegaConf are pretty handy, but I am afraid of mixing those in on top of Lightning CLI.
This issue is from long ago. Right now I don't even think it is worth extending `instantiate_class`. Regarding how to instantiate a previously trained model, I did come up with what I think is a proper solution. My proposal is in #18105. Unfortunately it hasn't received much attention. @pisarik I am not sure what you meant by "establish training config".
@mauvilsa Thank you for the info. You are right! I just tried to write a config with nested classes and it worked like a charm. Should this issue be closed then?
This ticket was originally about instantiating a model manually, not about training. So I would say that this issue shouldn't be closed yet.
Hi! Actually, yeah, the main issue is that reloading checkpoints when using dependency injection is in general not straightforward, because the hparams dict is only saved in the main LightningModule. So one has to manually reload the configuration that corresponds to the checkpoint, then instantiate the model, then load the checkpoint. The ideal situation would be to store the whole config in the checkpoint (not sure how easy and general that would be to implement, though). If this is not possible, it would be great to have a convenient way to re-instantiate the model and datamodule from a YAML config generated by the CLI, and I guess […]
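To make that manual workaround concrete, here is a rough sketch; the paths, the `MyModel` class, and the `module` layout of the config are illustrative assumptions rather than an official API:

```python
import torch
import yaml
from pytorch_lightning.utilities.cli import instantiate_class  # moved to pytorch_lightning.cli in newer releases

from models import MyModel  # hypothetical LightningModule from the project

# 1. Reload the config that the CLI saved for the run (path is illustrative).
with open("lightning_logs/version_0/config.yaml") as f:
    config = yaml.safe_load(f)

# 2. Re-instantiate the model. With dependency injection, the saved init args
#    still contain class_path dicts, so each injected submodule has to be
#    built by hand before the LightningModule itself.
submodules = {name: instantiate_class((), sub) for name, sub in config["model"]["module"].items()}
model = MyModel(module=submodules)

# 3. Load the checkpoint weights into the rebuilt model.
ckpt = torch.load("lightning_logs/version_0/checkpoints/last.ckpt", map_location="cpu")
model.load_state_dict(ckpt["state_dict"])
```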
🐛 Bug
I have a `LightningModule` that takes an `nn.Module` as argument. I use Lightning CLI to instantiate my model, however when I instantiate the model manually with `instantiate_class`, the dictionary containing the `module` param of my model is not instantiated.
To Reproduce
Implementation of the (dummy) networks, file `models.py`:
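A minimal sketch of what such a file could look like (the class names `Encoder`, `Decoder`, `MyModel` and the `module` argument are illustrative assumptions, not the original code):

```python
# models.py -- illustrative dummy networks and a LightningModule that receives
# its sub-modules by dependency injection.
import pytorch_lightning as pl
from torch import nn


class Encoder(nn.Module):
    def __init__(self, hidden_dim: int = 16):
        super().__init__()
        self.layer = nn.Linear(8, hidden_dim)

    def forward(self, x):
        return self.layer(x)


class Decoder(nn.Module):
    def __init__(self, hidden_dim: int = 16):
        super().__init__()
        self.layer = nn.Linear(hidden_dim, 8)

    def forward(self, x):
        return self.layer(x)


class MyModel(pl.LightningModule):
    def __init__(self, module: dict):
        # `module` is expected to be a dict of instantiated nn.Modules.
        super().__init__()
        self.encoder = module["encoder"]
        self.decoder = module["decoder"]

    def forward(self, x):
        return self.decoder(self.encoder(x))
```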
Config file (file `config.yaml`):
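An illustrative config with nested `class_path` entries, matching the sketch above rather than the original file:

```yaml
model:
  module:
    encoder:
      class_path: models.Encoder
      init_args:
        hidden_dim: 16
    decoder:
      class_path: models.Decoder
      init_args:
        hidden_dim: 16
```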
Main script (file `main.py`):
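And an illustrative manual instantiation with `instantiate_class`, again a sketch under the same naming assumptions:

```python
# main.py -- instantiate the model by hand from the YAML config instead of
# going through LightningCLI.
import yaml
from pytorch_lightning.utilities.cli import instantiate_class  # pytorch_lightning.cli in newer releases

with open("config.yaml") as f:
    config = yaml.safe_load(f)

# instantiate_class builds MyModel itself, but it does not recurse into the
# nested class_path dicts inside init_args, so the submodules stay as dicts.
model = instantiate_class((), {"class_path": "models.MyModel", "init_args": config["model"]})
print(model.encoder)
print(model.decoder)
```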
It outputs the nested `class_path`/`init_args` entries as plain dictionaries, whereas `encoder` and `decoder` should be instantiated.
Environment
- GPU:
- available: False
- version: None
- numpy: 1.21.5
- pyTorch_debug: False
- pyTorch_version: 1.11.0
- pytorch-lightning: 1.5.8
- tqdm: 4.64.0
- OS: Linux
- architecture:
  - 64bit
  - ELF
- processor: x86_64
- python: 3.10.4
- version: #49-Ubuntu SMP Wed May 18 13:28:06 UTC 2022
cc @Borda @carmocca @mauvilsa