
load_from_checkpoint does not work as expected #17304

@liujuncn

Bug description

I want to load a trained checkpoint onto the GPU in Colab, but load_from_checkpoint appears to create two copies of the weights, and the model's device ends up as "cpu".
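For reference, a minimal sketch of the loading pattern described above; my_project, LitModel, and the checkpoint path are placeholder names, not from the actual run:

```python
from my_project import LitModel  # placeholder LightningModule subclass

# Load a trained checkpoint with the default arguments
model = LitModel.load_from_checkpoint("checkpoints/last.ckpt")
print(model.device)  # reports "cpu" even though a GPU is available
```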

[Screenshot from 2023-04-08 18-37-50]

Memory is occupied on both the host and the GPU.

[Screenshot from 2023-04-08 18-38-07]

If I use:
model.to(torch.device('cuda'))

then a subsequent model prediction reports the following error:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)
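For context on the error itself: as far as I understand, it means an index tensor and an embedding weight ended up on different devices. One common cause, sketched below with made-up names, is a plain tensor attribute that model.to(...) does not move; registering it as a buffer avoids that:

```python
import torch
import torch.nn as nn

class Example(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(10, 4)
        # A plain tensor attribute would stay on the CPU after .to("cuda"):
        #   self.pos = torch.arange(10)
        # A registered buffer moves together with the module:
        self.register_buffer("pos", torch.arange(10))

    def forward(self):
        # The embedding lookup does an index_select under the hood, so
        # self.emb.weight and self.pos must live on the same device.
        return self.emb(self.pos)

m = Example().to("cuda") if torch.cuda.is_available() else Example()
print(m().shape)  # torch.Size([10, 4]); no device-mismatch error
```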

How to solve this problem?
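For example, is passing map_location the intended way to load directly onto the GPU? load_from_checkpoint documents a map_location argument that is passed through to checkpoint loading; a sketch of what I would expect to work (class and path are placeholders):

```python
import torch
from my_project import LitModel  # placeholder LightningModule subclass

# Load the weights directly onto the GPU instead of moving them afterwards
model = LitModel.load_from_checkpoint(
    "checkpoints/last.ckpt",
    map_location=torch.device("cuda"),
)
print(model.device)
```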

What version are you seeing the problem on?

2.0+

How to reproduce the bug

No response

Error messages and logs

No response

Environment

No response

More info

No response

Metadata

Labels: bug (Something isn't working), needs triage (Waiting to be triaged by maintainers)
