Cannot Use CUDA When Running K-fold Cross-Validation (bug in v0.0.6) #25

@LYH-YF

Description

When using a GPU to train any model with k-fold cross-validation, the first fold runs fine, but from the second fold onward training becomes very slow. In fact, the GPU is no longer being used at all.
The cause is in the checkpoint-saving code. When a checkpoint is saved, all parameters in the config object are written to a JSON file. The problem is that config['device'] = torch.device('cuda') cannot be serialized to JSON, so the code deletes this key directly from the config object instead. When the next fold runs, config['device'] no longer exists, and the model is not placed on the GPU.
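A minimal sketch of the failure mode and a possible fix. The function names and the `Device` stand-in (used here in place of `torch.device` so the example is self-contained) are illustrative, not the library's actual code; the point is that the buggy version mutates the live config dict, while the fix serializes a sanitized copy:

```python
import json

class Device:
    """Stand-in for torch.device, which json.dump cannot serialize."""
    def __init__(self, name):
        self.name = name
    def __str__(self):
        return self.name

def save_config_buggy(config):
    # Buggy approach: delete the unserializable key from the *live*
    # config dict before dumping. The next fold then finds no
    # config['device'] and silently trains on CPU.
    del config["device"]
    return json.dumps(config)

def save_config_fixed(config):
    # Possible fix: dump a sanitized copy, converting the device to a
    # string; the original config keeps its 'device' entry for the
    # remaining folds.
    serializable = {k: (str(v) if isinstance(v, Device) else v)
                    for k, v in config.items()}
    return json.dumps(serializable)

config = {"lr": 0.001, "device": Device("cuda")}
save_config_fixed(config)
assert "device" in config  # later folds can still move the model to the GPU
```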

Labels: bug