Can create_multi_gpu_supervised_trainer inherit from monai's SupervisedTrainer instead of Ignite's create_supervised_trainer?  #5910

Closed
@chezhia

Description

Is your feature request related to a problem? Please describe.
I was trying to use create_multi_gpu_supervised_trainer in an existing workflow that currently uses MONAI's SupervisedTrainer, but the switch was not straightforward because create_multi_gpu_supervised_trainer builds directly on Ignite's create_supervised_trainer rather than on MONAI's trainer class. There are also no proper tutorials explaining how to use this multi-GPU helper in more realistic workflows.

Describe the solution you'd like
Ideally, I'd expect SupervisedTrainer to support multi-GPU workloads without needing a separate class, as in the sketch below.
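
For what it's worth, SupervisedTrainer already accepts any torch.nn.Module as its network, so single-process data parallelism seems possible today by wrapping the model before constructing the trainer. A minimal sketch, assuming a UNet and that all visible GPUs should be used (the network and its parameters are placeholders, not a recommendation):

```python
import torch
from monai.networks.nets import UNet

# Wrap the model in torch.nn.DataParallel before handing it to SupervisedTrainer;
# the trainer only sees an nn.Module, so no separate trainer class should be needed.
net = UNet(spatial_dims=3, in_channels=1, out_channels=2,
           channels=(16, 32, 64), strides=(2, 2))
net = torch.nn.DataParallel(net).to("cuda")
# ... then pass `network=net` to the existing SupervisedTrainer construction.
```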

Describe alternatives you've considered
I am considering other PyTorch wrappers such as Lightning or Catalyst. The MONAI tutorials seem to use different approaches for multi-GPU workloads, and I'd really like to see a single default approach. Since MONAI builds its trainer implementation on Ignite, I assumed that was the default approach, but it's not clear.

Additional context
What is the preferred approach to multi-GPU training in MONAI?
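
For reference, one common approach is plain PyTorch DistributedDataParallel combined with SupervisedTrainer and a DistributedSampler, launched with one process per GPU (e.g. via torchrun). Below is a minimal sketch; the dummy dataset, UNet configuration, loss, and hyperparameters are placeholders I made up for illustration, not a configuration recommended by MONAI:

```python
import os
import numpy as np
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel
from torch.utils.data import DataLoader, DistributedSampler
from monai.data import Dataset
from monai.engines import SupervisedTrainer
from monai.networks.nets import UNet


def main():
    # one process per GPU, e.g. `torchrun --nproc_per_node=<num_gpus> train_ddp.py`
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    device = torch.device(f"cuda:{local_rank}")
    torch.cuda.set_device(device)

    # dummy dict-style dataset; replace with the real data pipeline
    data = [
        {"image": np.random.rand(1, 32, 32, 32).astype(np.float32),
         "label": np.random.randint(0, 2, (32, 32, 32)).astype(np.int64)}
        for _ in range(8)
    ]
    train_ds = Dataset(data=data)
    sampler = DistributedSampler(train_ds)  # shards the data across ranks
    train_loader = DataLoader(train_ds, batch_size=2, sampler=sampler, num_workers=2)

    net = UNet(spatial_dims=3, in_channels=1, out_channels=2,
               channels=(16, 32, 64), strides=(2, 2)).to(device)
    net = DistributedDataParallel(net, device_ids=[local_rank])

    # the DDP-wrapped module is passed to SupervisedTrainer like any other network
    trainer = SupervisedTrainer(
        device=device,
        max_epochs=2,
        train_data_loader=train_loader,
        network=net,
        optimizer=torch.optim.Adam(net.parameters(), lr=1e-4),
        loss_function=torch.nn.CrossEntropyLoss(),
    )
    trainer.run()
    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```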
