Description
Is your feature request related to a problem? Please describe.
I was trying to use multi_gpu_supervised_trainer in my existing workflow, which currently uses MONAI's SupervisedTrainer, but the switch was not straightforward because create_multi_gpu_supervised_trainer builds directly on Ignite's trainer class rather than on MONAI's own trainer. There are also no proper tutorials explaining how to use this multi-GPU trainer in more realistic workflows.
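For concreteness, this is roughly what the friction looks like on my side. It's a minimal sketch with a dummy network and data standing in for my real workflow, and it assumes the SupervisedTrainer and create_multi_gpu_supervised_trainer signatures from the version I'm using:

```python
import torch
from torch.utils.data import DataLoader
from monai.engines import SupervisedTrainer, create_multi_gpu_supervised_trainer

# dummy stand-ins for the real network / data / optimizer in my workflow
device = torch.device("cuda:0")
net = torch.nn.Linear(8, 2).to(device)
loss_function = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
train_ds = [{"image": torch.randn(8), "label": torch.tensor(1)} for _ in range(32)]
train_loader = DataLoader(train_ds, batch_size=4)

# existing single-GPU workflow: device, handlers, inferer, AMP, etc. are all
# configured through SupervisedTrainer's constructor
trainer = SupervisedTrainer(
    device=device,
    max_epochs=2,
    train_data_loader=train_loader,
    network=net,
    optimizer=optimizer,
    loss_function=loss_function,
)
trainer.run()

# multi-GPU path: this returns a plain Ignite Engine rather than a MONAI
# trainer, so the MONAI-specific configuration above doesn't carry over
multi_gpu_trainer = create_multi_gpu_supervised_trainer(net, optimizer, loss_function)
print(type(multi_gpu_trainer))  # an ignite.engine.Engine, not a SupervisedTrainer
```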
Describe the solution you'd like
Ideally, I'd expect SupervisedTrainer to support multi-GPU workloads without needing a separate class.
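Something along these lines is what I have in mind. This is purely a sketch of the API I would like, not existing behaviour: the network is wrapped for DistributedDataParallel by the user (or internally by SupervisedTrainer), and the rest of the single-GPU workflow stays unchanged.

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler
from monai.engines import SupervisedTrainer

# assumes one process per GPU on a single node, launched with
# torch.distributed.launch / torchrun, so rank == local rank
dist.init_process_group(backend="nccl", init_method="env://")
local_rank = dist.get_rank()
device = torch.device(f"cuda:{local_rank}")
torch.cuda.set_device(device)

# same dummy stand-ins as in the snippet above
net = torch.nn.Linear(8, 2).to(device)
loss_function = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
train_ds = [{"image": torch.randn(8), "label": torch.tensor(1)} for _ in range(32)]
train_loader = DataLoader(train_ds, batch_size=4, sampler=DistributedSampler(train_ds))

# what I'd like: pass the wrapped network straight to SupervisedTrainer and
# keep everything else identical to the single-GPU setup
net = DistributedDataParallel(net, device_ids=[local_rank])
trainer = SupervisedTrainer(
    device=device,
    max_epochs=2,
    train_data_loader=train_loader,
    network=net,
    optimizer=optimizer,
    loss_function=loss_function,
)
trainer.run()
```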
Describe alternatives you've considered
I am considering other PyTorch wrappers like Lightning or Catalyst. The tutorials seem to use different approaches for multi-GPU workloads, and I'd really like to see a default approach for this. Since MONAI uses Ignite as the base for its trainer implementation, I assumed that was the default approach, but it's not clear.
Additional context
What is the preferred approach in MONAI for multi-GPU training?
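For example, is wrapping the network in torch.nn.DataParallel and passing it to SupervisedTrainer, as in the sketch below, already the intended pattern, or is there a recommended DistributedDataParallel setup? This is only my guess at what might work, not something I found documented (it reuses net / optimizer / loss_function / train_loader from the snippets above):

```python
import torch
from monai.engines import SupervisedTrainer

# my guess at the simplest drop-in: DataParallel around the existing network,
# with everything else left as in the single-GPU workflow
device = torch.device("cuda:0")
net = torch.nn.DataParallel(net).to(device)

trainer = SupervisedTrainer(
    device=device,
    max_epochs=2,
    train_data_loader=train_loader,
    network=net,
    optimizer=optimizer,
    loss_function=loss_function,
)
trainer.run()
```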