Description
Distributed processing with Hydra in a single-node, multi-GPU setting, as mentioned here.
- Explain PyTorch's distributed processing/training.
- Demonstrate the various distributed communication primitives with simple examples.
- Incorporate Hydra into PyTorch's distributed processing.
- Use multirun to launch multiple processes.
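As a rough illustration of the second point, the sketch below (not from this issue; names and the port number are my own choices) exercises one communication primitive, `all_reduce`, on a single node using the CPU-only `gloo` backend so it runs without GPUs:

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def worker(rank: int, world_size: int) -> None:
    # Rendezvous settings; the port is an arbitrary free port.
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29501"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
    # Each rank contributes a tensor holding its own rank id; after
    # all_reduce every rank holds the sum 0 + 1 + ... + (world_size - 1).
    t = torch.tensor([float(rank)])
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    assert t.item() == sum(range(world_size))
    dist.destroy_process_group()


if __name__ == "__main__":
    # Spawn one process per rank; mp.spawn passes the rank as the first argument.
    mp.spawn(worker, args=(2,), nprocs=2)
```

The same structure extends to other primitives (`broadcast`, `all_gather`, `reduce_scatter`) by swapping the collective call inside `worker`.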
This will serve as an introductory example for #38.