multi GPU training? #22
You can use `torch.nn.DataParallel` to train the model on multiple GPUs; see here. Specifically, to train the pose-guided person image generation task, modify the `__init__` function in pose_model.py and add:
self.net_G = torch.nn.DataParallel(self.net_G, device_ids=self.gpu_ids)
self.net_D = torch.nn.DataParallel(self.net_D, device_ids=self.gpu_ids)
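The wrapping step above can be sketched in isolation. This is a minimal, self-contained example, not the repository's actual `pose_model.py`: `ToyModel` and `gpu_ids` are stand-ins, and the real `self.net_G` / `self.net_D` are the project's generator and discriminator.

```python
import torch
import torch.nn as nn


class ToyModel(nn.Module):
    """Stand-in for the project's generator/discriminator networks."""

    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 4)

    def forward(self, x):
        return self.fc(x)


# Assumption: gpu_ids lists the devices to use, as self.gpu_ids does in the repo.
gpu_ids = list(range(torch.cuda.device_count()))

net_G = ToyModel()
net_D = ToyModel()

# DataParallel replicates the module on each device and scatters the batch
# along dim 0; with zero or one GPU the wrap is unnecessary, so skip it.
if len(gpu_ids) > 1:
    net_G = nn.DataParallel(net_G, device_ids=gpu_ids)
    net_D = nn.DataParallel(net_D, device_ids=gpu_ids)
```

Note that after wrapping, the original module is reachable as `net_G.module`, which matters if the rest of the code accesses attributes on the network directly.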
Currently, only the face animation model supports multi-GPU training.
@RenYurui Thank you very much!
Hi @RenYurui, nice work! It seems like all data is loaded only onto the first GPU in your code, as shown below:
I tried replacing the above with just .cuda(), but I am still unable to spread the batch data across multiple GPUs, and the first GPU runs out of memory when I use a larger batch size. Is it the case that your custom-built CUDA operations don't support multiple GPUs? Thanks,
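For context on the behavior described above: with `nn.DataParallel`, the input batch is expected to live on the primary device (`device_ids[0]`); the wrapper then scatters chunks of it to the other devices inside `forward()`. Some memory imbalance on the first GPU is therefore normal, but the whole batch staying on GPU 0 suggests the model itself was never wrapped, or that a custom CUDA op pins tensors to a fixed device. The sketch below illustrates the expected flow; the model and shapes are illustrative, and the multi-GPU branch only runs when more than one GPU is present.

```python
import torch
import torch.nn as nn

# Illustrative model; in the repo this would be the wrapped net_G.
model = nn.Linear(16, 2)

if torch.cuda.is_available() and torch.cuda.device_count() > 1:
    gpu_ids = list(range(torch.cuda.device_count()))
    model = nn.DataParallel(model, device_ids=gpu_ids).cuda()
    # .cuda() with no argument places the batch on the current device
    # (gpu_ids[0]); DataParallel scatters it across gpu_ids in forward().
    batch = torch.randn(8, 16).cuda()
else:
    # CPU fallback so the sketch runs anywhere.
    batch = torch.randn(8, 16)

out = model(batch)
```

Custom CUDA extensions must be device-agnostic (operate on the device of their input tensors rather than a hard-coded `cuda:0`) for this scatter to work; otherwise replicas on other GPUs will fail or silently route work back to the first device.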
I set gpu_ids to 2,3, but the program only runs on GPU 2. Could you please tell me whether the code supports multi-GPU training? Thank you!