Inability to Utilize Specific CUDA Devices for Training in Raindrop Model #304
Comments
Hi there 👋, thank you so much for your attention to PyPOTS! You can follow me on GitHub to receive the latest news about PyPOTS. If you find PyPOTS helpful to your work, please star ⭐️ this repository. Your star is your recognition, which helps more people notice PyPOTS and grows the PyPOTS community; it is definitely a kind of contribution. I have received your message and will respond ASAP. Thank you for your patience! 😃 Best,
Hi Jiaying @islaxu, this bug has been fixed since the problem in issue #306 was solved. I've also tested it on our server with 8 GPUs by running the command.
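The exact command and the release that contains the fix are not named in this thread, so the check below is only a sketch for readers who want to confirm their installed version before retrying (pip install --upgrade pypots pulls the latest release):

import importlib.metadata

# Print the installed PyPOTS version to confirm it postdates the fix for #306.
print("PyPOTS version:", importlib.metadata.version("pypots"))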
Thank you for your help! 😀
1. System Info
Python Version: 3.9
PyTorch Version: 2.1.0
CUDA Version: 12.1
GPU Model: NVIDIA RTX 4090
2. Information
3. Reproduction
from pypots.classification import Raindrop

# Request training on two specific GPUs by passing a device list.
device_list = ["cuda:3", "cuda:4"]
raindrop = Raindrop(
    n_steps=resampled_data.shape[1],
    n_features=resampled_data.shape[2],
    n_classes=2,
    n_layers=2,
    # …… (remaining hyperparameters omitted in the original report)
    device=device_list,
)
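Before passing specific device IDs, it can help to confirm that PyTorch actually sees them; the diagnostic below is a sketch added for illustration and is not part of the original report:

import torch

# "cuda:3" and "cuda:4" are only valid if at least five devices are visible
# to this process (indices can be remapped by CUDA_VISIBLE_DEVICES).
print("CUDA available:", torch.cuda.is_available())
for i in range(torch.cuda.device_count()):
    print(f"cuda:{i} -> {torch.cuda.get_device_name(i)}")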
4. Expected behavior
Training should run on the CUDA devices specified in device_list (cuda:3 and cuda:4). My workaround was to add the following before building the model:

import torch

specific_device = torch.device("cuda:2")  # pin a specific GPU as a workaround
torch.cuda.set_device(specific_device)
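Putting the workaround together with the model construction, a minimal sketch (assuming Raindrop also accepts a single device string, which is not confirmed in this thread) would look like:

import torch
from pypots.classification import Raindrop

# Pin the process to one specific GPU before building the model.
specific_device = torch.device("cuda:2")
torch.cuda.set_device(specific_device)

raindrop = Raindrop(
    n_steps=resampled_data.shape[1],
    n_features=resampled_data.shape[2],
    n_classes=2,
    n_layers=2,
    # …… remaining hyperparameters as in the reproduction snippet above
    device="cuda:2",
)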