no_cuda does not take effect in non distributed environment (huggingface#23795)

Signed-off-by: Wang, Yi <yi.a.wang@intel.com>
sywangyi authored and novice03 committed Jun 23, 2023
1 parent 8c9106c commit ed00d80
Showing 1 changed file with 3 additions and 1 deletion.
src/transformers/training_args.py (3 additions & 1 deletion)
@@ -1684,7 +1684,9 @@ def _setup_devices(self) -> "torch.device":
                 )
             device = torch.device("mps")
             self._n_gpu = 1
-
+        elif self.no_cuda:
+            device = torch.device("cpu")
+            self._n_gpu = 0
         else:
             # if n_gpu is > 1 we'll use nn.DataParallel.
             # If you only want to use a specific subset of GPUs use `CUDA_VISIBLE_DEVICES=0`
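For context, a minimal usage sketch (not part of the commit) of the behavior this patch fixes, assuming a single-process, non-distributed run; the output directory name is just a placeholder:

# Per the commit title, no_cuda previously did not take effect in a
# non-distributed environment; with this patch, no_cuda=True should
# resolve the training device to CPU there as well.
from transformers import TrainingArguments

args = TrainingArguments(output_dir="tmp_out", no_cuda=True)  # "tmp_out" is a placeholder
print(args.device)  # accessing .device runs _setup_devices; expected: cpu
print(args.n_gpu)   # expected: 0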
