Add warning for Turing GPUs and CUDA <= 9000 (pytorch#21468)
Summary:
Turing GPUs (compute capability 7.5) require CUDA10 to work properly.
We've seen some issues for these GPUs using PyTorch binaries with CUDA9 or older:
[Discussion Board pytorch#1](https://discuss.pytorch.org/t/cudnn-status-execution-failed-error/38575)
[Discussion Board pytorch#2](https://discuss.pytorch.org/t/cublas-runtime-error-on-gpu-running-but-works-on-cpu/46545/6)

Tested using CUDA9 with an RTX 2080 Ti.
Pull Request resolved: pytorch#21468

Differential Revision: D15696170

Pulled By: ezyang

fbshipit-source-id: ed43f4e4948d3f97ec8e7d7952110cbbfeafef2a
ptrblck authored and facebook-github-bot committed Jun 7, 2019
1 parent 63d4bbb commit bad6701
Showing 1 changed file with 5 additions and 2 deletions.
torch/cuda/__init__.py:

```diff
@@ -110,8 +110,8 @@ def _check_driver():
 
 def _check_capability():
     incorrect_binary_warn = """
-    Found GPU%d %s which requires CUDA_VERSION >= %d for
-    optimal performance and fast startup time, but your PyTorch was compiled
+    Found GPU%d %s which requires CUDA_VERSION >= %d to
+    work properly, but your PyTorch was compiled
     with CUDA_VERSION %d. Please install the correct PyTorch binary
     using instructions from https://pytorch.org
     """
@@ -126,9 +126,12 @@ def _check_capability():
     for d in range(device_count()):
         capability = get_device_capability(d)
         major = capability[0]
+        minor = capability[1]
         name = get_device_name(d)
         if capability == (3, 0) or major < 3:
             warnings.warn(old_gpu_warn % (d, name, major, capability[1]))
+        elif CUDA_VERSION <= 9000 and major >= 7 and minor >= 5:
+            warnings.warn(incorrect_binary_warn % (d, name, 10000, CUDA_VERSION))
 
 
 def _lazy_call(callable):
```
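The condition the patch adds can be illustrated in isolation. Below is a minimal, hypothetical sketch of the same check: `should_warn_incorrect_binary` is not a real PyTorch function, and CUDA versions are encoded as integers (`major * 1000 + minor * 10`), matching the `CUDA_VERSION` constant compared against `9000` in the diff.

```python
# Simplified, standalone sketch of the check introduced by this commit.
# The real code lives in torch/cuda/__init__.py and queries each device
# via get_device_capability(); here both inputs are passed explicitly.

CUDA9 = 9000    # CUDA version encoded as major * 1000 + minor * 10
CUDA10 = 10000

def should_warn_incorrect_binary(cuda_version, capability):
    """Return True when a Turing GPU (compute capability 7.5) is paired
    with a PyTorch binary built against CUDA 9.0 or older."""
    major, minor = capability
    return cuda_version <= CUDA9 and major >= 7 and minor >= 5

# An RTX 2080 Ti (capability 7.5) with a CUDA9 binary triggers the warning:
print(should_warn_incorrect_binary(CUDA9, (7, 5)))    # True
# The same GPU with a CUDA10 binary does not:
print(should_warn_incorrect_binary(CUDA10, (7, 5)))   # False
```

Note that the check warns rather than raises, so existing setups keep running; it only surfaces the likely cause of the cuDNN/cuBLAS runtime failures reported in the discussion threads above.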