Commit bad6701

ptrblck authored and facebook-github-bot committed
Add warning for Turing GPUs and CUDA <= 9000 (pytorch#21468)
Summary: Turing GPUs (compute capability 7.5) require CUDA 10 to work properly. We've seen some issues with these GPUs when using PyTorch binaries built with CUDA 9 or older: [Discussion Board #1](https://discuss.pytorch.org/t/cudnn-status-execution-failed-error/38575) [Discussion Board #2](https://discuss.pytorch.org/t/cublas-runtime-error-on-gpu-running-but-works-on-cpu/46545/6) Tested using CUDA 9 with an RTX 2080 Ti.

Pull Request resolved: pytorch#21468
Differential Revision: D15696170
Pulled By: ezyang
fbshipit-source-id: ed43f4e4948d3f97ec8e7d7952110cbbfeafef2a
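
For context, a minimal sketch (not part of this commit; the printed message and variable names are illustrative) of how one might run the equivalent check by hand on a local install, using the public torch.cuda helpers and the internal torch._C._cuda_getCompiledVersion() that the patched code relies on:

import torch

if torch.cuda.is_available():
    # Compiled CUDA version as an integer, e.g. 9000 for CUDA 9.0, 10000 for CUDA 10.0
    compiled_cuda = torch._C._cuda_getCompiledVersion()
    for d in range(torch.cuda.device_count()):
        major, minor = torch.cuda.get_device_capability(d)
        name = torch.cuda.get_device_name(d)
        # Same condition the patch adds: Turing (7.5) paired with a CUDA <= 9.0 binary
        if compiled_cuda <= 9000 and major >= 7 and minor >= 5:
            print("GPU%d %s (capability %d.%d) needs a CUDA >= 10.0 build, "
                  "but this PyTorch was compiled with CUDA_VERSION %d"
                  % (d, name, major, minor, compiled_cuda))
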
1 parent 63d4bbb commit bad6701

File tree

1 file changed: +5 -2 lines changed

torch/cuda/__init__.py

Lines changed: 5 additions & 2 deletions
@@ -110,8 +110,8 @@ def _check_driver():
 
 def _check_capability():
     incorrect_binary_warn = """
-    Found GPU%d %s which requires CUDA_VERSION >= %d for
-    optimal performance and fast startup time, but your PyTorch was compiled
+    Found GPU%d %s which requires CUDA_VERSION >= %d to
+    work properly, but your PyTorch was compiled
     with CUDA_VERSION %d. Please install the correct PyTorch binary
     using instructions from https://pytorch.org
     """
@@ -126,9 +126,12 @@ def _check_capability():
     for d in range(device_count()):
         capability = get_device_capability(d)
         major = capability[0]
+        minor = capability[1]
         name = get_device_name(d)
         if capability == (3, 0) or major < 3:
             warnings.warn(old_gpu_warn % (d, name, major, capability[1]))
+        elif CUDA_VERSION <= 9000 and major >= 7 and minor >= 5:
+            warnings.warn(incorrect_binary_warn % (d, name, 10000, CUDA_VERSION))
 
 
 def _lazy_call(callable):
