TensorRT 8.6 #1848

Closed
johnnynunez opened this issue Apr 21, 2023 · 3 comments
Assignees: gs-olive
Labels: feature request (New feature or request)

Comments


johnnynunez commented Apr 21, 2023

When will TensorRT 8.6 be added? On new GPUs that require CUDA 12, version 8.5 does not work.
https://github.com/NVIDIA/TensorRT/releases/tag/v8.6.0

johnnynunez added the feature request (New feature or request) label on Apr 21, 2023
gs-olive self-assigned this on Apr 21, 2023
@gs-olive (Collaborator)

We are working to switch our main branch over to TRT 8.6 now (PR #1852), and our upcoming release 1.4 will be based on TRT 8.6. Switching to CUDA 12 is not yet planned, since our current Torch distribution dependency does not have a CUDA 12-compatible build. However, you can build Torch-TensorRT from source with your own Torch/CUDA versions by replacing the libtorch, libtorch_pre_cxx11_abi, and cuda sections in the WORKSPACE file and rebuilding:

TensorRT/WORKSPACE, lines 40 to 70 at commit 1d78f43:

# CUDA should be installed on the system locally
new_local_repository(
    name = "cuda",
    build_file = "@//third_party/cuda:BUILD",
    path = "/usr/local/cuda-11.7/",
)

new_local_repository(
    name = "cublas",
    build_file = "@//third_party/cublas:BUILD",
    path = "/usr",
)

#############################################################################################################
# Tarballs and fetched dependencies (default - use in cases when building from precompiled bin and tarballs)
#############################################################################################################

http_archive(
    name = "libtorch",
    build_file = "@//third_party/libtorch:BUILD",
    sha256 = "7c4b8754830fef23ec19c5eaf414794cee9597b435df055f5c1d0471d3e81568",
    strip_prefix = "libtorch",
    urls = ["https://download.pytorch.org/libtorch/nightly/cu117/libtorch-cxx11-abi-shared-with-deps-2.1.0.dev20230314%2Bcu117.zip"],
)

http_archive(
    name = "libtorch_pre_cxx11_abi",
    build_file = "@//third_party/libtorch:BUILD",
    sha256 = "f1e64a75dd12d0ba4c8c1f61947299e0a9c50684dff64f0cfbf355aa7a13e8cf",
    strip_prefix = "libtorch",
    urls = ["https://download.pytorch.org/libtorch/nightly/cu117/libtorch-shared-with-deps-2.1.0.dev20230314%2Bcu117.zip"],
)
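
For illustration only, the edited sections might look roughly like the sketch below. It assumes CUDA 12.1 is installed under /usr/local/cuda-12.1 and that a matching cu121 libtorch nightly tarball is available; the tarball URLs are examples, and the sha256 values should be replaced with the checksums of the archives you actually download.

# Hypothetical WORKSPACE edits for a CUDA 12 build (paths and URLs are examples, not verified).
# http_archive is already loaded near the top of the real WORKSPACE:
# load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

new_local_repository(
    name = "cuda",
    build_file = "@//third_party/cuda:BUILD",
    path = "/usr/local/cuda-12.1/",  # point at the locally installed CUDA 12 toolkit
)

http_archive(
    name = "libtorch",
    build_file = "@//third_party/libtorch:BUILD",
    strip_prefix = "libtorch",
    # sha256 = "<checksum of the cxx11-abi tarball you actually download>",
    urls = ["https://download.pytorch.org/libtorch/nightly/cu121/libtorch-cxx11-abi-shared-with-deps-latest.zip"],  # example cu121 nightly URL
)

http_archive(
    name = "libtorch_pre_cxx11_abi",
    build_file = "@//third_party/libtorch:BUILD",
    strip_prefix = "libtorch",
    # sha256 = "<checksum of the pre-cxx11-abi tarball you actually download>",
    urls = ["https://download.pytorch.org/libtorch/nightly/cu121/libtorch-shared-with-deps-latest.zip"],  # example cu121 nightly URL
)

After editing the WORKSPACE, rebuild Torch-TensorRT from source so it links against the new Torch/CUDA versions.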

@johnnynunez (Author)

Yes, I have seen that they are working on CUDA 12 on the main branch. In fact, they will drop CUDA 11.7 and add CUDA 12.1:
pytorch/pytorch#98832
pytorch/pytorch#98986
pytorch/pytorch#98398

@narendasan (Collaborator)

We now support TRT 8.6, so closing this issue.
