Jetson workspace #1280

Merged 2 commits on Aug 17, 2022

70 changes: 9 additions & 61 deletions docsrc/tutorials/installation.rst
@@ -303,54 +303,13 @@ Environment Setup

To build natively on the aarch64-linux-gnu platform, configure the ``WORKSPACE`` with locally available dependencies.

- 1. Disable the rules with ``http_archive`` for x86_64 by commenting the following rules:

- .. code-block:: shell

- #http_archive(
- # name = "libtorch",
- # build_file = "@//third_party/libtorch:BUILD",
- # strip_prefix = "libtorch",
- # urls = ["https://download.pytorch.org/libtorch/cu102/libtorch-cxx11-abi-shared-with-deps-1.5.1.zip"],
- # sha256 = "cf0691493d05062fe3239cf76773bae4c5124f4b039050dbdd291c652af3ab2a"
- #)

- #http_archive(
- # name = "libtorch_pre_cxx11_abi",
- # build_file = "@//third_party/libtorch:BUILD",
- # strip_prefix = "libtorch",
- # sha256 = "818977576572eadaf62c80434a25afe44dbaa32ebda3a0919e389dcbe74f8656",
- # urls = ["https://download.pytorch.org/libtorch/cu102/libtorch-shared-with-deps-1.5.1.zip"],
- #)

- # Download these tarballs manually from the NVIDIA website
- # Either place them in the distdir directory in third_party and use the --distdir flag
- # or modify the urls to "file:///<PATH TO TARBALL>/<TARBALL NAME>.tar.gz

- #http_archive(
- # name = "cudnn",
- # urls = ["https://developer.nvidia.com/compute/machine-learning/cudnn/secure/8.0.1.13/10.2_20200626/cudnn-10.2-linux-x64-v8.0.1.13.tgz"],
- # build_file = "@//third_party/cudnn/archive:BUILD",
- # sha256 = "0c106ec84f199a0fbcf1199010166986da732f9b0907768c9ac5ea5b120772db",
- # strip_prefix = "cuda"
- #)

- #http_archive(
- # name = "tensorrt",
- # urls = ["https://developer.nvidia.com/compute/machine-learning/tensorrt/secure/7.1/tars/TensorRT-7.1.3.4.Ubuntu-18.04.x86_64-gnu.cuda-10.2.cudnn8.0.tar.gz"],
- # build_file = "@//third_party/tensorrt/archive:BUILD",
- # sha256 = "9205bed204e2ae7aafd2e01cce0f21309e281e18d5bfd7172ef8541771539d41",
- # strip_prefix = "TensorRT-7.1.3.4"
- #)

- NOTE: You may also need to configure the CUDA version to 10.2 by setting the path for the cuda new_local_repository

+ 1. Replace ``WORKSPACE`` with the corresponding WORKSPACE file in ``//toolchains/jp_workspaces``

2. Configure the correct paths to directory roots containing local dependencies in the ``new_local_repository`` rules:
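
Concretely, step 1 amounts to copying the matching template over the top-level ``WORKSPACE`` file. The docs do not spell out the command; one way to do it for Jetpack 5.0 (using the ``WORKSPACE.jp50`` file added in this change) is:

.. code-block:: shell

    cp toolchains/jp_workspaces/WORKSPACE.jp50 WORKSPACE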

NOTE: If you installed PyTorch using a pip package, the correct path is the path to the root of the python torch package.
- In the case that you installed with ``sudo pip install`` this will be ``/usr/local/lib/python3.6/dist-packages/torch``.
- In the case you installed with ``pip install --user`` this will be ``$HOME/.local/lib/python3.6/site-packages/torch``.
+ In the case that you installed with ``sudo pip install`` this will be ``/usr/local/lib/python3.8/dist-packages/torch``.
+ In the case you installed with ``pip install --user`` this will be ``$HOME/.local/lib/python3.8/site-packages/torch``.
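
If you are unsure which of these paths applies on your device, a quick check (not part of the original instructions, just a convenience) is to ask Python where the installed ``torch`` package lives:

.. code-block:: shell

    python3 -c "import torch, os; print(os.path.dirname(torch.__file__))"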

In the case you are using NVIDIA compiled pip packages, set the path for both libtorch sources to the same path. This is because unlike
PyTorch on x86_64, NVIDIA aarch64 PyTorch uses the CXX11-ABI. If you compiled from source using the pre_cxx11_abi and only would like to
@@ -360,27 +319,16 @@ use that library, set the paths to the same path but when you compile make sure

new_local_repository(
name = "libtorch",
path = "/usr/local/lib/python3.6/dist-packages/torch",
path = "/usr/local/lib/python3.8/dist-packages/torch",
build_file = "third_party/libtorch/BUILD"
)

new_local_repository(
name = "libtorch_pre_cxx11_abi",
path = "/usr/local/lib/python3.6/dist-packages/torch",
path = "/usr/local/lib/python3.8/dist-packages/torch",
build_file = "third_party/libtorch/BUILD"
)

- new_local_repository(
- name = "cudnn",
- path = "/usr/",
- build_file = "@//third_party/cudnn/local:BUILD"
- )

- new_local_repository(
- name = "tensorrt",
- path = "/usr/",
- build_file = "@//third_party/tensorrt/local:BUILD"
- )

Compile C++ Library and Compiler CLI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -389,19 +337,19 @@ Compile C++ Library and Compiler CLI

.. code-block:: shell

- --platforms //toolchains:jetpack_4.x
+ --platforms //toolchains:jetpack_x.x


Compile the Torch-TensorRT library with the following bazel command:

.. code-block:: shell

- bazel build //:libtorchtrt --platforms //toolchains:jetpack_4.6
+ bazel build //:libtorchtrt --platforms //toolchains:jetpack_5.0

Compile Python API
^^^^^^^^^^^^^^^^^^^^

- NOTE: Due to shifting dependencies locations between Jetpack 4.5 and Jetpack 4.6 there is now a flag for ``setup.py`` which sets the jetpack version (default: 4.6)
+ NOTE: Due to shifting dependency locations between Jetpack 4.5 and newer Jetpack versions, there is now a flag for ``setup.py`` which sets the Jetpack version (default: 5.0)

Compile the Python API using the following command from the ``//py`` directory:

@@ -411,4 +359,4 @@

If you have a build of PyTorch that uses the Pre-CXX11 ABI, drop the ``--use-cxx11-abi`` flag

- If you are building for Jetpack 4.5 add the ``--jetpack-version 4.5`` flag
+ If you are building for an older Jetpack release such as 4.5, add the matching ``--jetpack-version`` flag (for example ``--jetpack-version 4.5``)
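
Putting the flags above together, a typical invocation on a Jetpack 5.0 device with NVIDIA's PyTorch wheel would look roughly like the sketch below; treat it as illustrative, since the full command lives in the part of ``installation.rst`` not shown in this diff:

.. code-block:: shell

    cd py
    python3 setup.py install --use-cxx11-abi
    # For an older Jetpack, e.g.: python3 setup.py install --use-cxx11-abi --jetpack-version 4.5
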
17 changes: 13 additions & 4 deletions py/setup.py
@@ -72,12 +72,18 @@ def get_git_revision_short_hash() -> str:
elif version == "4.6":
JETPACK_VERSION = "4.6"
elif version == "5.0":
JETPACK_VERSION = "4.6"
JETPACK_VERSION = "5.0"

if not JETPACK_VERSION:
warnings.warn(
"Assuming jetpack version to be 4.6 or greater, if not use the --jetpack-version option"
"Assuming jetpack version to be 5.0, if not use the --jetpack-version option"
)
JETPACK_VERSION = "5.0"

if not CXX11_ABI:
warnings.warn(
"Jetson platform detected but did not see --use-cxx11-abi option, if using a pytorch distribution provided by NVIDIA include this flag"
)
JETPACK_VERSION = "4.6"


def which(program):
@@ -128,7 +134,10 @@ def build_libtorchtrt_pre_cxx11_abi(develop=True, use_dist_dir=True, cxx11_abi=F
print("Jetpack version: 4.5")
elif JETPACK_VERSION == "4.6":
cmd.append("--platforms=//toolchains:jetpack_4.6")
print("Jetpack version: >=4.6")
print("Jetpack version: 4.6")
elif JETPACK_VERSION == "5.0":
cmd.append("--platforms=//toolchains:jetpack_5.0")
print("Jetpack version: 5.0")

if CI_RELEASE:
cmd.append("--platforms=//toolchains:ci_rhel_x86_64_linux")
9 changes: 9 additions & 0 deletions toolchains/BUILD
@@ -26,6 +26,15 @@ platform(
    ],
)

+ platform(
+     name = "jetpack_5.0",
+     constraint_values = [
+         "@platforms//os:linux",
+         "@platforms//cpu:aarch64",
+         "@//toolchains/jetpack:4.6",
+     ],
+ )

platform(
    name = "ci_rhel_x86_64_linux",
    constraint_values = [
96 changes: 96 additions & 0 deletions toolchains/jp_workspaces/WORKSPACE.jp50
@@ -0,0 +1,96 @@
workspace(name = "Torch-TensorRT")

load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
name = "rules_python",
sha256 = "778197e26c5fbeb07ac2a2c5ae405b30f6cb7ad1f5510ea6fdac03bded96cc6f",
url = "https://github.com/bazelbuild/rules_python/releases/download/0.2.0/rules_python-0.2.0.tar.gz",
)

load("@rules_python//python:pip.bzl", "pip_install")

http_archive(
name = "rules_pkg",
sha256 = "038f1caa773a7e35b3663865ffb003169c6a71dc995e39bf4815792f385d837d",
urls = [
"https://mirror.bazel.build/github.com/bazelbuild/rules_pkg/releases/download/0.4.0/rules_pkg-0.4.0.tar.gz",
"https://github.com/bazelbuild/rules_pkg/releases/download/0.4.0/rules_pkg-0.4.0.tar.gz",
],
)

load("@rules_pkg//:deps.bzl", "rules_pkg_dependencies")

rules_pkg_dependencies()

git_repository(
name = "googletest",
commit = "703bd9caab50b139428cea1aaff9974ebee5742e",
remote = "https://github.com/google/googletest",
shallow_since = "1570114335 -0400",
)

# External dependency for torch_tensorrt if you already have precompiled binaries.
local_repository(
name = "torch_tensorrt",
path = "/opt/conda/lib/python3.8/site-packages/torch_tensorrt",
)

# CUDA should be installed on the system locally
new_local_repository(
name = "cuda",
build_file = "@//third_party/cuda:BUILD",
path = "/usr/local/cuda-11.4/",
)

new_local_repository(
name = "cublas",
build_file = "@//third_party/cublas:BUILD",
path = "/usr",
)

####################################################################################
# Locally installed dependencies (use in cases of custom dependencies or aarch64)
####################################################################################

# NOTE: In the case you are using just the pre-cxx11-abi path or just the cxx11 abi path
# with your local libtorch, just point deps at the same path to satisfy bazel.

# NOTE: NVIDIA's aarch64 PyTorch (python) wheel file uses the CXX11 ABI unlike PyTorch's standard
# x86_64 python distribution. If using NVIDIA's version just point to the root of the package
# for both versions here and do not use --config=pre-cxx11-abi

new_local_repository(
name = "libtorch",
path = "/usr/local/lib/python3.8/dist-packages/torch",
build_file = "third_party/libtorch/BUILD"
)

# NOTE: Unused on aarch64-jetson with NVIDIA provided PyTorch distribution
new_local_repository(
name = "libtorch_pre_cxx11_abi",
path = "/usr/local/lib/python3.8/dist-packages/torch",
build_file = "third_party/libtorch/BUILD"
)

new_local_repository(
name = "cudnn",
path = "/usr/",
build_file = "@//third_party/cudnn/local:BUILD"
)

new_local_repository(
name = "tensorrt",
path = "/usr/",
build_file = "@//third_party/tensorrt/local:BUILD"
)

#########################################################################
# Development Dependencies (optional - comment out on aarch64)
#########################################################################

pip_install(
name = "devtools_deps",
requirements = "//:requirements-dev.txt",
)
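
Since all of the ``new_local_repository`` rules above point at system locations, it can save a failed build to confirm those paths exist on the device before running bazel. The locations below are the usual JetPack 5.0 defaults assumed by this workspace file, not something the build verifies for you:

.. code-block:: shell

    ls /usr/local/cuda-11.4/bin/nvcc
    ls /usr/lib/aarch64-linux-gnu/ | grep -E "libnvinfer|libcudnn"
    python3 -c "import torch; print(torch.__version__, torch.compiled_with_cxx11_abi())"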