@@ -303,54 +303,13 @@ Environment Setup

To build natively on an aarch64-linux-gnu platform, configure the ``WORKSPACE`` with locally available dependencies.

- 1. Disable the rules with ``http_archive`` for x86_64 by commenting out the following rules:
-
- .. code-block:: shell
-
-    # http_archive(
-    #     name = "libtorch",
-    #     build_file = "@//third_party/libtorch:BUILD",
-    #     strip_prefix = "libtorch",
-    #     urls = ["https://download.pytorch.org/libtorch/cu102/libtorch-cxx11-abi-shared-with-deps-1.5.1.zip"],
-    #     sha256 = "cf0691493d05062fe3239cf76773bae4c5124f4b039050dbdd291c652af3ab2a"
-    # )
-
-    # http_archive(
-    #     name = "libtorch_pre_cxx11_abi",
-    #     build_file = "@//third_party/libtorch:BUILD",
-    #     strip_prefix = "libtorch",
-    #     sha256 = "818977576572eadaf62c80434a25afe44dbaa32ebda3a0919e389dcbe74f8656",
-    #     urls = ["https://download.pytorch.org/libtorch/cu102/libtorch-shared-with-deps-1.5.1.zip"],
-    # )
-
-    # Download these tarballs manually from the NVIDIA website
-    # Either place them in the distdir directory in third_party and use the --distdir flag
-    # or modify the urls to "file:///<PATH TO TARBALL>/<TARBALL NAME>.tar.gz
-
-    # http_archive(
-    #     name = "cudnn",
-    #     urls = ["https://developer.nvidia.com/compute/machine-learning/cudnn/secure/8.0.1.13/10.2_20200626/cudnn-10.2-linux-x64-v8.0.1.13.tgz"],
-    #     build_file = "@//third_party/cudnn/archive:BUILD",
-    #     sha256 = "0c106ec84f199a0fbcf1199010166986da732f9b0907768c9ac5ea5b120772db",
-    #     strip_prefix = "cuda"
-    # )
-
-    # http_archive(
-    #     name = "tensorrt",
-    #     urls = ["https://developer.nvidia.com/compute/machine-learning/tensorrt/secure/7.1/tars/TensorRT-7.1.3.4.Ubuntu-18.04.x86_64-gnu.cuda-10.2.cudnn8.0.tar.gz"],
-    #     build_file = "@//third_party/tensorrt/archive:BUILD",
-    #     sha256 = "9205bed204e2ae7aafd2e01cce0f21309e281e18d5bfd7172ef8541771539d41",
-    #     strip_prefix = "TensorRT-7.1.3.4"
-    # )
-
- NOTE: You may also need to configure the CUDA version to 10.2 by setting the path for the cuda ``new_local_repository``
-
+ 1. Replace ``WORKSPACE`` with the corresponding WORKSPACE file in ``//toolchains/jp_workspaces``

2. Configure the correct paths to directory roots containing local dependencies in the ``new_local_repository`` rules:

NOTE: If you installed PyTorch using a pip package, the correct path is the path to the root of the python torch package.
- In the case that you installed with ``sudo pip install`` this will be ``/usr/local/lib/python3.6/dist-packages/torch``.
- In the case you installed with ``pip install --user`` this will be ``$HOME/.local/lib/python3.6/site-packages/torch``.
+ In the case that you installed with ``sudo pip install`` this will be ``/usr/local/lib/python3.8/dist-packages/torch``.
+ In the case you installed with ``pip install --user`` this will be ``$HOME/.local/lib/python3.8/site-packages/torch``.

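Rather than guessing which of the paths above applies, you can ask Python where the torch package actually lives. A minimal sketch (shown with the stdlib ``json`` package as a stand-in so it runs on any machine; substitute ``torch`` on the Jetson):

```shell
# Print the install root of a Python package; the printed directory is
# what the new_local_repository `path` attribute expects.
# PKG=json is a stand-in present everywhere; for this guide use PKG=torch.
PKG=json
python3 -c "import importlib, os, sys; print(os.path.dirname(importlib.import_module(sys.argv[1]).__file__))" "$PKG"
```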
In the case you are using NVIDIA-compiled pip packages, set the path for both libtorch sources to the same path. This is because, unlike
PyTorch on x86_64, NVIDIA aarch64 PyTorch uses the CXX11 ABI. If you compiled from source using the pre_cxx11_abi and only would like to
@@ -360,27 +319,16 @@ use that library, set the paths to the same path but when you compile make sure

new_local_repository(
    name = "libtorch",
-     path = "/usr/local/lib/python3.6/dist-packages/torch",
+     path = "/usr/local/lib/python3.8/dist-packages/torch",
    build_file = "third_party/libtorch/BUILD"
)

new_local_repository(
    name = "libtorch_pre_cxx11_abi",
-     path = "/usr/local/lib/python3.6/dist-packages/torch",
+     path = "/usr/local/lib/python3.8/dist-packages/torch",
    build_file = "third_party/libtorch/BUILD"
)

- new_local_repository(
-     name = "cudnn",
-     path = "/usr/",
-     build_file = "@//third_party/cudnn/local:BUILD"
- )
-
- new_local_repository(
-     name = "tensorrt",
-     path = "/usr/",
-     build_file = "@//third_party/tensorrt/local:BUILD"
- )

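A quick way to verify that the ``path`` configured in the rules above points at a usable libtorch root is to check for the shared library under it. A hedged sketch (``check_libtorch_root`` is a helper name invented for this example, and the path checked is the example path from the hunk above):

```shell
# Sanity-check a candidate new_local_repository path: a valid torch
# package root should contain lib/libtorch.so.
check_libtorch_root() {
    if [ -e "$1/lib/libtorch.so" ]; then
        echo "ok: $1"
    else
        echo "missing libtorch under $1"
    fi
}
# Example path from the rules above; adjust for your install.
check_libtorch_root /usr/local/lib/python3.8/dist-packages/torch
```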
Compile C++ Library and Compiler CLI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -389,19 +337,19 @@ Compile C++ Library and Compiler CLI

.. code-block:: shell

-     --platforms //toolchains:jetpack_4.x
+     --platforms //toolchains:jetpack_x.x

Compile the Torch-TensorRT library using the bazel command:

.. code-block:: shell

-     bazel build //:libtorchtrt --platforms //toolchains:jetpack_4.6
+     bazel build //:libtorchtrt --platforms //toolchains:jetpack_5.0

Compile Python API
^^^^^^^^^^^^^^^^^^^^

- NOTE: Due to shifting dependency locations between Jetpack 4.5 and Jetpack 4.6 there is now a flag for ``setup.py`` which sets the jetpack version (default: 4.6)
+ NOTE: Due to shifting dependency locations between Jetpack 4.5 and newer Jetpack versions there is now a flag for ``setup.py`` which sets the jetpack version (default: 5.0)

Compile the Python API using the following command from the ``//py`` directory:
@@ -411,4 +359,4 @@ Compile the Python API using the following command from the ``//py`` directory:

If you have a build of PyTorch that uses the Pre-CXX11 ABI, drop the ``--use-cxx11-abi`` flag

- If you are building for Jetpack 4.5 add the ``--jetpack-version 4.5`` flag
+ If you are building for Jetpack 4.5 add the ``--jetpack-version 5.0`` flag
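Combining the flags documented above, a hedged sketch of a full Python API build invocation (run from the ``//py`` directory of a Torch-TensorRT checkout; the ``setup.py install`` form is an assumption, since the exact command falls outside this diff; consult the repository README for the definitive target):

```shell
# Hypothetical composition of the documented flags; requires a
# Torch-TensorRT checkout, so this is a sketch, not a verified command.
# Per the note above, drop --use-cxx11-abi if your PyTorch build uses
# the Pre-CXX11 ABI.
python3 setup.py install --use-cxx11-abi --jetpack-version 5.0
```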