
Bug about native compilation on NVIDIA Jetson AGX #132

Closed
@chiehpower

Description


🐛 Bug

After installing Bazel from scratch on the AGX device, I built the project directly with Bazel, but I got the error below.

$ bazel build //:libtrtorch --distdir third_party/distdir/aarch64-linux-gnu         

Starting local Bazel server and connecting to it...
INFO: Repository trtorch_py_deps instantiated at:
  no stack (--record_rule_instantiation_callstack not enabled)
Repository rule pip_import defined at:
  /home/nvidia/.cache/bazel/_bazel_nvidia/d7326de2ca76e35cc08b88f9bba7ab43/external/rules_python/python/pip.bzl:51:29: in <toplevel>
ERROR: An error occurred during the fetch of repository 'trtorch_py_deps':
   pip_import failed: Collecting torch==1.5.0 (from -r /home/nvidia/ssd256/github/TRTorch/py/requirements.txt (line 1))
 (  Could not find a version that satisfies the requirement torch==1.5.0 (from -r /home/nvidia/ssd256/github/TRTorch/py/requirements.txt (line 1)) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
No matching distribution found for torch==1.5.0 (from -r /home/nvidia/ssd256/github/TRTorch/py/requirements.txt (line 1))
)
ERROR: no such package '@trtorch_py_deps//': pip_import failed: Collecting torch==1.5.0 (from -r /home/nvidia/ssd256/github/TRTorch/py/requirements.txt (line 1))
 (  Could not find a version that satisfies the requirement torch==1.5.0 (from -r /home/nvidia/ssd256/github/TRTorch/py/requirements.txt (line 1)) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
No matching distribution found for torch==1.5.0 (from -r /home/nvidia/ssd256/github/TRTorch/py/requirements.txt (line 1))
)
INFO: Elapsed time: 8.428s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
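(For context, pip only finds torch 0.1.2 here because PyPI does not host aarch64 wheels for torch 1.5.0; on Jetson, PyTorch is normally installed from a wheel that NVIDIA publishes for each JetPack release. A rough sketch of the workaround I would expect, where the wheel filename is illustrative, not an actual download URL, and I am not sure whether Bazel's pip_import would then skip the PyPI lookup:)

```shell
# Sketch, assuming NVIDIA's Jetson PyTorch wheel has already been
# downloaded to the current directory. The filename below is
# illustrative; use the wheel NVIDIA publishes for your JetPack release.
pip3 install torch-1.5.0-cp36-cp36m-linux_aarch64.whl

# py/requirements.txt pins torch==1.5.0, which pip cannot resolve from
# PyPI on aarch64; installing the local wheel first satisfies that pin
# for a plain pip environment.
```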

If I run python3 setup.py install instead, I get the error below:

running install
building libtrtorch
INFO: Build options --compilation_mode, --cxxopt, --define, and 1 more have changed, discarding analysis cache.
INFO: Repository tensorrt instantiated at:
  no stack (--record_rule_instantiation_callstack not enabled)
Repository rule http_archive defined at:
  /home/nvidia/.cache/bazel/_bazel_nvidia/d7326de2ca76e35cc08b88f9bba7ab43/external/bazel_tools/tools/build_defs/repo/http.bzl:336:31: in <toplevel>
WARNING: Download from https://developer.nvidia.com/compute/machine-learning/tensorrt/secure/7.1/tars/TensorRT-7.1.3.4.Ubuntu-18.04.x86_64-gnu.cuda-10.2.cudnn8.0.tar.gz failed: class java.io.IOException GET returned 403 Forbidden
ERROR: An error occurred during the fetch of repository 'tensorrt':
   java.io.IOException: Error downloading [https://developer.nvidia.com/compute/machine-learning/tensorrt/secure/7.1/tars/TensorRT-7.1.3.4.Ubuntu-18.04.x86_64-gnu.cuda-10.2.cudnn8.0.tar.gz] to /home/nvidia/.cache/bazel/_bazel_nvidia/d7326de2ca76e35cc08b88f9bba7ab43/external/tensorrt/TensorRT-7.1.3.4.Ubuntu-18.04.x86_64-gnu.cuda-10.2.cudnn8.0.tar.gz: GET returned 403 Forbidden
INFO: Repository libtorch_pre_cxx11_abi instantiated at:
  no stack (--record_rule_instantiation_callstack not enabled)
Repository rule http_archive defined at:
  /home/nvidia/.cache/bazel/_bazel_nvidia/d7326de2ca76e35cc08b88f9bba7ab43/external/bazel_tools/tools/build_defs/repo/http.bzl:336:31: in <toplevel>
ERROR: /home/nvidia/ssd256/github/TRTorch/core/BUILD:10:11: //core:core depends on @tensorrt//:nvinfer in repository @tensorrt which failed to fetch. no such package '@tensorrt//': java.io.IOException: Error downloading [https://developer.nvidia.com/compute/machine-learning/tensorrt/secure/7.1/tars/TensorRT-7.1.3.4.Ubuntu-18.04.x86_64-gnu.cuda-10.2.cudnn8.0.tar.gz] to /home/nvidia/.cache/bazel/_bazel_nvidia/d7326de2ca76e35cc08b88f9bba7ab43/external/tensorrt/TensorRT-7.1.3.4.Ubuntu-18.04.x86_64-gnu.cuda-10.2.cudnn8.0.tar.gz: GET returned 403 Forbidden
ERROR: Analysis of target '//cpp/api/lib:libtrtorch.so' failed; build aborted: Analysis failed
INFO: Elapsed time: 18.044s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded, 62 targets configured)
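(The 403 above comes from Bazel trying to download the x86_64 TensorRT tarball from NVIDIA's gated download site. On a Jetson device, TensorRT and cuDNN are already installed by JetPack, so presumably the http_archive rules in WORKSPACE would need to be replaced with local repositories, roughly like the sketch below. The paths are assumptions for a default JetPack layout and may differ on a given device:)

```python
# Sketch of WORKSPACE entries pointing Bazel at the JetPack-installed
# libraries instead of downloading x86_64 tarballs. Paths assume the
# default JetPack install locations under /usr/.
new_local_repository(
    name = "cudnn",
    path = "/usr/",
    build_file = "@//third_party/cudnn/local:BUILD",
)

new_local_repository(
    name = "tensorrt",
    path = "/usr/",
    build_file = "@//third_party/tensorrt/local:BUILD",
)
```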

Does anyone have any idea about this?

To Reproduce

Steps to reproduce the behavior:

  1. Install Bazel from here
  2. Use this command:
bazel build //:libtrtorch --distdir third_party/distdir/aarch64-linux-gnu         

Environment

Build information about the TRTorch compiler can be found by turning on debug messages

  • PyTorch Version: 1.5.0
  • JetPack Version: 4.4
  • How you installed PyTorch: from here
  • Python version: 3.6
  • CUDA version: 10.2
  • GPU models and configuration: NVIDIA Jetson AGX
  • TensorRT version: 7.1.0.16 (default on JetPack 4.4)
  • bazel version: 3.4.0

Thank you

BR,
Chieh

Labels: documentation, platform: aarch64, question
