Cherry-pick changes from main into release/2.1 #2302

Merged: 82 commits, Oct 10, 2023

Commits (82):
6836773
docs: [Automated] Regenerating documentation for 7062730
Sep 7, 2023
6f7cca6
chore: enabling TS FE testing
narendasan Sep 2, 2023
0f5d6d4
Merge pull request #2283 from pytorch/gha-ci-infra
narendasan Sep 8, 2023
f8bcbdd
docs: [Automated] Regenerating documentation for 0f5d6d4
Sep 8, 2023
6f0e265
Update _Input.py (#2293)
phyboy Sep 8, 2023
ba2a300
docs: [Automated] Regenerating documentation for 6f0e265
Sep 8, 2023
40f8064
feat: support many elementwise dynamo converters (#2263)
zewenli98 Sep 8, 2023
c377c48
docs: [Automated] Regenerating documentation for 40f8064
Sep 8, 2023
8c92918
feat: support linear (fully connected layer) dynamo converter (#2253)
zewenli98 Sep 9, 2023
3b0e6c0
docs: [Automated] Regenerating documentation for 8c92918
Sep 9, 2023
b873f28
WAR: Disabling ViT tests until exporting with py311 is fixed
narendasan Sep 9, 2023
ea5a289
Merge pull request #2305 from pytorch/disable_vit
narendasan Sep 9, 2023
7390e7d
neg converter correction (#2307)
apbose Sep 11, 2023
7a4288e
docs: [Automated] Regenerating documentation for 7390e7d
Sep 11, 2023
100191b
feat: Add preliminary support for freezing tensors in Dynamo (#2128)
gs-olive Sep 12, 2023
783a760
docs: [Automated] Regenerating documentation for 100191b
Sep 12, 2023
b9871c5
fix: Wrap import of ConstantFold utilities (#2312)
gs-olive Sep 12, 2023
044d4d6
docs: [Automated] Regenerating documentation for b9871c5
Sep 12, 2023
e0a7525
fix: Move aten.neg test case (#2310)
gs-olive Sep 12, 2023
a03a585
docs: [Automated] Regenerating documentation for e0a7525
Sep 12, 2023
10917bf
small fix: Packaging version switch (#2315)
gs-olive Sep 12, 2023
33c0673
docs: [Automated] Regenerating documentation for 10917bf
Sep 12, 2023
bcf7641
fix: Register tensorrt backend name (#2311)
gs-olive Sep 12, 2023
43eb4bb
docs: [Automated] Regenerating documentation for bcf7641
Sep 12, 2023
c1f130a
feat: Transition export workflows to use torch._export APIs (#2195)
peri044 Sep 18, 2023
8ebb599
docs: [Automated] Regenerating documentation for c1f130a
Sep 18, 2023
ac007ce
fix: Add special cases for `clone` and `to_copy` where input of graph…
gs-olive Sep 20, 2023
b50290d
fix: Raise error when registering Packet-keyed converter (#2285)
gs-olive Sep 20, 2023
0caac4f
docs: [Automated] Regenerating documentation for b50290d
Sep 20, 2023
c875c39
FX converter documentation (#2039)
apbose Sep 21, 2023
19aabdd
aten::split converter (#2232)
apbose Sep 21, 2023
0a939df
DLFW changes (#2281)
apbose Sep 21, 2023
ff4d940
feat: Add ATen lowering pass system (#2280)
gs-olive Sep 22, 2023
65feab1
fix: Support non -1 end idx and <0 start idx in aten::flatten convert…
mfeliz-cruise Sep 22, 2023
e6e8099
docs: [Automated] Regenerating documentation for 65feab1
Sep 22, 2023
3c4c2fe
support for torch.ops.aten.erf.default op
bowang007 Aug 2, 2023
670d2be
feat: support Dynamo converter for torch.ops.aten.erf.default op
bowang007 Sep 22, 2023
ecdc040
fix: Update Torchvision version to address dependency resolution issu…
gs-olive Sep 25, 2023
7daa112
fix: Remove input aliasing of builtin ops (#2276)
gs-olive Sep 26, 2023
b2aa255
docs: [Automated] Regenerating documentation for 7daa112
Sep 26, 2023
1033dff
fix: Allow low rank inputs in Python Runtime (#2282)
gs-olive Sep 27, 2023
76de80d
docs: [Automated] Regenerating documentation for 1033dff
Sep 27, 2023
338e542
fix: Address multi-GPU issue in engine deserialize (#2325)
gs-olive Sep 27, 2023
117161a
docs: [Automated] Regenerating documentation for 338e542
Sep 27, 2023
251405d
feat: support deconv (1d, 2d, and Nd) dynamo converter (#2337)
zewenli98 Sep 27, 2023
a2a983b
docs: [Automated] Regenerating documentation for 251405d
Sep 27, 2023
bece720
Update usage of PyTorch's custom op API (#2193)
zou3519 Sep 28, 2023
78f2721
docs: [Automated] Regenerating documentation for bece720
Sep 28, 2023
765933a
feat: support bmm converter in dynamo (#2248)
bowang007 Sep 28, 2023
0d402fb
docs: [Automated] Regenerating documentation for 765933a
Sep 28, 2023
891c2ef
feat: support 1D, 2D, and 3D avg and max pooling dynamo converters (#…
zewenli98 Sep 29, 2023
253bbd1
docs: [Automated] Regenerating documentation for 891c2ef
Sep 29, 2023
46cfa35
fix: Add support for negative dimensions in reduce (#2347)
gs-olive Sep 29, 2023
5de208f
docs: [Automated] Regenerating documentation for 46cfa35
Sep 29, 2023
42e514b
feat: Add tensor type enforcement for converters (#2324)
gs-olive Sep 29, 2023
ab1d7d4
docs: [Automated] Regenerating documentation for 42e514b
Sep 29, 2023
558ae7c
fix: Issue in TS dimension-squeeze utility (#2336)
gs-olive Sep 29, 2023
ef07bea
docs: [Automated] Regenerating documentation for 558ae7c
Sep 29, 2023
8ebf24d
perf: Add lowering passes to improve TRT runtime on SD (#2351)
gs-olive Sep 29, 2023
8c25baf
docs: [Automated] Regenerating documentation for 8ebf24d
Sep 29, 2023
6571252
feat: Implement Dynamic shapes + fallback support for export path (#2…
peri044 Oct 2, 2023
a7f9055
docs: [Automated] Regenerating documentation for 6571252
Oct 2, 2023
4f72425
feat: Add maxpool lowering passes and experimental folder in Dynamo (…
gs-olive Oct 3, 2023
5bb8cb0
docs: [Automated] Regenerating documentation for 4f72425
Oct 4, 2023
e432bf2
Aten::Index converter (#2277)
apbose Oct 4, 2023
7e5d05f
docs: [Automated] Regenerating documentation for e432bf2
Oct 4, 2023
7b21322
feat: Implement support for exporting Torch-TensorRT compiled graphs …
peri044 Oct 4, 2023
4cffd6e
docs: [Automated] Regenerating documentation for 7b21322
Oct 4, 2023
22cf701
chore: Switch converter tests to generate standalone ops using fx.sym…
peri044 Oct 5, 2023
16c670a
docs: [Automated] Regenerating documentation for 22cf701
Oct 5, 2023
c61d97e
fix/feat: Add and repair multiple converters for SD + other models (#…
gs-olive Oct 6, 2023
6d59a14
docs: [Automated] Regenerating documentation for c61d97e
Oct 6, 2023
d375d10
feat: support flatten and reshape via shuffle_layer (#2354)
zewenli98 Oct 6, 2023
65e8ec7
docs: [Automated] Regenerating documentation for d375d10
Oct 6, 2023
80bbd8b
feat: support prod, max, min, and mean via reduce layer (#2355)
zewenli98 Oct 6, 2023
18dcdd0
minor fix: Update `get_ir` prefixes (#2369)
gs-olive Oct 6, 2023
83176fe
Dynamo converter cat (#2343)
apbose Oct 6, 2023
a646e59
fix: Repair issue in Torch Constant Folder (#2375)
gs-olive Oct 9, 2023
0e4c5d8
docs: [Automated] Regenerating documentation for a646e59
Oct 9, 2023
50ab2c1
fix: Repair `aten.where` with Numpy + Broadcast (#2372)
gs-olive Oct 10, 2023
cb2aee0
docs: [Automated] Regenerating documentation for 50ab2c1
Oct 10, 2023
adf4e32
Merge branch 'release/2.1' into 2.1-staging
narendasan Oct 10, 2023
19 changes: 17 additions & 2 deletions .circleci/config.yml
@@ -802,7 +802,7 @@ commands:
- store_artifacts:
path: /tmp/testlogs

test-dynamo-models_torch_export:
test-dynamo-models_export:
description: "Test the Dynamo models via torch_export path"
steps:
- run:
@@ -818,6 +818,20 @@ commands:
- store_artifacts:
path: /tmp/testlogs

test-dynamo-export_serde:
description: "Test the export serialize/deserialize functionality for Dynamo models"
steps:
- run:
name: Run Dynamo models and test export serde with TRT compiled modules
command: |
cd tests/py/dynamo/models
pytest test_export_serde.py --junitxml=/tmp/artifacts/test_results/dynamo/backend/test_results.xml --ir dynamo

- store_test_results:
path: /tmp/artifacts
- store_artifacts:
path: /tmp/testlogs

test-dynamo-converters:
description: "Test the Dynamo aten converters"
steps:
@@ -1122,7 +1136,8 @@ jobs:
- test-dynamo-backend
- test-dynamo-shared_utilities
- test-dynamo-models_torch_compile
- test-dynamo-models_torch_export
- test-dynamo-models_export
- test-dynamo-export_serde

package-x86_64-linux:
parameters:
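For context on what the new `test-dynamo-export_serde` job above exercises: the snippet below is a minimal sketch of the export serialize/deserialize round trip for a Dynamo-compiled module, assuming the 2.1-era `torch_tensorrt.dynamo.export` helper together with `torch.export.save`/`load`. It is an illustration, not the contents of `test_export_serde.py`.

```python
# Hedged sketch of the export serde round trip; torch_tensorrt.dynamo.export
# is an assumption based on the export-support commits in this PR.
import torch
import torch_tensorrt


class MiniModel(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1.0


model = MiniModel().eval().cuda()
inputs = [torch.randn((1, 3, 224, 224)).cuda()]

# Compile via the dynamo IR, re-wrap as an ExportedProgram, and round-trip it.
trt_gm = torch_tensorrt.compile(model, ir="dynamo", inputs=inputs)
trt_ep = torch_tensorrt.dynamo.export(trt_gm, inputs)
torch.export.save(trt_ep, "/tmp/trt_model.ep")

# The deserialized module should match the in-memory compiled module.
reloaded = torch.export.load("/tmp/trt_model.ep").module()
assert torch.allclose(trt_gm(*inputs), reloaded(*inputs), atol=1e-3)
```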
69 changes: 36 additions & 33 deletions .github/workflows/build-test.yml
@@ -54,39 +54,40 @@ jobs:
AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}

# tests-py-torchscript-fe:
# name: Test torchscript frontend [Python]
# needs: [generate-matrix, build]
# strategy:
# fail-fast: false
# matrix:
# include:
# - repository: pytorch/tensorrt
# package-name: torch_tensorrt
# pre-script: packaging/pre_build_script.sh
# uses: pytorch/tensorrt/.github/workflows/linux-test.yml@main
# with:
# job-name: tests-py-torchscript-fe
# repository: "pytorch/tensorrt"
# ref: ""
# test-infra-repository: pytorch/test-infra
# test-infra-ref: main
# build-matrix: ${{ needs.generate-matrix.outputs.matrix }}
# pre-script: ${{ matrix.pre-script }}
# script: |
# export USE_HOST_DEPS=1
# pushd .
# cd tests/modules
# ${CONDA_RUN} python -m pip install -r requirements.txt
# ${CONDA_RUN} python hub.py
# popd
# pushd .
# cd tests/py/ts
# ${CONDA_RUN} python -m pip install --pre pytest timm transformers parameterized expecttest --use-deprecated=legacy-resolver
# ${CONDA_RUN} python -m pytest --junitxml=${RUNNER_TEST_RESULTS_DIR}/ts_api_test_results.xml api/
# ${CONDA_RUN} python -m pytest --junitxml=${RUNNER_TEST_RESULTS_DIR}/ts_models_test_results.xml models/
# ${CONDA_RUN} python -m pytest --junitxml=${RUNNER_TEST_RESULTS_DIR}/ts_integrations_test_results.xml integrations/
# popd
tests-py-torchscript-fe:
name: Test torchscript frontend [Python]
needs: [generate-matrix, build]
strategy:
fail-fast: false
matrix:
include:
- repository: pytorch/tensorrt
package-name: torch_tensorrt
pre-script: packaging/pre_build_script.sh
uses: pytorch/tensorrt/.github/workflows/linux-test.yml@main
with:
job-name: tests-py-torchscript-fe
repository: "pytorch/tensorrt"
ref: ""
test-infra-repository: pytorch/test-infra
test-infra-ref: main
build-matrix: ${{ needs.generate-matrix.outputs.matrix }}
pre-script: ${{ matrix.pre-script }}
script: |
export USE_HOST_DEPS=1
export LD_LIBRARY_PATH=/usr/lib64:$LD_LIBRARY_PATH
pushd .
cd tests/modules
${CONDA_RUN} python -m pip install --pre -r requirements.txt --use-deprecated=legacy-resolver
${CONDA_RUN} python hub.py
popd
pushd .
cd tests/py/ts
${CONDA_RUN} python -m pip install --pre pytest timm transformers parameterized expecttest --use-deprecated=legacy-resolver
${CONDA_RUN} python -m pytest --junitxml=${RUNNER_TEST_RESULTS_DIR}/ts_api_test_results.xml api/
${CONDA_RUN} python -m pytest --junitxml=${RUNNER_TEST_RESULTS_DIR}/ts_models_test_results.xml models/
${CONDA_RUN} python -m pytest --junitxml=${RUNNER_TEST_RESULTS_DIR}/ts_integrations_test_results.xml integrations/
popd

tests-py-dynamo-converters:
name: Test dynamo converters [Python]
@@ -140,6 +141,8 @@ jobs:
cd tests/py/dynamo
${CONDA_RUN} python -m pip install --pre pytest timm transformers parameterized expecttest --use-deprecated=legacy-resolver
${CONDA_RUN} python -m pytest --junitxml=${RUNNER_TEST_RESULTS_DIR}/dynamo_fe_test_results.xml --ir dynamo models/test_models_export.py
${CONDA_RUN} python -m pytest --junitxml=${RUNNER_TEST_RESULTS_DIR}/export_serde_test_results.xml --ir dynamo models/test_export_serde.py
${CONDA_RUN} python -m pytest --junitxml=${RUNNER_TEST_RESULTS_DIR}/dyn_models_export.xml --ir dynamo models/test_dyn_models.py
popd

tests-py-torch-compile-be:
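The workflow above now also runs `models/test_dyn_models.py`, matching the dynamic-shapes commit in this PR. A minimal sketch of that compile path, assuming the standard `torch_tensorrt.Input` min/opt/max API:

```python
import torch
import torch_tensorrt

model = torch.nn.Sequential(torch.nn.Conv2d(3, 16, 3), torch.nn.ReLU()).eval().cuda()

# min/opt/max shapes on a single Input spec mark the batch dimension dynamic.
dyn_input = torch_tensorrt.Input(
    min_shape=(1, 3, 224, 224),
    opt_shape=(4, 3, 224, 224),
    max_shape=(8, 3, 224, 224),
    dtype=torch.float32,
)
trt_model = torch_tensorrt.compile(model, ir="dynamo", inputs=[dyn_input])

# Any batch size within [1, 8] should run through the same compiled engine.
out = trt_model(torch.randn(2, 3, 224, 224).cuda())
```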
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -40,7 +40,7 @@ repos:
rev: 'v1.4.1'
hooks:
- id: mypy
exclude: "^py/torch_tensorrt/fx|^examples|^tests|^tools|^docs|noxfile.py|setup.py|versions.py"
exclude: "^py/torch_tensorrt/fx|^examples|^tests|^py/torch_tensorrt/dynamo/_experimental|^tools|^docs|noxfile.py|setup.py|versions.py"
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.0.278
7 changes: 6 additions & 1 deletion core/conversion/converters/impl/shuffle.cpp
@@ -20,7 +20,12 @@ static auto shuffle_registrations TORCHTRT_UNUSED =
auto in_shape = util::toVec(in->getDimensions());
std::vector<int64_t> out_shape;
if (ctx->input_is_dynamic) {
end_dim = (end_dim == -1) ? in_shape.size() - 1 : end_dim;
if (start_dim < 0) {
start_dim = start_dim + in_shape.size();
}
if (end_dim < 0) {
end_dim = end_dim + in_shape.size();
}
int nbDynamicFlattenedDims = 0;
int nbDynamicUnflattenedDims = 0;
for (int i = 0; i < (int)in_shape.size(); i++) {
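The `shuffle.cpp` change above replaces the old `end_dim == -1` special case with full negative-index normalization for both bounds. Expressed as a small Python sketch of the same arithmetic:

```python
def normalize_flatten_dims(start_dim: int, end_dim: int, rank: int) -> tuple:
    # Mirrors the C++ fix: wrap any negative dimension index into [0, rank).
    if start_dim < 0:
        start_dim += rank
    if end_dim < 0:
        end_dim += rank
    return start_dim, end_dim


# aten::flatten(x, -3, -1) on a rank-4 input flattens dimensions 1 through 3.
assert normalize_flatten_dims(-3, -1, 4) == (1, 3)
```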
6 changes: 3 additions & 3 deletions core/runtime/execute_engine.cpp
@@ -43,8 +43,8 @@ bool is_switch_required(const RTDevice& curr_device, const RTDevice& engine_devi
return false;
}

RTDevice select_rt_device(const RTDevice& engine_device) {
auto new_target_device_opt = get_most_compatible_device(engine_device);
RTDevice select_rt_device(const RTDevice& engine_device, const RTDevice& curr_device) {
auto new_target_device_opt = get_most_compatible_device(engine_device, curr_device);

// REVIEW: THIS DOES NOT LIST DLA PROBABLY, WHICH WE SHOULD
// TODO: I think this logic could be way simpler at execution time since if the tensors arent on the right
@@ -89,7 +89,7 @@ std::vector<at::Tensor> execute_engine(std::vector<at::Tensor> inputs, c10::intr

if (is_switch_required(curr_device, compiled_engine->device_info)) {
// Scan through available CUDA devices and set the CUDA device context correctly
RTDevice device = select_rt_device(compiled_engine->device_info);
RTDevice device = select_rt_device(compiled_engine->device_info, curr_device);
set_rt_device(device);

// Target device is new device
27 changes: 22 additions & 5 deletions core/runtime/runtime.cpp
@@ -7,9 +7,16 @@ namespace torch_tensorrt {
namespace core {
namespace runtime {

c10::optional<RTDevice> get_most_compatible_device(const RTDevice& target_device) {
c10::optional<RTDevice> get_most_compatible_device(const RTDevice& target_device, const RTDevice& curr_device) {
LOG_DEBUG("Target Device: " << target_device);
auto device_options = find_compatible_devices(target_device);
RTDevice current_device;
if (curr_device.id == -1) {
current_device = get_current_device();
} else {
current_device = curr_device;
}

if (device_options.size() == 0) {
return {};
} else if (device_options.size() == 1) {
@@ -21,10 +28,20 @@ c10::optional<RTDevice> get_most_compatible_device(const RTDevice& target_device
dev_list << "[" << std::endl;
for (auto device : device_options) {
dev_list << " " << device << ',' << std::endl;
if (device.device_name == target_device.device_name && best_match.device_name != target_device.device_name) {
best_match = device;
} else if (device.device_name == target_device.device_name && best_match.device_name == target_device.device_name) {
if (device.id == target_device.id && best_match.id != target_device.id) {
if (device.device_name == target_device.device_name) {
// First priority is selecting a candidate which agrees with the current device ID
// If such a device is found, we can select it and break out of the loop
if (device.id == current_device.id && best_match.id != current_device.id) {
best_match = device;
break;
}
// Second priority is selecting a candidate which agrees with the target device ID
// At deserialization time, the current device and target device may not agree
else if (device.id == target_device.id && best_match.id != target_device.id) {
best_match = device;
}
// If no such GPU ID is found, select the first available candidate GPU
else if (best_match.device_name != target_device.device_name) {
best_match = device;
}
}
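A Python paraphrase of the new candidate-ranking logic in `get_most_compatible_device`, purely for readability (the shipped implementation is the C++ above):

```python
from dataclasses import dataclass


@dataclass
class Device:
    id: int
    device_name: str


def best_match(candidates, target, current):
    # Rank same-name candidates: (1) matching the current device ID wins
    # outright, (2) matching the target device ID is preferred, (3) otherwise
    # keep the first name-matching candidate encountered.
    best = None
    for dev in candidates:
        if dev.device_name != target.device_name:
            continue
        if dev.id == current.id:
            return dev  # first priority; the C++ breaks out of the loop here
        if best is None:
            best = dev  # third priority: first available name match
        elif dev.id == target.id and best.id != target.id:
            best = dev  # second priority: agree with the target device ID
    return best
```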
4 changes: 3 additions & 1 deletion core/runtime/runtime.h
@@ -26,7 +26,9 @@ typedef enum {
SERIALIZATION_LEN, // NEVER USED FOR DATA, USED TO DETERMINE LENGTH OF SERIALIZED INFO
} SerializedInfoIndex;

c10::optional<RTDevice> get_most_compatible_device(const RTDevice& target_device);
c10::optional<RTDevice> get_most_compatible_device(
const RTDevice& target_device,
const RTDevice& curr_device = RTDevice());
std::vector<RTDevice> find_compatible_devices(const RTDevice& target_device);

std::vector<at::Tensor> execute_engine(std::vector<at::Tensor> inputs, c10::intrusive_ptr<TRTEngine> compiled_engine);
2 changes: 1 addition & 1 deletion core/util/trt_util.cpp
@@ -216,7 +216,7 @@ nvinfer1::Dims squeezeDims(const nvinfer1::Dims& d, int pos, bool use_zeros, boo
// Replace all instances of -1, indicating dynamic dimension
// with 0, indicating copy the dimension from another tensor
// (Generally used for reshape operations)
if (use_zeros && d.d[i] == -1) {
if (use_zeros && d.d[i] == -1 && i < pos) {
dims.d[j] = 0;
// If zeros already exist in the dimensions (empty tensor),
// Replace all instances of 0, indicating empty dimension
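The `trt_util.cpp` fix narrows the `-1` → `0` rewrite in `squeezeDims` to dimensions before the squeezed position. A rough Python sketch of the guarded rewrite (the surrounding function is simplified here, not a full port):

```python
def squeeze_dims(dims, pos, use_zeros=True):
    # Simplified sketch: drop the dimension at `pos`; rewrite dynamic (-1)
    # entries to 0 ("copy the dimension from another tensor") only when they
    # precede the squeezed position, per the `i < pos` guard added above.
    out = []
    for i, d in enumerate(dims):
        if i == pos:
            continue
        out.append(0 if (use_zeros and d == -1 and i < pos) else d)
    return out


assert squeeze_dims([-1, 1, -1, 8], pos=1) == [0, -1, 8]
```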
2 changes: 2 additions & 0 deletions cpp/include/torch_tensorrt/torch_tensorrt.h
@@ -60,6 +60,8 @@ class DataType {
enum Value : int8_t {
/// INT64
kLong,
/// FP64
kDouble,
/// FP32
kFloat,
/// FP16
8 changes: 7 additions & 1 deletion cpp/src/types.cpp
@@ -97,6 +97,8 @@ at::ScalarType toAtenDataType(DataType value) {
return at::kInt;
case DataType::kLong:
return at::kLong;
case DataType::kDouble:
return at::kDouble;
case DataType::kBool:
return at::kBool;
case DataType::kFloat:
@@ -119,7 +121,8 @@ nvinfer1::TensorFormat toTRTTensorFormat(TensorFormat value) {

DataType::DataType(c10::ScalarType t) {
TORCHTRT_CHECK(
t == at::kHalf || t == at::kFloat || t == at::kChar || t == at::kLong || t == at::kInt || t == at::kBool,
t == at::kHalf || t == at::kFloat || t == at::kChar || t == at::kLong || t == at::kDouble || t == at::kInt ||
t == at::kBool,
"Data type is unsupported (" << t << ")");
switch (t) {
case at::kHalf:
Expand All @@ -134,6 +137,9 @@ DataType::DataType(c10::ScalarType t) {
case at::kLong:
value = DataType::kLong;
break;
case at::kDouble:
value = DataType::kDouble;
break;
case at::kBool:
value = DataType::kBool;
break;
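Taken together with the header change above, the C++ dtype bridge now covers FP64. An illustrative Python-side summary of the mapping (the real enum lives in the C++ API; this dict is not a torch_tensorrt structure):

```python
import torch

# Illustrative mirror of the DataType <-> at::ScalarType switch in types.cpp,
# including the newly added FP64 entry.
ATEN_TO_TORCHTRT_DTYPE = {
    torch.half: "kHalf",
    torch.float: "kFloat",
    torch.int8: "kChar",
    torch.int: "kInt",
    torch.long: "kLong",
    torch.double: "kDouble",  # new in this PR
    torch.bool: "kBool",
}
```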
38 changes: 19 additions & 19 deletions docker/WORKSPACE.ngc
@@ -9,24 +9,28 @@ http_archive(
sha256 = "778197e26c5fbeb07ac2a2c5ae405b30f6cb7ad1f5510ea6fdac03bded96cc6f",
)

load("@rules_python//python:pip.bzl", "pip_install")
load("@rules_python//python:repositories.bzl", "py_repositories")

py_repositories()

http_archive(
name = "rules_pkg",
sha256 = "8f9ee2dc10c1ae514ee599a8b42ed99fa262b757058f65ad3c384289ff70c4b8",
urls = [
"https://mirror.bazel.build/github.com/bazelbuild/rules_pkg/releases/download/0.4.0/rules_pkg-0.4.0.tar.gz",
"https://github.com/bazelbuild/rules_pkg/releases/download/0.4.0/rules_pkg-0.4.0.tar.gz",
"https://mirror.bazel.build/github.com/bazelbuild/rules_pkg/releases/download/0.9.1/rules_pkg-0.9.1.tar.gz",
"https://github.com/bazelbuild/rules_pkg/releases/download/0.9.1/rules_pkg-0.9.1.tar.gz",
],
sha256 = "038f1caa773a7e35b3663865ffb003169c6a71dc995e39bf4815792f385d837d",
)

load("@rules_pkg//:deps.bzl", "rules_pkg_dependencies")

rules_pkg_dependencies()

git_repository(
http_archive(
name = "googletest",
remote = "https://github.com/google/googletest",
commit = "703bd9caab50b139428cea1aaff9974ebee5742e",
shallow_since = "1570114335 -0400"
sha256 = "755f9a39bc7205f5a0c428e920ddad092c33c8a1b46997def3f1d4a82aded6e1",
strip_prefix = "googletest-5ab508a01f9eb089207ee87fd547d290da39d015",
urls = ["https://github.com/google/googletest/archive/5ab508a01f9eb089207ee87fd547d290da39d015.zip"],
)

# External dependency for torch_tensorrt if you already have precompiled binaries.
@@ -80,17 +84,13 @@ new_local_repository(
#########################################################################
# Testing Dependencies (optional - comment out on aarch64)
#########################################################################
pip_install(
name = "torch_tensorrt_py_deps",
requirements = "//py:requirements.txt",
)
load("@rules_python//python:pip.bzl", "pip_parse")

pip_install(
name = "py_test_deps",
requirements = "//tests/py:requirements.txt",
pip_parse(
name = "devtools_deps",
requirements_lock = "//:requirements-dev.txt",
)

pip_install(
name = "pylinter_deps",
requirements = "//tools/linter:requirements.txt",
)
load("@devtools_deps//:requirements.bzl", "install_deps")

install_deps()
13 changes: 11 additions & 2 deletions docs/_cpp_api/classtorch__tensorrt_1_1DataType.html
@@ -10,7 +10,7 @@

<meta name="viewport" content="width=device-width, initial-scale=1.0">

<title>Class DataType &mdash; Torch-TensorRT v2.0.0.dev0+1fec519 documentation</title>
<title>Class DataType &mdash; Torch-TensorRT v2.2.0.dev0+50ab2c1 documentation</title>



@@ -225,7 +225,7 @@


<div class="version">
v2.0.0.dev0+1fec519
v2.2.0.dev0+50ab2c1
</div>


@@ -269,6 +269,8 @@
<li class="toctree-l1"><a class="reference internal" href="../user_guide/getting_started_with_fx_path.html">Torch-TensorRT (FX Frontend) User Guide</a></li>
<li class="toctree-l1"><a class="reference internal" href="../user_guide/ptq.html">Post Training Quantization (PTQ)</a></li>
<li class="toctree-l1"><a class="reference internal" href="../user_guide/runtime.html">Deploying Torch-TensorRT Programs</a></li>
<li class="toctree-l1"><a class="reference internal" href="../user_guide/saving_models.html">Saving models compiled with Torch-TensorRT</a></li>
<li class="toctree-l1"><a class="reference internal" href="../user_guide/dynamic_shapes.html">Dynamic shapes with Torch-TensorRT</a></li>
<li class="toctree-l1"><a class="reference internal" href="../user_guide/use_from_pytorch.html">Using Torch-TensorRT Directly From PyTorch</a></li>
<li class="toctree-l1"><a class="reference internal" href="../user_guide/using_dla.html">DLA</a></li>
</ul>
@@ -304,6 +306,7 @@
<ul>
<li class="toctree-l1"><a class="reference internal" href="../contributors/system_overview.html">System Overview</a></li>
<li class="toctree-l1"><a class="reference internal" href="../contributors/writing_converters.html">Writing Converters</a></li>
<li class="toctree-l1"><a class="reference internal" href="../contributors/writing_dynamo_aten_lowering_passes.html">Writing Dynamo ATen Lowering Passes</a></li>
<li class="toctree-l1"><a class="reference internal" href="../contributors/useful_links.html">Useful Links for Torch-TensorRT Development</a></li>
</ul>
<p class="caption" role="heading"><span class="caption-text">Indices</span></p>
@@ -414,6 +417,12 @@ <h2>Class Documentation<a class="headerlink" href="#class-documentation" title="
<dd><p>INT64. </p>
</dd></dl>

<dl class="cpp enumerator">
<dt class="sig sig-object cpp" id="_CPPv4N14torch_tensorrt8DataType5Value7kDoubleE">
<span class="target" id="classtorch__tensorrt_1_1DataType_1a6335c0e206340d85a1382a5df17bf684aacf5b40b44995643185a977d2d1ce1bf"></span><span class="k"><span class="pre">enumerator</span></span><span class="w"> </span><span class="sig-name descname"><span class="n"><span class="pre">kDouble</span></span></span><a class="headerlink" href="#_CPPv4N14torch_tensorrt8DataType5Value7kDoubleE" title="Permalink to this definition">¶</a><br /></dt>
<dd><p>FP64. </p>
</dd></dl>

<dl class="cpp enumerator">
<dt class="sig sig-object cpp" id="_CPPv4N14torch_tensorrt8DataType5Value6kFloatE">
<span class="target" id="classtorch__tensorrt_1_1DataType_1a6335c0e206340d85a1382a5df17bf684a45ceda04c1ab50695a4a6aeaeae99817"></span><span class="k"><span class="pre">enumerator</span></span><span class="w"> </span><span class="sig-name descname"><span class="n"><span class="pre">kFloat</span></span></span><a class="headerlink" href="#_CPPv4N14torch_tensorrt8DataType5Value6kFloatE" title="Permalink to this definition">¶</a><br /></dt>