diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index b502353c95..dcfc1367c7 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -2,7 +2,7 @@
### Developing Torch-TensorRT
-Do try to fill an issue with your feature or bug before filling a PR (op support is generally an exception as long as you provide tests to prove functionality). There is also a backlog (https://github.com/NVIDIA/Torch-TensorRT/issues) of issues which are tagged with the area of focus, a coarse priority level and whether the issue may be accessible to new contributors. Let us know if you are interested in working on a issue. We are happy to provide guidance and mentorship for new contributors. Though note, there is no claiming of issues, we prefer getting working code quickly vs. addressing concerns about "wasted work".
+Do try to file an issue with your feature or bug before filing a PR (op support is generally an exception as long as you provide tests to prove functionality). There is also a backlog (https://github.com/pytorch/TensorRT/issues) of issues which are tagged with the area of focus, a coarse priority level and whether the issue may be accessible to new contributors. Let us know if you are interested in working on an issue. We are happy to provide guidance and mentorship for new contributors. Note, though, that there is no claiming of issues; we prefer getting working code quickly over addressing concerns about "wasted work".
#### Communication
diff --git a/README.md b/README.md
index 550a95072e..da8f7656d9 100644
--- a/README.md
+++ b/README.md
@@ -118,7 +118,7 @@ These are the following dependencies used to verify the testcases. Torch-TensorR
## Prebuilt Binaries and Wheel files
-Releases: https://github.com/NVIDIA/Torch-TensorRT/releases
+Releases: https://github.com/pytorch/TensorRT/releases
## Compiling Torch-TensorRT
@@ -291,7 +291,7 @@ Supported Python versions:
### In Torch-TensorRT?
-Thanks for wanting to contribute! There are two main ways to handle supporting a new op. Either you can write a converter for the op from scratch and register it in the NodeConverterRegistry or if you can map the op to a set of ops that already have converters you can write a graph rewrite pass which will replace your new op with an equivalent subgraph of supported ops. Its preferred to use graph rewriting because then we do not need to maintain a large library of op converters. Also do look at the various op support trackers in the [issues](https://github.com/NVIDIA/Torch-TensorRT/issues) for information on the support status of various operators.
+Thanks for wanting to contribute! There are two main ways to handle supporting a new op. Either you can write a converter for the op from scratch and register it in the NodeConverterRegistry, or, if you can map the op to a set of ops that already have converters, you can write a graph rewrite pass which will replace your new op with an equivalent subgraph of supported ops. It's preferred to use graph rewriting because then we do not need to maintain a large library of op converters. Also do look at the various op support trackers in the [issues](https://github.com/pytorch/TensorRT/issues) for information on the support status of various operators.
### In my application?
diff --git a/core/partitioning/README.md b/core/partitioning/README.md
index 2e328fa456..9fafa3e334 100644
--- a/core/partitioning/README.md
+++ b/core/partitioning/README.md
@@ -15,7 +15,7 @@ from the user. Shapes can be calculated by running the graphs with JIT.
it's still a phase in our partitioning process.
- `Stitching`. Stitch all TensorRT engines with PyTorch nodes altogether.
-Test cases for each of these components could be found [here](https://github.com/NVIDIA/Torch-TensorRT/tree/master/tests/core/partitioning).
+Test cases for each of these components can be found [here](https://github.com/pytorch/TensorRT/tree/master/tests/core/partitioning).
Here is the brief description of functionalities of each file:
- `PartitionInfo.h/cpp`: The automatic fallback APIs that is used for partitioning.
diff --git a/core/plugins/README.md b/core/plugins/README.md
index aa6138efde..4899394be3 100644
--- a/core/plugins/README.md
+++ b/core/plugins/README.md
@@ -37,4 +37,4 @@ If you'd like to compile your plugin with Torch-TensorRT,
Once you've completed the above steps, upon successful compilation of Torch-TensorRT library, your plugin should be available in `libtorchtrt_plugins.so`.
-A sample runtime application on how to run a network with plugins can be found here
+A sample runtime application on how to run a network with plugins can be found here
diff --git a/docsrc/conf.py b/docsrc/conf.py
index 04ff9ce66e..b386657e8c 100644
--- a/docsrc/conf.py
+++ b/docsrc/conf.py
@@ -123,7 +123,7 @@
"logo_icon": "",
# Set the repo location to get a badge with stats
- 'repo_url': 'https://github.com/nvidia/Torch-TensorRT/',
+ 'repo_url': 'https://github.com/pytorch/TensorRT/',
'repo_name': 'Torch-TensorRT',
# Visible levels of the global TOC; -1 means unlimited
diff --git a/docsrc/contributors/lowering.rst b/docsrc/contributors/lowering.rst
index 38c4491295..956c2004e1 100644
--- a/docsrc/contributors/lowering.rst
+++ b/docsrc/contributors/lowering.rst
@@ -33,7 +33,7 @@ Dead code elimination will check if a node has side effects and not delete it if
Eliminate Exeception Or Pass Pattern
***************************************
- `Torch-TensorRT/core/lowering/passes/exception_elimination.cpp <https://github.com/NVIDIA/Torch-TensorRT/blob/master/core/lowering/passes/exception_elimination.cpp>`_
+ `Torch-TensorRT/core/lowering/passes/exception_elimination.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/exception_elimination.cpp>`_
A common pattern in scripted modules are dimension gaurds which will throw execptions if
the input dimension is not what was expected.
@@ -68,7 +68,7 @@ Freeze attributes and inline constants and modules. Propogates constants in the
Fuse AddMM Branches
***************************************
- `Torch-TensorRT/core/lowering/passes/fuse_addmm_branches.cpp <https://github.com/NVIDIA/Torch-TensorRT/blob/master/core/lowering/passes/fuse_addmm_branches.cpp>`_
+ `Torch-TensorRT/core/lowering/passes/fuse_addmm_branches.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/fuse_addmm_branches.cpp>`_
A common pattern in scripted modules is tensors of different dimensions use different constructions for implementing linear layers. We fuse these
different varients into a single one that will get caught by the Unpack AddMM pass.
@@ -101,7 +101,7 @@ This pass fuse the addmm or matmul + add generated by JIT back to linear
Fuse Flatten Linear
***************************************
- `Torch-TensorRT/core/lowering/passes/fuse_flatten_linear.cpp <https://github.com/NVIDIA/Torch-TensorRT/blob/master/core/lowering/passes/fuse_flatten_linear.cpp>`_
+ `Torch-TensorRT/core/lowering/passes/fuse_flatten_linear.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/fuse_flatten_linear.cpp>`_
TensorRT implicity flattens input layers into fully connected layers when they are higher than 1D. So when there is a
``aten::flatten`` -> ``aten::linear`` pattern we remove the ``aten::flatten``.
@@ -134,7 +134,7 @@ Removes _all_ tuples and raises an error if some cannot be removed, this is used
Module Fallback
*****************
- `Torch-TensorRT/core/lowering/passes/module_fallback.cpp <https://github.com/NVIDIA/Torch-TensorRT/blob/master/core/lowering/passes/module_fallback.cpp>`_
+ `Torch-TensorRT/core/lowering/passes/module_fallback.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/module_fallback.cpp>`_
Module fallback consists of two lowering passes that must be run as a pair. The first pass is run before freezing to place delimiters in the graph around modules
that should run in PyTorch. The second pass marks nodes between these delimiters after freezing to signify they should run in PyTorch.
@@ -162,7 +162,7 @@ Right now, it does:
Remove Contiguous
***************************************
- `Torch-TensorRT/core/lowering/passes/remove_contiguous.cpp <https://github.com/NVIDIA/Torch-TensorRT/blob/master/core/lowering/passes/remove_contiguous.cpp>`_
+ `Torch-TensorRT/core/lowering/passes/remove_contiguous.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/remove_contiguous.cpp>`_
Removes contiguous operators since we are doing TensorRT memory is already contiguous.
@@ -170,14 +170,14 @@ Removes contiguous operators since we are doing TensorRT memory is already conti
Remove Dropout
***************************************
- `Torch-TensorRT/core/lowering/passes/remove_dropout.cpp <https://github.com/NVIDIA/Torch-TensorRT/blob/master/core/lowering/passes/remove_dropout.cpp>`_
+ `Torch-TensorRT/core/lowering/passes/remove_dropout.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/remove_dropout.cpp>`_
Removes dropout operators since we are doing inference.
Remove To
***************************************
- `Torch-TensorRT/core/lowering/passes/remove_to.cpp <https://github.com/NVIDIA/Torch-TensorRT/blob/master/core/lowering/passes/remove_to.cpp>`_
+ `Torch-TensorRT/core/lowering/passes/remove_to.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/remove_to.cpp>`_
Removes ``aten::to`` operators that do casting, since TensorRT mangages it itself. It is important that this is one of the last passes run so that
other passes have a change to move required cast operators out of the main namespace.
@@ -185,7 +185,7 @@ other passes have a change to move required cast operators out of the main names
Unpack AddMM
***************************************
- `Torch-TensorRT/core/lowering/passes/unpack_addmm.cpp <https://github.com/NVIDIA/Torch-TensorRT/blob/master/core/lowering/passes/unpack_addmm.cpp>`_
+ `Torch-TensorRT/core/lowering/passes/unpack_addmm.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/unpack_addmm.cpp>`_
Unpacks ``aten::addmm`` into ``aten::matmul`` and ``aten::add_`` (with an additional ``trt::const``
op to freeze the bias in the TensorRT graph). This lets us reuse the ``aten::matmul`` and ``aten::add_``
@@ -194,7 +194,7 @@ converters instead of needing a dedicated converter.
Unpack LogSoftmax
***************************************
- `Torch-TensorRT/core/lowering/passes/unpack_log_softmax.cpp <https://github.com/NVIDIA/Torch-TensorRT/blob/master/core/lowering/passes/unpack_log_softmax.cpp>`_
+ `Torch-TensorRT/core/lowering/passes/unpack_log_softmax.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/unpack_log_softmax.cpp>`_
Unpacks ``aten::logsoftmax`` into ``aten::softmax`` and ``aten::log``. This lets us reuse the
``aten::softmax`` and ``aten::log`` converters instead of needing a dedicated converter.
diff --git a/docsrc/tutorials/installation.rst b/docsrc/tutorials/installation.rst
index 949fa2ddc9..4c4905db96 100644
--- a/docsrc/tutorials/installation.rst
+++ b/docsrc/tutorials/installation.rst
@@ -25,14 +25,14 @@ You can install the python package using
.. code-block:: sh
- pip3 install torch-tensorrt -f https://github.com/NVIDIA/Torch-TensorRT/releases
+ pip3 install torch-tensorrt -f https://github.com/pytorch/TensorRT/releases
.. _bin-dist:
C++ Binary Distribution
------------------------
-Precompiled tarballs for releases are provided here: https://github.com/NVIDIA/Torch-TensorRT/releases
+Precompiled tarballs for releases are provided here: https://github.com/pytorch/TensorRT/releases
.. _compile-from-source:
diff --git a/docsrc/tutorials/ptq.rst b/docsrc/tutorials/ptq.rst
index 7b7617289b..047fc9f40f 100644
--- a/docsrc/tutorials/ptq.rst
+++ b/docsrc/tutorials/ptq.rst
@@ -138,7 +138,7 @@ Then all thats required to setup the module for INT8 calibration is to set the f
If you have an existing Calibrator implementation for TensorRT you may directly set the ``ptq_calibrator`` field with a pointer to your calibrator and it will work as well.
From here not much changes in terms of how to execution works. You are still able to fully use LibTorch as the sole interface for inference. Data should remain
in FP32 precision when it's passed into `trt_mod.forward`. There exists an example application in the Torch-TensorRT demo that takes you from training a VGG16 network on
-CIFAR10 to deploying in INT8 with Torch-TensorRT here: https://github.com/NVIDIA/Torch-TensorRT/tree/master/cpp/ptq
+CIFAR10 to deploying in INT8 with Torch-TensorRT here: https://github.com/pytorch/TensorRT/tree/master/cpp/ptq
.. _writing_ptq_python:
@@ -199,8 +199,8 @@ to use ``CacheCalibrator`` to use in INT8 mode.
trt_mod = torch_tensorrt.compile(model, compile_settings)
If you already have an existing calibrator class (implemented directly using TensorRT API), you can directly set the calibrator field to your class which can be very convenient.
-For a demo on how PTQ can be performed on a VGG network using Torch-TensorRT API, you can refer to https://github.com/NVIDIA/Torch-TensorRT/blob/master/tests/py/test_ptq_dataloader_calibrator.py
-and https://github.com/NVIDIA/Torch-TensorRT/blob/master/tests/py/test_ptq_trt_calibrator.py
+For a demo on how PTQ can be performed on a VGG network using the Torch-TensorRT API, you can refer to https://github.com/pytorch/TensorRT/blob/master/tests/py/test_ptq_dataloader_calibrator.py
+and https://github.com/pytorch/TensorRT/blob/master/tests/py/test_ptq_trt_calibrator.py
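+
+As an illustrative sketch (not taken from those tests; ``model`` and ``my_calibrator`` are assumed placeholders for your module and your existing TensorRT ``IInt8Calibrator`` implementation), wiring a pre-existing calibrator into the Python API could look roughly like:
+
+.. code-block:: python
+
+    import torch
+    import torch_tensorrt
+
+    # model: your traced/scripted torch.nn.Module (placeholder)
+    # my_calibrator: your existing TensorRT IInt8Calibrator instance (placeholder)
+    trt_mod = torch_tensorrt.compile(
+        model,
+        inputs=[torch_tensorrt.Input((1, 3, 32, 32))],  # example shape only
+        enabled_precisions={torch.int8},
+        calibrator=my_calibrator,
+    )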
Citations
^^^^^^^^^^^
diff --git a/docsrc/tutorials/runtime.rst b/docsrc/tutorials/runtime.rst
index 77dd2c35b1..0cfc93200f 100644
--- a/docsrc/tutorials/runtime.rst
+++ b/docsrc/tutorials/runtime.rst
@@ -26,7 +26,7 @@ programs just as you would otherwise via PyTorch API.
.. note:: If you are linking ``libtorchtrt_runtime.so``, likely using the following flags will help ``-Wl,--no-as-needed -ltorchtrt -Wl,--as-needed`` as theres no direct symbol dependency to anything in the Torch-TensorRT runtime for most Torch-TensorRT runtime applications
-An example of how to use ``libtorchtrt_runtime.so`` can be found here: https://github.com/NVIDIA/Torch-TensorRT/tree/master/examples/torchtrt_example
+An example of how to use ``libtorchtrt_runtime.so`` can be found here: https://github.com/pytorch/TensorRT/tree/master/examples/torchtrt_runtime_example
Plugin Library
---------------
diff --git a/examples/custom_converters/README.md b/examples/custom_converters/README.md
index 603ec5b50b..7baab69da4 100644
--- a/examples/custom_converters/README.md
+++ b/examples/custom_converters/README.md
@@ -66,7 +66,7 @@ from torch.utils import cpp_extension
# library_dirs should point to the libtorch_tensorrt.so, include_dirs should point to the dir that include the headers
-# 1) download the latest package from https://github.com/NVIDIA/Torch-TensorRT/releases/
+# 1) download the latest package from https://github.com/pytorch/TensorRT/releases/
# 2) Extract the file from downloaded package, we will get the "torch_tensorrt" directory
# 3) Set torch_tensorrt_path to that directory
torch_tensorrt_path =
@@ -87,7 +87,7 @@ setup(
```
Make sure to include the path for header files in `include_dirs` and the path
for dependent libraries in `library_dirs`. Generally speaking, you should download
-the latest package from [here](https://github.com/NVIDIA/Torch-TensorRT/releases), extract
+the latest package from [here](https://github.com/pytorch/TensorRT/releases), extract
the files, and the set the `torch_tensorrt_path` to it. You could also add other compilation
flags in cpp_extension if you need. Then, run above python scripts as:
```shell
diff --git a/examples/custom_converters/elu_converter/setup.py b/examples/custom_converters/elu_converter/setup.py
index 6cbb9de888..1d28cec243 100644
--- a/examples/custom_converters/elu_converter/setup.py
+++ b/examples/custom_converters/elu_converter/setup.py
@@ -4,7 +4,7 @@
# library_dirs should point to the libtrtorch.so, include_dirs should point to the dir that include the headers
-# 1) download the latest package from https://github.com/NVIDIA/Torch-TensorRT/releases/
+# 1) download the latest package from https://github.com/pytorch/TensorRT/releases/
# 2) Extract the file from downloaded package, we will get the "trtorch" directory
# 3) Set trtorch_path to that directory
torchtrt_path =
diff --git a/examples/int8/ptq/README.md b/examples/int8/ptq/README.md
index 7878c88b15..4ac147bb6e 100644
--- a/examples/int8/ptq/README.md
+++ b/examples/int8/ptq/README.md
@@ -139,11 +139,11 @@ This will build a binary named `ptq` in `bazel-out/k8-/bin/cpp/int8/ptq
## Compilation using Makefile
-1) Download releases of LibTorch, Torch-TensorRT and TensorRT and unpack them in the deps directory.
+1) Download releases of LibTorch, Torch-TensorRT and TensorRT and unpack them in the deps directory.
```sh
cd examples/torch_tensorrtrt_example/deps
-# Download latest Torch-TensorRT release tar file (libtorch_tensorrt.tar.gz) from https://github.com/NVIDIA/Torch-TensorRT/releases
+# Download latest Torch-TensorRT release tar file (libtorch_tensorrt.tar.gz) from https://github.com/pytorch/TensorRT/releases
tar -xvzf libtorch_tensorrt.tar.gz
# unzip libtorch downloaded from pytorch.org
unzip libtorch.zip
diff --git a/examples/int8/qat/README.md b/examples/int8/qat/README.md
index f0c5773600..f47aea15b0 100644
--- a/examples/int8/qat/README.md
+++ b/examples/int8/qat/README.md
@@ -33,11 +33,11 @@ This will build a binary named `qat` in `bazel-out/k8-/bin/cpp/int8/qat
## Compilation using Makefile
-1) Download releases of LibTorch, Torch-TensorRT and TensorRT and unpack them in the deps directory. Ensure CUDA is installed at `/usr/local/cuda` , if not you need to modify the CUDA include and lib paths in the Makefile.
+1) Download releases of LibTorch, Torch-TensorRT and TensorRT and unpack them in the deps directory. Ensure CUDA is installed at `/usr/local/cuda` , if not you need to modify the CUDA include and lib paths in the Makefile.
```sh
cd examples/torch_tensorrt_example/deps
-# Download latest Torch-TensorRT release tar file (libtorch_tensorrt.tar.gz) from https://github.com/NVIDIA/Torch-TensorRT/releases
+# Download latest Torch-TensorRT release tar file (libtorch_tensorrt.tar.gz) from https://github.com/pytorch/TensorRT/releases
tar -xvzf libtorch_tensorrt.tar.gz
# unzip libtorch downloaded from pytorch.org
unzip libtorch.zip
diff --git a/examples/torchtrt_runtime_example/README.md b/examples/torchtrt_runtime_example/README.md
index e4454f0505..9effed5046 100644
--- a/examples/torchtrt_runtime_example/README.md
+++ b/examples/torchtrt_runtime_example/README.md
@@ -21,7 +21,7 @@ The main goal is to use Torch-TensorRT runtime library `libtorchtrt_runtime.so`,
```sh
cd examples/torch_tensorrtrt_example/deps
-// Download latest Torch-TensorRT release tar file (libtorch_tensorrt.tar.gz) from https://github.com/NVIDIA/Torch-TensorRT/releases
+// Download latest Torch-TensorRT release tar file (libtorch_tensorrt.tar.gz) from https://github.com/pytorch/TensorRT/releases
tar -xvzf libtorch_tensorrt.tar.gz
unzip libtorch-cxx11-abi-shared-with-deps-[PYTORCH_VERSION].zip
```
diff --git a/notebooks/CitriNet-example.ipynb b/notebooks/CitriNet-example.ipynb
index 0573af0176..57fdd02b07 100644
--- a/notebooks/CitriNet-example.ipynb
+++ b/notebooks/CitriNet-example.ipynb
@@ -929,7 +929,7 @@
"In this notebook, we have walked through the complete process of optimizing the Citrinet model with Torch-TensorRT. On an A100 GPU, with Torch-TensorRT, we observe a speedup of ~**2.4X** with FP32, and ~**2.9X** with FP16 at batchsize of 128.\n",
"\n",
"### What's next\n",
- "Now it's time to try Torch-TensorRT on your own model. Fill out issues at https://github.com/NVIDIA/Torch-TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
+    "Now it's time to try Torch-TensorRT on your own model. File issues at https://github.com/pytorch/TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
]
},
{
diff --git a/notebooks/EfficientNet-example.ipynb b/notebooks/EfficientNet-example.ipynb
index 31a3dad874..a77e09545a 100644
--- a/notebooks/EfficientNet-example.ipynb
+++ b/notebooks/EfficientNet-example.ipynb
@@ -658,7 +658,7 @@
"In this notebook, we have walked through the complete process of compiling TorchScript models with Torch-TensorRT for EfficientNet-B0 model and test the performance impact of the optimization. With Torch-TensorRT, we observe a speedup of **1.35x** with FP32, and **3.13x** with FP16 on an NVIDIA 3090 GPU. These acceleration numbers will vary from GPU to GPU(as well as implementation to implementation based on the ops used) and we encorage you to try out latest generation of Data center compute cards for maximum acceleration.\n",
"\n",
"### What's next\n",
- "Now it's time to try Torch-TensorRT on your own model. If you run into any issues, you can fill them at https://github.com/NVIDIA/Torch-TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
+    "Now it's time to try Torch-TensorRT on your own model. If you run into any issues, you can file them at https://github.com/pytorch/TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
]
},
{
diff --git a/notebooks/Hugging-Face-BERT.ipynb b/notebooks/Hugging-Face-BERT.ipynb
index 9b027b473e..3a3f467004 100644
--- a/notebooks/Hugging-Face-BERT.ipynb
+++ b/notebooks/Hugging-Face-BERT.ipynb
@@ -678,7 +678,7 @@
"Torch-TensorRT (FP16): 3.15x\n",
"\n",
"### What's next\n",
- "Now it's time to try Torch-TensorRT on your own model. If you run into any issues, you can fill them at https://github.com/NVIDIA/Torch-TensorRT. Your involvement will help future development of Torch-TensorRT."
+    "Now it's time to try Torch-TensorRT on your own model. If you run into any issues, you can file them at https://github.com/pytorch/TensorRT. Your involvement will help future development of Torch-TensorRT."
]
},
{
diff --git a/notebooks/Resnet50-example.ipynb b/notebooks/Resnet50-example.ipynb
index f020662c73..f75a2b0e64 100644
--- a/notebooks/Resnet50-example.ipynb
+++ b/notebooks/Resnet50-example.ipynb
@@ -897,7 +897,7 @@
"In this notebook, we have walked through the complete process of compiling TorchScript models with Torch-TensorRT for EfficientNet-B0 model and test the performance impact of the optimization. With Torch-TensorRT, we observe a speedup of **1.84x** with FP32, and **5.2x** with FP16 on an NVIDIA 3090 GPU. These acceleration numbers will vary from GPU to GPU(as well as implementation to implementation based on the ops used) and we encorage you to try out latest generation of Data center compute cards for maximum acceleration.\n",
"\n",
"### What's next\n",
- "Now it's time to try Torch-TensorRT on your own model. If you run into any issues, you can fill them at https://github.com/NVIDIA/Torch-TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
+    "Now it's time to try Torch-TensorRT on your own model. If you run into any issues, you can file them at https://github.com/pytorch/TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
]
}
],
diff --git a/notebooks/lenet-getting-started.ipynb b/notebooks/lenet-getting-started.ipynb
index 2db954946d..39ab31e5ca 100644
--- a/notebooks/lenet-getting-started.ipynb
+++ b/notebooks/lenet-getting-started.ipynb
@@ -690,7 +690,7 @@
"In this notebook, we have walked through the complete process of compiling TorchScript models with Torch-TensorRT and test the performance impact of the optimization.\n",
"\n",
"### What's next\n",
- "Now it's time to try Torch-TensorRT on your own model. Fill out issues at https://github.com/NVIDIA/Torch-TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
+    "Now it's time to try Torch-TensorRT on your own model. File issues at https://github.com/pytorch/TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
]
}
],
diff --git a/notebooks/vgg-qat.ipynb b/notebooks/vgg-qat.ipynb
index cca771ad92..c8a7ac066d 100644
--- a/notebooks/vgg-qat.ipynb
+++ b/notebooks/vgg-qat.ipynb
@@ -35,7 +35,7 @@
"source": [
"\n",
"## 1. Requirements\n",
- "Please install the required dependencies and import these libraries accordingly"
+ "Please install the required dependencies and import these libraries accordingly"
]
},
{
@@ -1003,7 +1003,7 @@
"%quant_weight : Tensor = aten::fake_quantize_per_channel_affine(%394, %640, %641, %637, %638, %639)\n",
"%input.2 : Tensor = aten::_convolution(%quant_input, %quant_weight, %395, %687, %688, %689, %643, %690, %642, %643, %643, %644, %644)\n",
"```\n",
- "`aten::fake_quantize_per_*_affine` is converted into `QuantizeLayer` + `DequantizeLayer` in Torch-TensorRT internally. Please refer to quantization op converters in Torch-TensorRT."
+ "`aten::fake_quantize_per_*_affine` is converted into `QuantizeLayer` + `DequantizeLayer` in Torch-TensorRT internally. Please refer to quantization op converters in Torch-TensorRT."
]
},
{
@@ -1168,8 +1168,8 @@
"## 9. References\n",
"* Very Deep Convolution Networks for large scale Image Recognition\n",
"* Achieving FP32 Accuracy for INT8 Inference Using Quantization Aware Training with NVIDIA TensorRT\n",
- "* QAT workflow for VGG16\n",
- "* Deploying VGG QAT model in C++ using Torch-TensorRT\n",
+ "* QAT workflow for VGG16\n",
+ "* Deploying VGG QAT model in C++ using Torch-TensorRT\n",
"* Pytorch-quantization toolkit from NVIDIA\n",
"* Pytorch quantization toolkit userguide\n",
"* Quantization basics"
diff --git a/tests/README.md b/tests/README.md
index d00d3d8fde..d1ad177ea7 100644
--- a/tests/README.md
+++ b/tests/README.md
@@ -52,7 +52,7 @@ In order to **not** build the entire Torch-TensorRT library and only build the t
bazel test //tests --compilation_mode=dbg --test_output=summary --define torchtrt_src=prebuilt --jobs 2
```
- The flag `--define torchtrt_src=prebuilt` signals bazel to use pre-compiled library as an external dependency for tests. The pre-compiled library path is defined as a `local_repository` rule in root `WORKSPACE` file (`https://github.com/NVIDIA/Torch-TensorRT/blob/master/WORKSPACE`).
+ The flag `--define torchtrt_src=prebuilt` signals bazel to use the pre-compiled library as an external dependency for tests. The pre-compiled library path is defined as a `local_repository` rule in the root `WORKSPACE` file (`https://github.com/pytorch/TensorRT/blob/master/WORKSPACE`).
```
# External dependency for torch_tensorrt if you already have precompiled binaries.