
fix: Update broken repo hyperlink #1131

Merged · 2 commits · Jun 21, 2022

2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -2,7 +2,7 @@

### Developing Torch-TensorRT

- Do try to file an issue with your feature or bug before filing a PR (op support is generally an exception as long as you provide tests to prove functionality). There is also a backlog (https://github.com/NVIDIA/Torch-TensorRT/issues) of issues which are tagged with the area of focus, a coarse priority level and whether the issue may be accessible to new contributors. Let us know if you are interested in working on an issue. We are happy to provide guidance and mentorship for new contributors. Note, though, that there is no claiming of issues; we prefer getting working code quickly vs. addressing concerns about "wasted work".
+ Do try to file an issue with your feature or bug before filing a PR (op support is generally an exception as long as you provide tests to prove functionality). There is also a backlog (https://github.com/pytorch/TensorRT/issues) of issues which are tagged with the area of focus, a coarse priority level and whether the issue may be accessible to new contributors. Let us know if you are interested in working on an issue. We are happy to provide guidance and mentorship for new contributors. Note, though, that there is no claiming of issues; we prefer getting working code quickly vs. addressing concerns about "wasted work".

#### Communication

4 changes: 2 additions & 2 deletions README.md
@@ -118,7 +118,7 @@ These are the following dependencies used to verify the testcases. Torch-TensorR

## Prebuilt Binaries and Wheel files

- Releases: https://github.com/NVIDIA/Torch-TensorRT/releases
+ Releases: https://github.com/pytorch/TensorRT/releases

## Compiling Torch-TensorRT

@@ -291,7 +291,7 @@ Supported Python versions:

### In Torch-TensorRT?

- Thanks for wanting to contribute! There are two main ways to handle supporting a new op. Either you can write a converter for the op from scratch and register it in the NodeConverterRegistry, or, if you can map the op to a set of ops that already have converters, you can write a graph rewrite pass which will replace your new op with an equivalent subgraph of supported ops. It's preferred to use graph rewriting because then we do not need to maintain a large library of op converters. Also do look at the various op support trackers in the [issues](https://github.com/NVIDIA/Torch-TensorRT/issues) for information on the support status of various operators.
+ Thanks for wanting to contribute! There are two main ways to handle supporting a new op. Either you can write a converter for the op from scratch and register it in the NodeConverterRegistry, or, if you can map the op to a set of ops that already have converters, you can write a graph rewrite pass which will replace your new op with an equivalent subgraph of supported ops. It's preferred to use graph rewriting because then we do not need to maintain a large library of op converters. Also do look at the various op support trackers in the [issues](https://github.com/pytorch/TensorRT/issues) for information on the support status of various operators.

### In my application?

2 changes: 1 addition & 1 deletion core/partitioning/README.md
@@ -15,7 +15,7 @@ from the user. Shapes can be calculated by running the graphs with JIT.
it's still a phase in our partitioning process.
- `Stitching`. Stitch all TensorRT engines together with the PyTorch nodes.

- Test cases for each of these components can be found [here](https://github.com/NVIDIA/Torch-TensorRT/tree/master/tests/core/partitioning).
+ Test cases for each of these components can be found [here](https://github.com/pytorch/TensorRT/tree/master/tests/core/partitioning).
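For orientation, the fallback behavior these components implement is what users reach through the Python compile API. Below is a minimal, hedged sketch (the argument names follow the 1.x `torch_tensorrt` Python API as commonly documented and may differ between versions; the model and the forced-fallback op are purely illustrative):

```python
import torch
import torch_tensorrt  # assumes the torch_tensorrt wheel, CUDA, and TensorRT are installed

model = torch.jit.script(
    torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU(), torch.nn.AdaptiveAvgPool2d(1))
).eval().cuda()

# Ops listed in torch_executed_ops are forced to run in PyTorch, so the partitioner
# splits the graph into TensorRT engine segments and PyTorch segments.
trt_mod = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    torch_executed_ops=["aten::adaptive_avg_pool2d"],  # illustrative fallback op
    min_block_size=1,
)
```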

Here is a brief description of the functionality of each file:
- `PartitionInfo.h/cpp`: The automatic fallback APIs that are used for partitioning.
2 changes: 1 addition & 1 deletion core/plugins/README.md
@@ -37,4 +37,4 @@ If you'd like to compile your plugin with Torch-TensorRT,

Once you've completed the above steps and the Torch-TensorRT library has compiled successfully, your plugin should be available in `libtorchtrt_plugins.so`.

- A sample runtime application showing how to run a network with plugins can be found <a href="https://github.com/NVIDIA/Torch-TensorRT/tree/master/examples/torchtrt_runtime_example">here</a>
+ A sample runtime application showing how to run a network with plugins can be found <a href="https://github.com/pytorch/TensorRT/tree/master/examples/torchtrt_runtime_example">here</a>
2 changes: 1 addition & 1 deletion docsrc/conf.py
@@ -123,7 +123,7 @@
"logo_icon": "&#xe86f",

# Set the repo location to get a badge with stats
- 'repo_url': 'https://github.com/nvidia/Torch-TensorRT/',
+ 'repo_url': 'https://github.com/pytorch/TensorRT/',
'repo_name': 'Torch-TensorRT',

# Visible levels of the global TOC; -1 means unlimited
18 changes: 9 additions & 9 deletions docsrc/contributors/lowering.rst
@@ -33,7 +33,7 @@ Dead code elimination will check if a node has side effects and not delete it if
Eliminate Exception Or Pass Pattern
***************************************

- `Torch-TensorRT/core/lowering/passes/exception_elimination.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/exception_elimination.cpp>`_
+ `Torch-TensorRT/core/lowering/passes/exception_elimination.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/exception_elimination.cpp>`_

A common pattern in scripted modules is dimension guards which will throw exceptions if
the input dimension is not what was expected.
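As an illustration (a hypothetical snippet, not taken from the pass's tests), scripting a function with such a guard produces the ``prim::If``/``prim::RaiseException`` block that this pass removes:

```python
import torch

@torch.jit.script
def guarded(x: torch.Tensor) -> torch.Tensor:
    # a typical dimension guard; it lowers to prim::If + prim::RaiseException
    if x.dim() != 4:
        raise RuntimeError("expected a 4D input tensor")
    return x

print(guarded.graph)  # shows the exception-or-pass pattern in the TorchScript IR
```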
@@ -68,7 +68,7 @@ Freeze attributes and inline constants and modules. Propagates constants in the
Fuse AddMM Branches
***************************************

- `Torch-TensorRT/core/lowering/passes/fuse_addmm_branches.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/fuse_addmm_branches.cpp>`_
+ `Torch-TensorRT/core/lowering/passes/fuse_addmm_branches.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/fuse_addmm_branches.cpp>`_

A common pattern in scripted modules is that tensors of different dimensions use different constructions for implementing linear layers. We fuse these
different variants into a single one that will get caught by the Unpack AddMM pass.
@@ -101,7 +101,7 @@ This pass fuses the addmm or matmul + add generated by JIT back to linear
Fuse Flatten Linear
***************************************

- `Torch-TensorRT/core/lowering/passes/fuse_flatten_linear.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/fuse_flatten_linear.cpp>`_
+ `Torch-TensorRT/core/lowering/passes/fuse_flatten_linear.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/fuse_flatten_linear.cpp>`_

TensorRT implicitly flattens input layers into fully connected layers when they are higher than 1D. So when there is an
``aten::flatten`` -> ``aten::linear`` pattern we remove the ``aten::flatten``.
@@ -134,7 +134,7 @@ Removes _all_ tuples and raises an error if some cannot be removed; this is used
Module Fallback
*****************

- `Torch-TensorRT/core/lowering/passes/module_fallback.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/module_fallback.cpp>`_
+ `Torch-TensorRT/core/lowering/passes/module_fallback.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/module_fallback.cpp>`_

Module fallback consists of two lowering passes that must be run as a pair. The first pass is run before freezing to place delimiters in the graph around modules
that should run in PyTorch. The second pass marks nodes between these delimiters after freezing to signify they should run in PyTorch.
@@ -162,30 +162,30 @@ Right now, it does:
Remove Contiguous
***************************************

- `Torch-TensorRT/core/lowering/passes/remove_contiguous.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/remove_contiguous.cpp>`_
+ `Torch-TensorRT/core/lowering/passes/remove_contiguous.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/remove_contiguous.cpp>`_

Removes contiguous operators since, when we are using TensorRT, memory is already contiguous.


Remove Dropout
***************************************

- `Torch-TensorRT/core/lowering/passes/remove_dropout.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/remove_dropout.cpp>`_
+ `Torch-TensorRT/core/lowering/passes/remove_dropout.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/remove_dropout.cpp>`_

Removes dropout operators since we are doing inference.
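As a quick illustration of why this is safe (a standalone snippet, not part of the pass): in eval/inference mode dropout is already the identity.

```python
import torch

drop = torch.nn.Dropout(p=0.5)
drop.eval()  # inference mode: dropout becomes a no-op

x = torch.randn(4, 8)
assert torch.equal(drop(x), x)
```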

Remove To
***************************************

- `Torch-TensorRT/core/lowering/passes/remove_to.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/remove_to.cpp>`_
+ `Torch-TensorRT/core/lowering/passes/remove_to.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/remove_to.cpp>`_

Removes ``aten::to`` operators that do casting, since TensorRT manages it itself. It is important that this is one of the last passes run so that
other passes have a chance to move required cast operators out of the main namespace.

Unpack AddMM
***************************************

- `Torch-TensorRT/core/lowering/passes/unpack_addmm.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/unpack_addmm.cpp>`_
+ `Torch-TensorRT/core/lowering/passes/unpack_addmm.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/unpack_addmm.cpp>`_

Unpacks ``aten::addmm`` into ``aten::matmul`` and ``aten::add_`` (with an additional ``trt::const``
op to freeze the bias in the TensorRT graph). This lets us reuse the ``aten::matmul`` and ``aten::add_``
@@ -194,7 +194,7 @@ converters instead of needing a dedicated converter.
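To make the decomposition concrete, here is a small eager-mode equivalence check (illustrative only, not part of the lowering pass):

```python
import torch

bias = torch.randn(4)
mat1 = torch.randn(2, 3)
mat2 = torch.randn(3, 4)

fused = torch.addmm(bias, mat1, mat2)        # what aten::addmm computes
unpacked = torch.matmul(mat1, mat2) + bias   # the matmul + add form produced by unpacking
assert torch.allclose(fused, unpacked)
```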
Unpack LogSoftmax
***************************************

- `Torch-TensorRT/core/lowering/passes/unpack_log_softmax.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/unpack_log_softmax.cpp>`_
+ `Torch-TensorRT/core/lowering/passes/unpack_log_softmax.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/unpack_log_softmax.cpp>`_

Unpacks ``aten::logsoftmax`` into ``aten::softmax`` and ``aten::log``. This lets us reuse the
``aten::softmax`` and ``aten::log`` converters instead of needing a dedicated converter.
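Again, a small eager-mode equivalence check (illustrative only, not part of the lowering pass):

```python
import torch

x = torch.randn(2, 5)
log_softmax = torch.nn.functional.log_softmax(x, dim=1)        # aten::log_softmax
unpacked = torch.log(torch.nn.functional.softmax(x, dim=1))    # softmax followed by log
assert torch.allclose(log_softmax, unpacked, atol=1e-6)
```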
4 changes: 2 additions & 2 deletions docsrc/tutorials/installation.rst
@@ -25,14 +25,14 @@ You can install the python package using

.. code-block:: sh

- pip3 install torch-tensorrt -f https://github.com/NVIDIA/Torch-TensorRT/releases
+ pip3 install torch-tensorrt -f https://github.com/pytorch/TensorRT/releases
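To confirm the wheel is importable after installation, a quick check (a hypothetical sanity check, not from the original guide; it assumes the package exports ``__version__``):

```python
import torch
import torch_tensorrt

print(torch.__version__, torch_tensorrt.__version__)
```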

.. _bin-dist:

C++ Binary Distribution
------------------------

- Precompiled tarballs for releases are provided here: https://github.com/NVIDIA/Torch-TensorRT/releases
+ Precompiled tarballs for releases are provided here: https://github.com/pytorch/TensorRT/releases

.. _compile-from-source:

6 changes: 3 additions & 3 deletions docsrc/tutorials/ptq.rst
@@ -138,7 +138,7 @@ Then all that's required to set up the module for INT8 calibration is to set the f
If you have an existing Calibrator implementation for TensorRT you may directly set the ``ptq_calibrator`` field with a pointer to your calibrator and it will work as well.
From here not much changes in terms of how execution works. You are still able to fully use LibTorch as the sole interface for inference. Data should remain
in FP32 precision when it's passed into `trt_mod.forward`. There exists an example application in the Torch-TensorRT demo that takes you from training a VGG16 network on
- CIFAR10 to deploying in INT8 with Torch-TensorRT here: https://github.com/NVIDIA/Torch-TensorRT/tree/master/cpp/ptq
+ CIFAR10 to deploying in INT8 with Torch-TensorRT here: https://github.com/pytorch/TensorRT/tree/master/cpp/ptq

.. _writing_ptq_python:

@@ -199,8 +199,8 @@ to use ``CacheCalibrator`` to use in INT8 mode.
trt_mod = torch_tensorrt.compile(model, compile_settings)

If you already have an existing calibrator class (implemented directly using the TensorRT API), you can simply set the calibrator field to your class, which can be very convenient.
- For a demo on how PTQ can be performed on a VGG network using the Torch-TensorRT API, you can refer to https://github.com/NVIDIA/Torch-TensorRT/blob/master/tests/py/test_ptq_dataloader_calibrator.py
- and https://github.com/NVIDIA/Torch-TensorRT/blob/master/tests/py/test_ptq_trt_calibrator.py
+ For a demo on how PTQ can be performed on a VGG network using the Torch-TensorRT API, you can refer to https://github.com/pytorch/TensorRT/blob/master/tests/py/test_ptq_dataloader_calibrator.py
+ and https://github.com/pytorch/TensorRT/blob/master/tests/py/test_ptq_trt_calibrator.py
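To tie the pieces above together, here is a hedged sketch of compiling with a cache-based calibrator in Python (the argument names follow the 1.x ``torch_tensorrt`` API and may differ between versions; the cache path and ``model`` are illustrative):

```python
import torch
import torch_tensorrt

# Reuse a previously generated calibration cache instead of a DataLoader-backed calibrator.
calibrator = torch_tensorrt.ptq.CacheCalibrator("./calibration.cache")

compile_settings = {
    "inputs": [torch_tensorrt.Input((1, 3, 32, 32))],
    "enabled_precisions": {torch.int8},
    "calibrator": calibrator,
}
trt_mod = torch_tensorrt.compile(model, **compile_settings)  # model: a scripted/traced module
```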

Citations
^^^^^^^^^^^
2 changes: 1 addition & 1 deletion docsrc/tutorials/runtime.rst
@@ -26,7 +26,7 @@ programs just as you would otherwise via PyTorch API.

.. note:: If you are linking ``libtorchtrt_runtime.so``, the following flags will likely help: ``-Wl,--no-as-needed -ltorchtrt -Wl,--as-needed``, as there's no direct symbol dependency on anything in the Torch-TensorRT runtime for most Torch-TensorRT runtime applications

- An example of how to use ``libtorchtrt_runtime.so`` can be found here: https://github.com/NVIDIA/Torch-TensorRT/tree/master/examples/torchtrt_example
+ An example of how to use ``libtorchtrt_runtime.so`` can be found here: https://github.com/pytorch/TensorRT/tree/master/examples/torchtrt_runtime_example

Plugin Library
---------------
4 changes: 2 additions & 2 deletions examples/custom_converters/README.md
@@ -66,7 +66,7 @@ from torch.utils import cpp_extension


# library_dirs should point to the libtorch_tensorrt.so, include_dirs should point to the dir that includes the headers
- # 1) download the latest package from https://github.com/NVIDIA/Torch-TensorRT/releases/
+ # 1) download the latest package from https://github.com/pytorch/TensorRT/releases/
# 2) Extract the file from downloaded package, we will get the "torch_tensorrt" directory
# 3) Set torch_tensorrt_path to that directory
torch_tensorrt_path = <PATH TO TRTORCH>
@@ -87,7 +87,7 @@ setup(
```
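For reference, a hedged sketch of how the complete `setup.py` might look once the paths are filled in (the extension name, source file name, and the `torchtrt` library name are illustrative assumptions, not taken from the repository):

```python
import os
from setuptools import setup
from torch.utils import cpp_extension

# Illustrative path: point this at the directory extracted from the release tarball
torch_tensorrt_path = os.path.abspath("deps/torch_tensorrt")

setup(
    name="elu_converter",  # illustrative extension name
    ext_modules=[
        cpp_extension.CppExtension(
            "elu_converter",
            ["elu_converter.cpp"],  # your converter source
            include_dirs=[os.path.join(torch_tensorrt_path, "include")],
            library_dirs=[os.path.join(torch_tensorrt_path, "lib")],
            libraries=["torchtrt"],  # assumed library name; check the lib/ directory
        )
    ],
    cmdclass={"build_ext": cpp_extension.BuildExtension},
)
```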
Make sure to include the path for header files in `include_dirs` and the path
for dependent libraries in `library_dirs`. Generally speaking, you should download
- the latest package from [here](https://github.com/NVIDIA/Torch-TensorRT/releases), extract
+ the latest package from [here](https://github.com/pytorch/TensorRT/releases), extract
the files, and then set the `torch_tensorrt_path` to it. You could also add other compilation
flags in cpp_extension if you need. Then, run the above Python script as:
```shell
2 changes: 1 addition & 1 deletion examples/custom_converters/elu_converter/setup.py
@@ -4,7 +4,7 @@


# library_dirs should point to the libtrtorch.so, include_dirs should point to the dir that includes the headers
- # 1) download the latest package from https://github.com/NVIDIA/Torch-TensorRT/releases/
+ # 1) download the latest package from https://github.com/pytorch/TensorRT/releases/
# 2) Extract the file from downloaded package, we will get the "trtorch" directory
# 3) Set trtorch_path to that directory
torchtrt_path = <PATH TO TORCHTRT>
4 changes: 2 additions & 2 deletions examples/int8/ptq/README.md
@@ -139,11 +139,11 @@ This will build a binary named `ptq` in `bazel-out/k8-<opt|dbg>/bin/cpp/int8/ptq

## Compilation using Makefile

- 1) Download releases of <a href="https://pytorch.org">LibTorch</a>, <a href="https://github.com/NVIDIA/Torch-TensorRT/releases">Torch-TensorRT</a> and <a href="https://developer.nvidia.com/nvidia-tensorrt-download">TensorRT</a> and unpack them in the deps directory.
+ 1) Download releases of <a href="https://pytorch.org">LibTorch</a>, <a href="https://github.com/pytorch/TensorRT/releases">Torch-TensorRT</a> and <a href="https://developer.nvidia.com/nvidia-tensorrt-download">TensorRT</a> and unpack them in the deps directory.

```sh
cd examples/torch_tensorrtrt_example/deps
- # Download latest Torch-TensorRT release tar file (libtorch_tensorrt.tar.gz) from https://github.com/NVIDIA/Torch-TensorRT/releases
+ # Download latest Torch-TensorRT release tar file (libtorch_tensorrt.tar.gz) from https://github.com/pytorch/TensorRT/releases
tar -xvzf libtorch_tensorrt.tar.gz
# unzip libtorch downloaded from pytorch.org
unzip libtorch.zip
4 changes: 2 additions & 2 deletions examples/int8/qat/README.md
@@ -33,11 +33,11 @@ This will build a binary named `qat` in `bazel-out/k8-<opt|dbg>/bin/cpp/int8/qat

## Compilation using Makefile

- 1) Download releases of <a href="https://pytorch.org">LibTorch</a>, <a href="https://github.com/NVIDIA/Torch-TensorRT/releases">Torch-TensorRT</a> and <a href="https://developer.nvidia.com/nvidia-tensorrt-download">TensorRT</a> and unpack them in the deps directory. Ensure CUDA is installed at `/usr/local/cuda`; if not, you need to modify the CUDA include and lib paths in the Makefile.
+ 1) Download releases of <a href="https://pytorch.org">LibTorch</a>, <a href="https://github.com/pytorch/TensorRT/releases">Torch-TensorRT</a> and <a href="https://developer.nvidia.com/nvidia-tensorrt-download">TensorRT</a> and unpack them in the deps directory. Ensure CUDA is installed at `/usr/local/cuda`; if not, you need to modify the CUDA include and lib paths in the Makefile.

```sh
cd examples/torch_tensorrt_example/deps
- # Download latest Torch-TensorRT release tar file (libtorch_tensorrt.tar.gz) from https://github.com/NVIDIA/Torch-TensorRT/releases
+ # Download latest Torch-TensorRT release tar file (libtorch_tensorrt.tar.gz) from https://github.com/pytorch/TensorRT/releases
tar -xvzf libtorch_tensorrt.tar.gz
# unzip libtorch downloaded from pytorch.org
unzip libtorch.zip
2 changes: 1 addition & 1 deletion examples/torchtrt_runtime_example/README.md
@@ -21,7 +21,7 @@ The main goal is to use Torch-TensorRT runtime library `libtorchtrt_runtime.so`,

```sh
cd examples/torch_tensorrtrt_example/deps
- // Download latest Torch-TensorRT release tar file (libtorch_tensorrt.tar.gz) from https://github.com/NVIDIA/Torch-TensorRT/releases
+ // Download latest Torch-TensorRT release tar file (libtorch_tensorrt.tar.gz) from https://github.com/pytorch/TensorRT/releases
tar -xvzf libtorch_tensorrt.tar.gz
unzip libtorch-cxx11-abi-shared-with-deps-[PYTORCH_VERSION].zip
```
2 changes: 1 addition & 1 deletion notebooks/CitriNet-example.ipynb
@@ -929,7 +929,7 @@
"In this notebook, we have walked through the complete process of optimizing the Citrinet model with Torch-TensorRT. On an A100 GPU, with Torch-TensorRT, we observe a speedup of ~**2.4X** with FP32, and ~**2.9X** with FP16 at batchsize of 128.\n",
"\n",
"### What's next\n",
"Now it's time to try Torch-TensorRT on your own model. Fill out issues at https://github.com/NVIDIA/Torch-TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
"Now it's time to try Torch-TensorRT on your own model. Fill out issues at https://github.com/pytorch/TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
]
},
{
2 changes: 1 addition & 1 deletion notebooks/EfficientNet-example.ipynb
@@ -658,7 +658,7 @@
"In this notebook, we have walked through the complete process of compiling TorchScript models with Torch-TensorRT for EfficientNet-B0 model and test the performance impact of the optimization. With Torch-TensorRT, we observe a speedup of **1.35x** with FP32, and **3.13x** with FP16 on an NVIDIA 3090 GPU. These acceleration numbers will vary from GPU to GPU(as well as implementation to implementation based on the ops used) and we encorage you to try out latest generation of Data center compute cards for maximum acceleration.\n",
"\n",
"### What's next\n",
"Now it's time to try Torch-TensorRT on your own model. If you run into any issues, you can fill them at https://github.com/NVIDIA/Torch-TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
"Now it's time to try Torch-TensorRT on your own model. If you run into any issues, you can fill them at https://github.com/pytorch/TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
]
},
{
2 changes: 1 addition & 1 deletion notebooks/Hugging-Face-BERT.ipynb
@@ -678,7 +678,7 @@
"Torch-TensorRT (FP16): 3.15x\n",
"\n",
"### What's next\n",
"Now it's time to try Torch-TensorRT on your own model. If you run into any issues, you can fill them at https://github.com/NVIDIA/Torch-TensorRT. Your involvement will help future development of Torch-TensorRT."
"Now it's time to try Torch-TensorRT on your own model. If you run into any issues, you can fill them at https://github.com/pytorch/TensorRT. Your involvement will help future development of Torch-TensorRT."
]
},
{