
Commit 26a7d32

chore: remove pre-cxx11 abi references in doc (#3503)
1 parent 0da76f3 commit 26a7d32

File tree

3 files changed: +6 -38 lines

docsrc/RELEASE_CHECKLIST.md

+1 -2

@@ -63,9 +63,8 @@ will result in a minor version bump and significant bug fixes will result in a p
 - Paste in Milestone information and Changelog information into release notes
 - Generate libtorchtrt.tar.gz for the following platforms:
 - x86_64 cxx11-abi
-- x86_64 pre-cxx11-abi
 - TODO: Add cxx11-abi build for aarch64 when a manylinux container for aarch64 exists
-- Generate Python packages for Python 3.6/3.7/3.8/3.9 for x86_64
+- Generate Python packages for supported Python versions for x86_64
 - TODO: Build a manylinux container for aarch64
 - `docker run -it -v$(pwd)/..:/workspace/Torch-TensorRT build_torch_tensorrt_wheel /bin/bash /workspace/Torch-TensorRT/py/build_whl.sh` generates all wheels
 - To build container `docker build -t build_torch_tensorrt_wheel .`
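
For reference, a minimal sketch of the wheel-build flow the checklist describes, simply combining the two docker commands above. It assumes the working directory holds the wheel-builder Dockerfile inside a Torch-TensorRT checkout, so that $(pwd)/.. is the repository root:

    # Build the wheel-builder image, then run it to generate all wheels
    # (directory layout is an assumption based on the $(pwd)/.. mount above).
    docker build -t build_torch_tensorrt_wheel .
    docker run -it -v$(pwd)/..:/workspace/Torch-TensorRT \
        build_torch_tensorrt_wheel /bin/bash /workspace/Torch-TensorRT/py/build_whl.sh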

docsrc/getting_started/installation.rst

+5 -34

@@ -203,37 +203,20 @@ To build with debug symbols use the following command
 
 A tarball with the include files and library can then be found in ``bazel-bin``
 
-Pre CXX11 ABI Build
-............................
-
-To build using the pre-CXX11 ABI use the ``pre_cxx11_abi`` config
-
-.. code-block:: shell
-
-    bazel build //:libtorchtrt --config pre_cxx11_abi -c [dbg/opt]
-
-A tarball with the include files and library can then be found in ``bazel-bin``
-
-
 .. _abis:
 
 Choosing the Right ABI
 ^^^^^^^^^^^^^^^^^^^^^^^^
 
-Likely the most complicated thing about compiling Torch-TensorRT is selecting the correct ABI. There are two options
-which are incompatible with each other, pre-cxx11-abi and the cxx11-abi. The complexity comes from the fact that while
-the most popular distribution of PyTorch (wheels downloaded from pytorch.org/pypi directly) use the pre-cxx11-abi, most
-other distributions you might encounter (e.g. ones from NVIDIA - NGC containers, and builds for Jetson as well as certain
-libtorch builds and likely if you build PyTorch from source) use the cxx11-abi. It is important you compile Torch-TensorRT
-using the correct ABI to function properly. Below is a table with general pairings of PyTorch distribution sources and the
-recommended commands:
+For the old versions, there were two ABI options to compile Torch-TensorRT which were incompatible with each other,
+pre-cxx11-abi and cxx11-abi. The complexity came from the different distributions of PyTorch. Fortunately, PyTorch
+has switched to cxx11-abi for all distributions. Below is a table with general pairings of PyTorch distribution
+sources and the recommended commands:
 
 +------------------------------------------------------------+-----------------------------------------+------------------------------------------------------------+
 | PyTorch Source                                              | Recommended Python Compilation Command  | Recommended C++ Compilation Command                        |
 +============================================================+=========================================+============================================================+
-| PyTorch whl file from PyTorch.org                          | python -m pip install .                 | bazel build //:libtorchtrt -c opt \-\-config pre_cxx11_abi |
-+------------------------------------------------------------+-----------------------------------------+------------------------------------------------------------+
-| libtorch-shared-with-deps-*.zip from PyTorch.org           | python -m pip install .                 | bazel build //:libtorchtrt -c opt \-\-config pre_cxx11_abi |
+| PyTorch whl file from PyTorch.org                          | python -m pip install .                 | bazel build //:libtorchtrt -c opt                          |
 +------------------------------------------------------------+-----------------------------------------+------------------------------------------------------------+
 | libtorch-cxx11-abi-shared-with-deps-*.zip from PyTorch.org | python setup.py bdist_wheel             | bazel build //:libtorchtrt -c opt                          |
 +------------------------------------------------------------+-----------------------------------------+------------------------------------------------------------+
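
With the pre-cxx11-abi rows gone, the table above recommends the same pair of commands for every PyTorch distribution. A minimal sketch, assuming a Torch-TensorRT checkout as the working directory:

    # Python package install (any PyTorch distribution listed in the table)
    python -m pip install .

    # C++ library and CLI; the tarball with include files and library lands in bazel-bin
    bazel build //:libtorchtrt -c opt
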
@@ -339,10 +322,6 @@ To build natively on aarch64-linux-gnu platform, configure the ``WORKSPACE`` wit
 In the case that you installed with ``sudo pip install`` this will be ``/usr/local/lib/python3.8/dist-packages/torch``.
 In the case you installed with ``pip install --user`` this will be ``$HOME/.local/lib/python3.8/site-packages/torch``.
 
-In the case you are using NVIDIA compiled pip packages, set the path for both libtorch sources to the same path. This is because unlike
-PyTorch on x86_64, NVIDIA aarch64 PyTorch uses the CXX11-ABI. If you compiled for source using the pre_cxx11_abi and only would like to
-use that library, set the paths to the same path but when you compile make sure to add the flag ``--config=pre_cxx11_abi``
-
 .. code-block:: shell
 
     new_local_repository(
@@ -351,12 +330,6 @@ use that library, set the paths to the same path but when you compile make sure
         build_file = "third_party/libtorch/BUILD"
     )
 
-    new_local_repository(
-        name = "libtorch_pre_cxx11_abi",
-        path = "/usr/local/lib/python3.8/dist-packages/torch",
-        build_file = "third_party/libtorch/BUILD"
-    )
-
 
 Compile C++ Library and Compiler CLI
 ........................................................
@@ -385,6 +358,4 @@ Compile the Python API using the following command from the ``//py`` directory:
 
     python3 setup.py install
 
-If you have a build of PyTorch that uses Pre-CXX11 ABI drop the ``--use-pre-cxx11-abi`` flag
-
 If you are building for Jetpack 4.5 add the ``--jetpack-version 5.0`` flag
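
After this change the Python API build no longer needs any ABI flag. A hypothetical invocation from the ``//py`` directory, assuming the Jetpack flag is appended directly to the install command as the surrounding text suggests:

    cd py
    python3 setup.py install                          # standard build
    python3 setup.py install --jetpack-version 5.0    # when building for Jetpack 4.5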

docsrc/user_guide/runtime.rst

-2

@@ -22,8 +22,6 @@ link ``libtorchtrt_runtime.so`` in your deployment programs or use ``DL_OPEN`` o
 you can load the runtime with ``torch.ops.load_library("libtorchtrt_runtime.so")``. You can then continue to use
 programs just as you would otherwise via PyTorch API.
 
-.. note:: If you are using the standard distribution of PyTorch in Python on x86, likely you will need the pre-cxx11-abi variant of ``libtorchtrt_runtime.so``, check :ref:`Installation` documentation for more details.
-
 .. note:: If you are linking ``libtorchtrt_runtime.so``, likely using the following flags will help ``-Wl,--no-as-needed -ltorchtrt -Wl,--as-needed`` as there's no direct symbol dependency to anything in the Torch-TensorRT runtime for most Torch-TensorRT runtime applications
 
 An example of how to use ``libtorchtrt_runtime.so`` can be found here: https://github.com/pytorch/TensorRT/tree/master/examples/torchtrt_runtime_example
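
For context on the remaining note, a hypothetical link line for a C++ deployment application. Only the ``-Wl,--no-as-needed -ltorchtrt -Wl,--as-needed`` flags come from the documentation; the source file name and library search path are placeholders:

    # Placeholder paths; the -Wl,--no-as-needed/--as-needed wrapper keeps the linker
    # from dropping the runtime library, since the application has no direct symbol
    # dependency on anything in the Torch-TensorRT runtime.
    g++ deploy_app.cpp -o deploy_app \
        -L/path/to/torch_tensorrt/lib -Wl,--no-as-needed -ltorchtrt -Wl,--as-needed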
