Install ONNX
Install onnx-tensorrt
This is a small tool that converts ONNX models into TensorRT engines.
Its usage is similar to TensorRT's built-in tooling for converting an ONNX model into an engine.
You need to download the ONNX source code and place it under the third_party directory.
You also need to download TensorRT 8.
I have uploaded the whole project and data to Baidu Netdisk:
Link: https://pan.baidu.com/s/1pHs5Qdeqmz4ppGDHPSTnOQ?pwd=7djr
Extraction code: 7djr
Copy the ONNX file into the build folder and run:
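A minimal invocation sketch, assuming your model file is named model.onnx and that the onnx2trt binary was built in the current directory (adjust both names to your setup):

```bash
./onnx2trt model.onnx -o model.trt
```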
TensorRT Backend For ONNX
Parses ONNX models for execution with TensorRT.
See also the TensorRT documentation.
For the list of recent changes, see the changelog.
For a list of commonly seen issues and questions, see the FAQ.
For business inquiries, please contact researchinquiries@nvidia.com
For press and other inquiries, please contact Hector Marinez at hmarinez@nvidia.com
Supported TensorRT Versions
Development on the Master branch is for the latest version of TensorRT 8.2.3.0 with full-dimensions and dynamic shape support.
For previous versions of TensorRT, refer to their respective branches.
Full Dimensions + Dynamic Shapes
Building INetwork objects in full dimensions mode with dynamic shape support requires calling the following API:
C++
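A sketch using the TensorRT C++ API, assuming `builder` is an already-created `nvinfer1::IBuilder*`:

```cpp
// Create the network in explicit-batch mode, which is required for
// full-dimensions and dynamic-shape support.
const auto explicitBatch = 1U << static_cast<uint32_t>(
    nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
nvinfer1::INetworkDefinition* network = builder->createNetworkV2(explicitBatch);
```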
Python
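The equivalent call through the TensorRT Python API, assuming `builder` is an existing `trt.Builder`:

```python
import tensorrt as trt

# Create the network with the explicit-batch flag set.
network_flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
network = builder.create_network(network_flags)
```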
For examples of usage of these APIs, see the samples shipped with the TensorRT repository.
Supported Operators
Current supported ONNX operators are found in the operator support matrix.
Installation
Dependencies
Building
For building within Docker, we recommend using and setting up the Docker containers as instructed in the main [TensorRT repository](https://github.com/NVIDIA/TensorRT#setting-up-the-build-environment) to build the onnx-tensorrt library.
Once you have cloned the repository, you can build the parser libraries and executables by running:
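A sketch of the usual CMake flow, with `<path_to_trt>` standing in for your TensorRT installation directory:

```bash
cd onnx-tensorrt
mkdir build && cd build
cmake .. -DTENSORRT_ROOT=<path_to_trt> && make -j
# Make sure the newly built library is found at runtime:
export LD_LIBRARY_PATH=$PWD:$LD_LIBRARY_PATH
```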
Note that this project has a dependency on CUDA. By default the build will look in /usr/local/cuda for the CUDA toolkit installation. If your CUDA path is different, overwrite the default path by providing `-DCUDA_TOOLKIT_ROOT_DIR=<path_to_cuda_install>` in the CMake command.
For building only the libraries, append `-DBUILD_LIBRARY_ONLY=1` to the CMake build command.
Experimental Ops
All experimental operators will be considered unsupported by ONNX-TRT's `supportsModel()` function.
`NonMaxSuppression` is available as an experimental operator in TensorRT 8. It has the limitation that the output shape is always padded to length [`max_output_boxes_per_class`, 3], so some post-processing is required to extract the valid indices.
Executable Usage
ONNX models can be converted to serialized TensorRT engines using the `onnx2trt` executable:
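For example, with placeholder file names:

```bash
onnx2trt my_model.onnx -o my_engine.trt
```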
ONNX models can also be converted to human-readable text:
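For example, `-t` writes a text rendering of the parsed model:

```bash
onnx2trt my_model.onnx -t my_model.onnx.txt
```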
ONNX models can also be optimized by ONNX’s optimization libraries (added by dsandler).
To optimize an ONNX model and output a new one, use `-m` to specify the output model name and `-O` to specify a semicolon-separated list of optimization passes to apply:
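For example, with placeholder pass names:

```bash
onnx2trt my_model.onnx -O "pass_1;pass_2;pass_3" -m my_model_optimized.onnx
```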
See all available optimization passes by running:
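```bash
onnx2trt -p  # lists the available optimization passes
```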
See more usage information by running:
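```bash
onnx2trt -h  # prints the full usage text
```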
Python Modules
Python bindings for the ONNX-TensorRT parser are packaged in the shipped `.whl` files. Install them with:
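For example (the exact wheel file name depends on the TensorRT release and your Python version):

```bash
python3 -m pip install <tensorrt_install_dir>/python/tensorrt-8.x.x.x-cp<python_ver>-none-linux_x86_64.whl
```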
TensorRT 8.2.1.8 supports ONNX release 1.8.0. Install it with:
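```bash
python3 -m pip install onnx==1.8.0
```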
The ONNX-TensorRT backend can be installed by running:
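From the root of the onnx-tensorrt repository:

```bash
python3 setup.py install
```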
ONNX-TensorRT Python Backend Usage
The TensorRT backend for ONNX can be used in Python as follows:
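A sketch following the upstream README; the model path, device string, and input shape are illustrative:

```python
import numpy as np
import onnx
import onnx_tensorrt.backend as backend

# Load the ONNX model and build a TensorRT engine for it.
model = onnx.load("/path/to/model.onnx")
engine = backend.prepare(model, device="CUDA:1")

# Run inference on random input data; the shape must match the model's input.
input_data = np.random.random(size=(32, 3, 224, 224)).astype(np.float32)
output_data = engine.run(input_data)[0]
print(output_data)
print(output_data.shape)
```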
C++ Library Usage
The model parser library, libnvonnxparser.so, has its C++ API declared in this header: `NvOnnxParser.h`.
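A minimal end-to-end parsing sketch against that API (the model file name is a placeholder):

```cpp
#include <iostream>
#include <NvInfer.h>
#include <NvOnnxParser.h>

// Minimal logger implementation required by the TensorRT API.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) noexcept override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
};

int main()
{
    Logger logger;
    auto builder = nvinfer1::createInferBuilder(logger);
    const auto explicitBatch = 1U << static_cast<uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = builder->createNetworkV2(explicitBatch);

    // Create the ONNX parser and populate the network from a model file.
    auto parser = nvonnxparser::createParser(*network, logger);
    if (!parser->parseFromFile("model.onnx",
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING)))
    {
        std::cerr << "Failed to parse model.onnx" << std::endl;
        return 1;
    }
    return 0;
}
```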
Tests
After installation (or inside the Docker container), ONNX backend tests can be run as follows:
Real model tests only:
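Using the test script shipped with the repository:

```bash
python onnx_backend_test.py OnnxBackendRealModelTest
```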
All tests:
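```bash
python onnx_backend_test.py
```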
You can use the `-v` flag to make the output more verbose.
Pre-trained Models
Pre-trained models in ONNX format can be found at the ONNX Model Zoo.