PTQ support for ViT models #4002
If you use the native TensorRT API to build the network, you can refer to the sketch below.
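A minimal sketch of that route, assuming an ONNX model that already contains Q/DQ nodes (the file names are placeholders, not from the original comment):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(0)  # explicit batch is the default in TRT 10
parser = trt.OnnxParser(network, logger)

# Parse an ONNX model that already contains QuantizeLinear/DequantizeLinear nodes.
with open("vit_quant.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)  # allow INT8 kernels for the Q/DQ regions
config.set_flag(trt.BuilderFlag.FP16)  # let unquantized layers run in FP16

serialized = builder.build_serialized_network(network, config)
with open("vit_quant.engine", "wb") as f:
    f.write(serialized)
```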
Also, you can follow this sample: https://github.com/NVIDIA/TensorRT-Model-Optimizer/tree/main/onnx_ptq
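A rough sketch of what that ONNX PTQ path looks like; the `quantize` entry point and its argument names are stated from memory of the modelopt docs and should be checked against the linked sample, and the file names are placeholders:

```python
import numpy as np
from modelopt.onnx.quantization import quantize

# A batch of preprocessed images to calibrate on; random data stands in here.
# Depending on the modelopt version, this may need to be a dict keyed by
# the ONNX input name instead of a bare array.
calib_data = np.random.rand(32, 3, 224, 224).astype(np.float32)

# Insert Q/DQ nodes into the ONNX model using INT8 calibration.
quantize(
    onnx_path="vit.onnx",
    quantize_mode="int8",
    calibration_data=calib_data,
    output_path="vit.quant.onnx",
)
```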
Hi, sorry for the delayed answer. Here are the layer logs for the two models: vision_vit_best.log and timm_vit_best.log. Also, I am using the simplest stock ViT models (see the reproduction script in the original post), so you should theoretically be able to reproduce my results and get any extra debugging information you need.

Regarding TensorRT-Model-Optimizer, I'll try it, but the current situation is honestly quite annoying. There are too many supposedly "official" (or at least endorsed) ways to do the same thing, and most of them either don't work at all or produce suboptimal results (and they often don't give easily interpretable outputs that could be used to verify that they are doing the right thing). Here's a non-exhaustive list of supposedly "official" (endorsed by either PyTorch or TensorRT) quantization methods that support post-training quantization of PyTorch models for TensorRT inference:

- pytorch_quantization
- TensorRT-Model-Optimizer
- PyTorch ao (Eager Mode Quantization, FX Graph Mode Quantization)
I think that TensorRT and PyTorch could benefit from concentrating their efforts on a single project instead of duplicating development effort.
I think you can refer to https://github.com/NVIDIA/TensorRT/tree/release/10.2/demo/BERT
PyTorch-Quantization is a toolkit for training and evaluating PyTorch models with simulated quantization. Quantization can be added to the model automatically or manually, allowing the model to be tuned for accuracy and performance. Quantization is compatible with NVIDIA's high-performance integer kernels, which leverage integer Tensor Cores. The quantized model can be exported to ONNX and imported by TensorRT 8.0 and later.

PyTorch ao (Eager Mode Quantization, FX Graph Mode Quantization): at a lower level, PyTorch provides a way to represent quantized tensors and perform operations with them. These can be used to directly construct models that perform all or part of the computation in lower precision. Higher-level APIs are provided that incorporate typical workflows for converting an FP32 model to lower precision with minimal accuracy loss.

Also, if you use PyTorch QAT to get a quantized ONNX model, it is not supported by trtexec (the engine build fails).
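For contrast with pytorch_quantization, a minimal PTQ sketch of the FX Graph Mode path mentioned above (the model choice and calibration data are illustrative assumptions; note that this path targets PyTorch's own quantized backends rather than TensorRT, and FX symbolic tracing may fail on some ViT implementations):

```python
import torch
import torchvision
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

# Stock ViT in eval mode; PTQ observers must be inserted on an eval model.
model = torchvision.models.vit_b_16(weights="DEFAULT").eval()
example_inputs = (torch.randn(1, 3, 224, 224),)

# Insert observers according to the default PTQ config for the fbgemm backend.
qconfig_mapping = get_default_qconfig_mapping("fbgemm")
prepared = prepare_fx(model, qconfig_mapping, example_inputs)

# Calibrate on a few batches; random tensors stand in for real data.
with torch.no_grad():
    for _ in range(8):
        prepared(torch.randn(8, 3, 224, 224))

# Replace observers with quantize/dequantize ops.
quantized = convert_fx(prepared)
```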
Description
I am trying to figure out whether TensorRT and the `pytorch_quantization` module support post-training quantization for vision transformers.

The following piece of code follows the `pytorch_quantization` docs almost verbatim (with small changes for compatibility). After that, I visualize the resulting engine graph with `trex`.

The conversion succeeds; however, the graph barely uses any INT8 operations. I would have expected almost the whole graph to consist of `Int8` operators, but instead most edges in the graph are labeled as `Float`, with only a few `Int8`s.

Is this expected? My understanding was that most operators in transformers were supposed to be quantizable (with the notable exception of `LayerNorm` and `Softmax`, which would require special custom layers for quantization).
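The quantization script itself was not captured in this page, but a minimal sketch of the flow the pytorch_quantization docs prescribe looks like this (the model name, batch sizes, and file names are illustrative, and random tensors stand in for real calibration data):

```python
import timm
import torch
from pytorch_quantization import nn as quant_nn, quant_modules

# Replace torch.nn layers with quantized counterparts before model creation.
quant_modules.initialize()
model = timm.create_model("vit_tiny_patch16_224", pretrained=True).cuda().eval()

# Put every TensorQuantizer into calibration mode.
for module in model.modules():
    if isinstance(module, quant_nn.TensorQuantizer):
        module.disable_quant()
        module.enable_calib()

# Collect activation statistics on representative data.
with torch.no_grad():
    for _ in range(16):
        model(torch.randn(8, 3, 224, 224, device="cuda"))

# Freeze the collected ranges and switch back to quantized evaluation.
for module in model.modules():
    if isinstance(module, quant_nn.TensorQuantizer):
        module.load_calib_amax()
        module.enable_quant()
        module.disable_calib()

# Export with explicit Q/DQ nodes so TensorRT can consume the model.
quant_nn.TensorQuantizer.use_fb_fake_quant = True
torch.onnx.export(
    model,
    torch.randn(1, 3, 224, 224, device="cuda"),
    "vit_quant.onnx",
    opset_version=13,
)
```

The exported model can then be built with `trtexec --onnx=vit_quant.onnx --int8 --fp16` and the resulting engine graph inspected with trex.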
Relevant Files
vit_tiny_patch16_224 (timm)
vit_b_16 (torchvision)
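For completeness, the two stock models can be instantiated as follows (the pretrained-weights arguments are the standard defaults, assumed here):

```python
import timm
import torchvision

# The two stock ViT models referenced above.
tiny_vit = timm.create_model("vit_tiny_patch16_224", pretrained=True)
base_vit = torchvision.models.vit_b_16(
    weights=torchvision.models.ViT_B_16_Weights.DEFAULT
)
```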
Environment
TensorRT Version: 10.0.0.6
NVIDIA GPU: NVIDIA RTX A6000
NVIDIA Driver Version: 535.171.04
CUDA Version: 12.2
CUDNN Version: 8
Operating System: Ubuntu 22.04
Python Version (if applicable): 3.10.12
PyTorch Version (if applicable): 2.3.0
Baremetal or Container (if so, version): nvidia/cuda:12.1.1-cudnn8-devel-ubuntu22.04 docker container