trtexec error: IPluginRegistry::getCreator: Error Code 4: Cannot find plugin: grid_sampler, version: 1 #4160
Comments
I also noticed these Execution Provider requirements listed on the ONNX Runtime webpage:
Based on these, does this mean that converting the ONNX model to a TRT plan with the latest TRT version 10.3, while the custom op is built against ONNX Runtime 1.15.1, is just not going to be possible? Or is there a way to achieve this?
No, it's due to grid_sampler. Try to export LD_LIBRARY_PATH=/opt/tritonserver/backends/onnxruntime/libmmdeploy_onnxruntime_ops.so:$LD_LIBRARY_PATH and then rerun.
@lix19937 - that did not work. I see the same error:
Another workaround: you can use torch.nn.functional.grid_sample to replace the mm version. grid_sample is now a built-in layer in TRT 8.6, so you do not need to load a plugin. @kelkarn
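For illustration, a minimal sketch (the module name is made up, not the actual MMDeploy code path) of a block that calls the native operator directly, so no mmdeploy plugin is needed for this layer:

```python
# Minimal sketch: call torch.nn.functional.grid_sample directly, so the exported graph
# uses the standard GridSample op instead of the mmdeploy grid_sampler custom op.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NativeGridSampleBlock(nn.Module):  # hypothetical name, not from MMDeploy
    def forward(self, feat, grid):
        # feat: (N, C, H, W) feature map; grid: (N, H_out, W_out, 2) in [-1, 1]
        return F.grid_sample(feat, grid, mode="bilinear",
                             padding_mode="zeros", align_corners=False)
```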
And please make sure you are using the latest opset version, 17, to export the ONNX model.
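A hedged example of such an export at opset 17 (file name and tensor shapes are placeholders), checking that the resulting graph contains the standard GridSample op rather than a custom-domain grid_sampler node:

```python
# Export a small grid_sample module at opset 17 and inspect the ops in the graph.
import onnx
import torch
import torch.nn.functional as F

class Sampler(torch.nn.Module):
    def forward(self, feat, grid):
        return F.grid_sample(feat, grid, align_corners=False)

feat = torch.randn(1, 256, 32, 32)
grid = torch.rand(1, 64, 64, 2) * 2 - 1          # normalized sampling coordinates
torch.onnx.export(Sampler().eval(), (feat, grid), "grid_sample_op17.onnx",
                  opset_version=17, input_names=["feat", "grid"],
                  output_names=["sampled"])

ops = {n.op_type for n in onnx.load("grid_sample_op17.onnx").graph.node}
print("GridSample" in ops)                        # expected True at opset >= 16
```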
I used opset_version = 17 to export the ONNX model, but with TensorRT 10.6 and CUDA 10.8 I get the error IPluginRegistry::getCreator: Error Code 4: API Usage Error (Cannot find plugin: TRTBatchedNMS, version: 1, namespace:.), even though I have already built mmdeploy_tensorrt_ops.dll in mmdeploy.
Make sure the directory containing your .dll is added to the PATH environment variable on Windows.
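If it still fails, one hedged way to check the registration from Python (the path below is a placeholder; this mirrors what trtexec needs, it is not trtexec itself) is to load the plugin library explicitly and list the registered creators:

```python
# Load the mmdeploy TensorRT plugin library explicitly and list registered creators.
# ctypes.CDLL runs the library's static initializers, which is what registers
# TRTBatchedNMS (and the other mmdeploy ops) with TensorRT's plugin registry.
import ctypes
import os

import tensorrt as trt

PLUGIN_DLL = r"C:\workspace\mmdeploy\build\bin\Release\mmdeploy_tensorrt_ops.dll"  # placeholder path

os.add_dll_directory(os.path.dirname(PLUGIN_DLL))  # lets Windows resolve dependent DLLs
ctypes.CDLL(PLUGIN_DLL)                            # plugin creators register on load

logger = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(logger, "")            # also register TensorRT's built-in plugins

creators = trt.get_plugin_registry().plugin_creator_list
print(sorted(c.name for c in creators if "NMS" in c.name))
```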
I see the following error when I run my trtexec command from within the container:

This error is followed by a bunch of errors on the `Unsqueeze` node like so:

The model here is a DINO model converted to ONNX using MMDeploy, together with a custom op. The custom op symbol in `libmmdeploy_onnxruntime_ops.so` uses `libonnxruntime.so.1.15.1`, which I have also copied into the Docker container and added to my `LD_LIBRARY_PATH`. I am using the `nvcr.io/nvidia/tensorrt:24.08-py3` Docker image and the `trtexec` binary built with TensorRT 10.3.

I found this other similar issue: onnx/onnx-tensorrt#800. The conclusion there was that TensorRT does not support the `Round` operation yet. Is that the same conclusion here, i.e. the `grid_sampler` operation is not supported in TensorRT yet? There's an issue for this too that I found (Issue #2612) that was marked 'Closed', but it looks like my issue is exactly the same.