Hi all,

I've got two models which run sequentially. One of them detects objects and is a non-OpenVINO model; the other is a public OpenVINO classifier model which runs on crops generated from the detection model. I'm able to implement an OpenVINO detection model when it sits at the top of the network, but I'm struggling to make the inference async inside the other model's inference pipeline. How can I implement my OpenVINO classifier model inside another network's inference pipeline with C++ (a Python snippet works for me as well)?

Here is Python pseudo-code for a better understanding of the pipeline:
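(Reconstructed sketch, not the original snippet; `detect_objects`, the dummy frame, and the `classifier.xml` path are illustrative placeholders.)

```python
import numpy as np
import openvino as ov

# Placeholder: compile the OpenVINO classifier ("classifier.xml" stands in
# for the real model path).
compiled_model = ov.Core().compile_model("classifier.xml", "CPU")

def detect_objects(frame):
    # Stand-in for the non-OpenVINO detector: returns cropped regions.
    return [frame[0:64, 0:64], frame[64:128, 64:128]]

frame = np.zeros((480, 640, 3), dtype=np.float32)
for crop in detect_objects(frame):
    # Synchronous classification per crop -- the blocking call that
    # should become asynchronous.
    result = compiled_model(np.expand_dims(crop, 0))
```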
Any help is appreciated.
Greetings

Replies: 3 comments 1 reply

-
Hello @Spawnfile, I think you may try to call OpenVINO inference in the callback which is passed to the first inference function:

```python
from openvino import AsyncInferQueue

# num_requests parallel infer requests shared by all callbacks
infer_queue = AsyncInferQueue(compiled_model, num_requests)

def callback(detections):
    # enqueue async OpenVINO inference on the first model's output
    infer_queue.start_async(detections)

another_infer(data, callback)
```

You may check the AsyncInferQueue API documentation for details.
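For completeness, here is a minimal self-contained sketch of that suggestion, assuming a recent OpenVINO Python API; the `detect_objects` stand-in, the `classifier.xml` path, and the jobs count of 4 are illustrative, not from the thread:

```python
import numpy as np
import openvino as ov

core = ov.Core()
compiled_model = core.compile_model("classifier.xml", "CPU")  # placeholder path
infer_queue = ov.AsyncInferQueue(compiled_model, jobs=4)

results = []

def on_done(request, userdata):
    # Runs when one async classification finishes; copy the output,
    # since the tensor data is only valid inside the callback.
    results.append((userdata, request.get_output_tensor().data.copy()))

infer_queue.set_callback(on_done)

def detect_objects(frame):
    # Stand-in for the non-OpenVINO detector.
    return [frame[0:64, 0:64], frame[64:128, 64:128]]

frame = np.zeros((480, 640, 3), dtype=np.float32)
for idx, crop in enumerate(detect_objects(frame)):
    # start_async only blocks when all `jobs` requests are busy.
    infer_queue.start_async({0: np.expand_dims(crop, 0)}, userdata=idx)

infer_queue.wait_all()  # wait for every queued request to complete
```

In the C++ API, the analogous approach is to create several ov::InferRequest objects and chain them with set_callback() and start_async().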
-
Hi @Spawnfile, please check the OpenVINO samples that demonstrate async inference.
Best regards,
-
Have a look at this, @Spawnfile: https://gist.github.com/dkurt/59a7e7b5b45f9c46b31e01ce47d098a2