
Customizing multi camera streaming pipeline #9

Open
sharoseali opened this issue Sep 1, 2022 · 7 comments


@sharoseali

sharoseali commented Sep 1, 2022

Hi, thanks for your great work. I have a query regarding the multi-camera stream pipeline. In my case, I have a config file with 4 video sources, on which I am applying object detection using YOLOv5. After detecting objects, I have to calculate distances and do other customizations on the bounding boxes and acquired frames. How can I access this data in my Python script? Any suggestions, please? @joxis

@julestalloen
Member

Hi @sharoseali! Glad you like the repository 🙂

Could you provide a bit more information on what exactly you want to achieve? Do you need the timestamps at which objects were detected? With the current implementation you can already access frames and detections via a probe.
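A probe of that kind might look roughly like the following sketch, assuming the DeepStream Python bindings (`pyds`) are installed; `detections_probe` and `rect_to_xyxy` are illustrative names, and you would attach the probe to e.g. the tiler's sink pad with `add_probe(Gst.PadProbeType.BUFFER, detections_probe, None)`:

```python
# Hedged sketch of a buffer probe that walks the DeepStream batch metadata
# and extracts bounding boxes; requires the pyds bindings at runtime.
try:
    import pyds
    from gi.repository import Gst
except ImportError:
    pyds = Gst = None  # bindings unavailable; sketch only

def rect_to_xyxy(left, top, width, height):
    """Convert DeepStream rect params to (x1, y1, x2, y2)."""
    return (left, top, left + width, top + height)

def detections_probe(pad, info, _udata):
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj = pyds.NvDsObjectMeta.cast(l_obj.data)
            r = obj.rect_params
            box = rect_to_xyxy(int(r.left), int(r.top), int(r.width), int(r.height))
            # `box`, obj.confidence, and frame_meta.pad_index (the stream id)
            # are now available for custom post-processing.
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```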

@sharoseali
Author

sharoseali commented Sep 2, 2022

Here is the working config file I use to get video streams from 4 sources. Now I want to get these frames and the corresponding detections, customize them, and display them again on 4 screens.

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[tiled-display]
enable=1
rows=2
columns=2
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=1
camera-width=640
camera-height=480
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=4


[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=1
camera-width=640
camera-height=480
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=2

[source2]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=1
camera-width=640
camera-height=480
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=4

[source3]
enable=1
type=3
uri=file:/home/rpt/Udentify/keas_trt_code/inputs/3.mp4
num-sources=1
gpu-id=0
cudadec-memtype=0

[sink0]
enable=1
type=2
sync=0
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=1
gpu-id=0
border-width=5
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=0
batch-size=4
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV5.txt

[tests]
file-loop=0

Could you provide a bit more information on what exactly you want to achieve?

I want to calculate distances and redraw some data values based on the bounding-box values.
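For the distance part, a minimal pure-Python sketch of centre-to-centre pixel distance between two boxes might look like this (helper names are my own; boxes are `(x1, y1, x2, y2)`):

```python
import math

def bbox_center(box):
    """Center (cx, cy) of an (x1, y1, x2, y2) box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def pixel_distance(box_a, box_b):
    """Euclidean distance in pixels between two box centers."""
    (ax, ay), (bx, by) = bbox_center(box_a), bbox_center(box_b)
    return math.hypot(ax - bx, ay - by)
```

Note that this gives a distance in pixels; converting to real-world units would need camera calibration.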

@sharoseali
Author

@joxis Any response, please?
To explain a bit more: I want to use my YOLOv5 config file with Python to extract boxes using DeepStream. I need two things to customize my detector output: 1) the camera frame (I will draw a circle on it using OpenCV), and 2) the bounding boxes (x1, y1, x2, y2), to plot them differently. Sorry if my last comment was unclear.
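The OpenCV drawing step could be sketched like this, assuming `pyds` and `cv2` are installed and the buffer is in an RGBA, CPU-accessible memory type (which `pyds.get_nvds_buf_surface` requires); the helper names are my own:

```python
# Hedged sketch: draw a circle on a DeepStream frame with OpenCV.
try:
    import cv2
    import pyds
except ImportError:
    cv2 = pyds = None  # bindings unavailable; sketch only

def circle_for_box(x1, y1, x2, y2):
    """Circle centered on a box, radius = half its shorter side."""
    cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
    return (cx, cy), min(x2 - x1, y2 - y1) // 2

def draw_circle(gst_buffer, frame_meta, box):
    # get_nvds_buf_surface maps the frame as a NumPy RGBA array.
    frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
    center, radius = circle_for_box(*box)
    cv2.circle(frame, center, radius, (0, 255, 0, 255), thickness=2)
```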

@julestalloen
Member

The current implementation does not support multiple video streams yet. Most code is already in place to make this possible but some final adjustments are needed. Did you already manage to get the boilerplate working with multiple streams? Or did you only use the standard sample application?

@sharoseali
Author

sharoseali commented Sep 6, 2022

Meanwhile, I am still trying to manage multi-stream in the current code. I want to ask which specific things I have to edit in app/pipeline.py. Also, the current implementation uses a tracker, but I don't need it for now. How should I rebuild my pipeline without a tracker? Thanks

@julestalloen
Member

The _create_source_bin function should already create a bin supporting multiple streams. You will, however, have to adjust how it is called (the index parameter is currently not used). An example can be found here.

With respect to the tracker, the easiest method is to just use the IOU tracker. It is fast and should not impact performance too much. If you really want to remove the tracker you would have to remove this line. You can also subclass the base pipeline and override the _create_tracker function so it does nothing.
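The subclassing route might look roughly like this; `BasePipeline` here is only a stand-in for the repository's actual pipeline class, to illustrate the overriding pattern:

```python
class BasePipeline:
    """Stand-in for the repo's pipeline class (illustrative, not the real API)."""
    def __init__(self):
        self.elements = []
        self._create_tracker()

    def _create_tracker(self):
        # In the real pipeline this would create and add an nvtracker element.
        self.elements.append("nvtracker")

class TrackerlessPipeline(BasePipeline):
    def _create_tracker(self):
        # Override to skip creating the tracker element entirely.
        pass
```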

@sharoseali
Author

sharoseali commented Sep 8, 2022

Hi, I have almost managed to arrange the code for multi-streaming and it is nearly working (so I deleted my last comment).
But while running the code after the changes, I ran into an issue with live camera streaming. I want to read streams from USB cams connected to my Jetson NX kit, so I provide the camera IDs this way:
['/dev/video3', '/dev/video2', '1.mp4', '2.mp4'] for 4 sources.
I get the following error in the GStreamer pipeline. I don't know the reason behind it; if someone knows, please share. Here are my output logs:


python3 run.py
INFO:app.Custom_pipeline.AnonymizationPipeline:Playing from URI None
INFO:app.Custom_pipeline.AnonymizationPipeline:Creating Pipeline
INFO:app.Custom_pipeline.AnonymizationPipeline:Creating Stream mux
INFO:app.Custom_pipeline.AnonymizationPipeline:Creating Source bin
INFO:app.Custom_pipeline.AnonymizationPipeline:Creating URI decode bin
INFO:app.Custom_pipeline.AnonymizationPipeline:Linking elements in the Pipeline: stream-muxer -> source-bin-00
INFO:app.Custom_pipeline.AnonymizationPipeline:Creating Source bin
INFO:app.Custom_pipeline.AnonymizationPipeline:Creating URI decode bin
INFO:app.Custom_pipeline.AnonymizationPipeline:Linking elements in the Pipeline: stream-muxer -> source-bin-00 -> source-bin-01
INFO:app.Custom_pipeline.AnonymizationPipeline:Creating Source bin
INFO:app.Custom_pipeline.AnonymizationPipeline:Creating URI decode bin
INFO:app.Custom_pipeline.AnonymizationPipeline:Linking elements in the Pipeline: stream-muxer -> source-bin-00 -> source-bin-01 -> source-bin-02
INFO:app.Custom_pipeline.AnonymizationPipeline:Creating Source bin
INFO:app.Custom_pipeline.AnonymizationPipeline:Creating URI decode bin
INFO:app.Custom_pipeline.AnonymizationPipeline:Linking elements in the Pipeline: stream-muxer -> source-bin-00 -> source-bin-01 -> source-bin-02 -> source-bin-03
INFO:app.Custom_pipeline.AnonymizationPipeline:Creating PGIE
INFO:app.Custom_pipeline.AnonymizationPipeline:Creating Converter 1
INFO:app.Custom_pipeline.AnonymizationPipeline:Creating Caps filter 1
INFO:app.Custom_pipeline.AnonymizationPipeline:Creating Tiler
INFO:app.Custom_pipeline.AnonymizationPipeline:Creating Converter 2
INFO:app.Custom_pipeline.AnonymizationPipeline:Creating Queue 1
INFO:app.Custom_pipeline.AnonymizationPipeline:Creating Converter 3
INFO:app.Custom_pipeline.AnonymizationPipeline:Creating Caps filter 2
INFO:app.Custom_pipeline.AnonymizationPipeline:Creating Encoder
INFO:app.Custom_pipeline.AnonymizationPipeline:Creating Parser
INFO:app.Custom_pipeline.AnonymizationPipeline:Creating Container
INFO:app.Custom_pipeline.AnonymizationPipeline:Creating Sink
INFO:app.Custom_pipeline.AnonymizationPipeline:Starting pipeline
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
Deserialize yoloLayer plugin: yolo
0:00:08.110955645  7949     0x301108d0 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/home/rpt/Udentify/DeepStream-Yolo/deepstream-python/deepstream/configs/model_b1_gpu0_fp32.engine
INFO: [Implicit Engine Info]: layers num: 5
0   INPUT  kFLOAT data            3x320x320       
1   OUTPUT kFLOAT num_detections  1               
2   OUTPUT kFLOAT detection_boxes 6375x4          
3   OUTPUT kFLOAT detection_scores 6375            
4   OUTPUT kFLOAT detection_classes 6375            

0:00:08.183567375  7949     0x301108d0 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /home/rpt/Udentify/DeepStream-Yolo/deepstream-python/deepstream/configs/model_b1_gpu0_fp32.engine
0:00:08.195152594  7949     0x301108d0 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:/home/rpt/Udentify/DeepStream-Yolo/deepstream-python/deepstream/app/../configs/deepstream_app_config_4_cam.txt sucessfully
Error: gst-resource-error-quark: Invalid URI "/dev/video3". (3): gsturidecodebin.c(1383): gen_source_element (): /GstPipeline:pipeline0/GstBin:source-bin-00/GstURIDecodeBin:uri-decode-bin

INFO:app.Custom_pipeline.AnonymizationPipeline:Exiting pipeline

The available camera IDs shown by the command v4l2-ctl --list-devices are below:

NVIDIA Tegra Video Input Device (platform:tegra-camrtc-ca):
	/dev/media0

Webcam C170: Webcam C170 (usb-3610000.xhci-2.1):
	/dev/video0
	/dev/video1
	/dev/media1

Webcam C170: Webcam C170 (usb-3610000.xhci-2.2):
	/dev/video2
	/dev/video3
	/dev/media2
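A likely cause of the Invalid URI "/dev/video3" error above is that uridecodebin expects a URI rather than a raw device path; GStreamer's v4l2src registers the v4l2:// scheme, so prefixing the device nodes may resolve it. A small helper along these lines could normalize the source list (`to_uri` is my own name, not part of the repo):

```python
import os

def to_uri(source: str) -> str:
    """Normalize a source string into a URI that uridecodebin can accept."""
    if source.startswith("/dev/video"):
        return "v4l2://" + source               # e.g. v4l2:///dev/video3
    if "://" in source:
        return source                           # already a URI (file://, rtsp://, ...)
    return "file://" + os.path.abspath(source)  # plain file path
```

For example, `[to_uri(s) for s in ['/dev/video3', '/dev/video2', '1.mp4', '2.mp4']]` would yield URIs for all four sources.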

One more thing: in the _create_mp4_sink_bin function I changed
mp4_sink_bin.add_pad(Gst.GhostPad("sink", nvvidconv3.get_static_pad("sink"))) to
bin_pad = mp4_sink_bin.add_pad(Gst.GhostPad.new_no_target("src", Gst.PadDirection.SRC))

This is because I was getting this error:

INFO:app.Custom_pipeline.AnonymizationPipeline:Creating Sink
Traceback (most recent call last):
  File "run.py", line 11, in <module>
    run_anonymization_pipeline(args.list)
  File "/home/rpt/Udentify/DeepStream-Yolo/deepstream-python/deepstream/app/core.py", line 32, in run_anonymization_pipeline
    pipeline = AnonymizationPipeline(
  File "/home/rpt/Udentify/DeepStream-Yolo/deepstream-python/deepstream/app/pipelines/anonymization.py", line 19, in __init__
    super().__init__(*args, **kwargs)
  File "/home/rpt/Udentify/DeepStream-Yolo/deepstream-python/deepstream/app/Custom_pipeline.py", line 107, in __init__
    self._create_elements()
  File "/home/rpt/Udentify/DeepStream-Yolo/deepstream-python/deepstream/app/Custom_pipeline.py", line 364, in _create_elements
    self.sink_bin = self._create_mp4_sink_bin()
  File "/home/rpt/Udentify/DeepStream-Yolo/deepstream-python/deepstream/app/Custom_pipeline.py", line 286, in _create_mp4_sink_bin
    mp4_sink_bin.add_pad(Gst.GhostPad("sink", nvvidconv3.get_static_pad("sink")))
TypeError: GObject.__init__() takes exactly 0 arguments (2 given)

So where exactly am I wrong? Any guidance?
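For what it's worth, the TypeError above is the usual symptom of PyGObject not accepting positional constructor arguments for Gst.GhostPad; the common fix is to call Gst.GhostPad.new("sink", target_pad) rather than switching to a no-target src pad (which would leave the sink bin without a linked "sink" pad). A hedged sketch, with `add_ghost_sink` as my own helper name:

```python
# Hedged sketch: expose an element's sink pad on its bin via a ghost pad.
try:
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst
except (ImportError, ValueError):
    Gst = None  # GStreamer bindings unavailable; sketch only

def add_ghost_sink(bin_, element):
    """Expose `element`'s static sink pad on `bin_` as a ghost pad."""
    if Gst is None:
        raise RuntimeError("GStreamer Python bindings (gi) are required")
    target = element.get_static_pad("sink")
    ghost = Gst.GhostPad.new("sink", target)  # use .new(), not the bare constructor
    return bin_.add_pad(ghost)
```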
