Merge pull request #13 from MistySOM/develop
matinlotfali authored Jul 5, 2023
2 parents f272b1a + 8630746 commit 5110965
Showing 42 changed files with 799 additions and 60,180 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -41,6 +41,7 @@
*.su
*.idb
*.pdb
.cache/

# Kernel Module Compile Results
*.mod*
5 changes: 4 additions & 1 deletion .idea/customTargets.xml

Some generated files are not rendered by default.

11 changes: 9 additions & 2 deletions .idea/tools/External Tools.xml

Some generated files are not rendered by default.

68 changes: 57 additions & 11 deletions README.md
@@ -4,25 +4,21 @@

MistySOM RZV2L contains the DRPAI hardware module, which is able to run artificial neural networks
with a focus on low power consumption. To check if this hardware module is present on your device,
you can look for both `/dev/drpai0` and `/dev/udmabuf0` devices on your linux shell.
you can look for both `/dev/drpai0` and `/dev/udmabuf0` devices on your Linux shell.
The Userland Direct Memory Access (UDMA) kernel module is required to provide the trained AI model and
the input image to the DRPAI hardware. After activation, the hardware uses the trained model to
generate the output, which can be read back through the UDMA module. While DRPAI is running, the running thread will
go to sleep. Of course the sleep time varies based on the size of the AI model.
go to sleep. Of course, the sleep time varies based on the size of the AI model.

MistyWest team has prepared this plugin which can receive any kind of video input,
MistyWest team has prepared this GStreamer plugin which can receive any kind of video input,
such as a file (filesrc), a network stream (udpsrc), or a camera device (v4l2src) and outputs a video
with bounding boxes on inferred objects using the DRPAI. Later, this video can be linked to any kind of
output, such as the display (autovideosink), a network stream (udpsink), or a file (filesink).

![GStreamer DRPAI Plugin Chart](img/gst-drpai-chart.png)

**Note:** At this moment, the plugin is hardcoded to YOLOV2l model. Therefore, you need to have a copy
of the trained model ([link](models/yolov2))
with the directory name of `yolov2` inside your working directory for the plugin to work.

The plugin uses the following pad template capabilities for both **src** and **sink**, which you need
to prepare before your DRPAI element (for example, using a `videoconvert` element):
to prepare before the DRPAI element (for example, using a `videoconvert` element):

```
video/x-raw
```

@@ -35,6 +31,7 @@

The plugin also provides you with the following parameters:

| Name | Type | Default | Description |
|-----------------------|---------------------|--------:|----------------------------------------------------------------------|
| **model** | String | --- | The name of the pre-trained model and the directory prefix. |
| **multithread** | Boolean | true | Use a separate thread for object detection. |
| **log-detects** | Boolean | false | Print detected objects in standard output. |
| **show-fps** | Boolean | false | Render frame rates of video and DRPAI at the corner of the video. |
@@ -44,6 +41,55 @@
| **smooth-video-rate** | Float [1 - 1000] | 1 | Number of last video frame rates to average for a more smooth value. |
| **smooth-drpai-rate** | Float [1 - 1000] | 1 | Number of last DRPAI frame rates to average for a more smooth value. |

## AI Model

The plugin is implemented so that it can run different models. Using the `model` parameter,
you can switch between DRP-AI translated models, each located in a directory with the same name
as the model. For example, when using the parameter `model=yolov3` and running the command in
your home directory `/home/user`, the plugin loads the TVM-compiled model located in
`/home/user/yolov3`.

### Post Processor Dynamic Library

Depending on the model you use, even if the input layers are the same, the output layers can be
very different and require additional post-processing to interpret the array of floating-point numbers
as a data structure that is used to render the bounding boxes for each inferred object. Therefore,
the plugin uses a shared library that needs to be included with the model; its path is mentioned in
the `{model}/{model}_process_params.txt` file like this:
```
[dynamic_library]
libpostprocess-yolo.so
.
.
.
```

#### Yolo Post-Processor Library (libpostprocess-yolo.so)

The plugin already includes a post-processor library that supports the `yolov2`, `yolov3`, `tinyyolov2`,
and `tinyyolov3` models. This library leverages the many similarities between these models and
switches its behaviour based on other parameters mentioned in the `{model}/{model}_process_params.txt`
file, such as `[best_class_prediction_algorithm]` and `[anchor_divide_size]`.

The library also loads the list of class labels from `{model}/{model}_labels.txt` and the list of
box anchors from `{model}/{model}_anchors.txt`. This means these three files need to be manually included
alongside the output of the DRPAI TVM translator.

#### Make your own Post-Processor Library

If you want to use a model that does not follow the output layer format of the Yolo models, you can write
your own post-processor library with the exact function signatures that are declared in the
`src/dynamic-post-process/postprocess.h` include file. These functions are:
```
// Executed once at the beginning and at the end
int8_t post_process_initialize(const char model_prefix[], uint32_t output_len);
int8_t post_process_release();
// Executed after DRP-AI output is ready.
// Generates an array of `detection` structures, as defined in `src/box.h`.
int8_t post_process_output(const float output_buf[], struct detection det[], uint8_t *det_len);
```
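As a concrete illustration, here is a minimal skeleton satisfying that contract. The `Box`/`detection` definitions are simplified stand-ins for `src/box.h` (field names assumed), and the flat `[x, y, w, h, confidence]` output layout and 0.5 threshold are invented for the example, not any real model's format:

```cpp
#include <cstdint>

// Simplified stand-ins for the types in src/box.h (field names assumed).
typedef struct { float x, y, w, h; } Box;
typedef struct detection {
    Box bbox;
    uint32_t c;        // class index
    float prob;        // confidence
    const char *name;  // class label
} detection;

static uint32_t g_records = 0;  // number of 5-float records in the output tensor

extern "C" int8_t post_process_initialize(const char model_prefix[], uint32_t output_len) {
    (void)model_prefix;          // a real library would load {model}_labels.txt etc. here
    g_records = output_len / 5;
    return 0;
}

extern "C" int8_t post_process_release() { return 0; }

extern "C" int8_t post_process_output(const float output_buf[], detection det[], uint8_t *det_len) {
    uint8_t n = 0;
    for (uint32_t i = 0; i < g_records; i++) {
        const float *rec = &output_buf[i * 5];
        if (rec[4] < 0.5f) continue;            // confidence threshold (invented)
        det[n].bbox = Box{rec[0], rec[1], rec[2], rec[3]};
        det[n].c = 0;
        det[n].prob = rec[4];
        det[n].name = "object";
        n++;
    }
    *det_len = n;
    return 0;
}
```

A real library would additionally parse the label and anchor files and decode the model's actual tensor layout.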

## How to Build

Configure and build the repository (the sample application and the DRPAI plugin) as follows:
@@ -76,15 +122,15 @@
You can also check if it has been built correctly with:
```
gst-launch-1.0 v4l2src device=/dev/video0 \
! videoconvert \
! drpai show-fps=true log-detects=true smooth-video-rate=30 \
! drpai model=yolov3 show-fps=true log-detects=true smooth-video-rate=30 \
! videoconvert \
! autovideosink
```
If your camera supports the BGR format (such as the Coral camera), you can modify the camera size in
`~/v4l2init.sh` and skip the first `videoconvert` element like this:
```
gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw, width=640, height=480, format=BGR \
! drpai show-fps=true log-detects=true smooth-video-rate=30 \
! drpai model=yolov3 show-fps=true log-detects=true smooth-video-rate=30 \
! videoconvert \
! autovideosink
```
@@ -101,7 +147,7 @@
add the drpai element to the `stream.sh` file like this:
echo "Streaming to ${1} with DRPAI..."
gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw, width=640, height=480, format=BGR \
! drpai show-fps=true log-detects=true smooth-video-rate=30 \
! drpai model=yolov3 show-fps=true log-detects=true smooth-video-rate=30 \
! vspmfilter dmabuf-use=true ! video/x-raw, format=NV12 \
! omxh264enc control-rate=2 target-bitrate=10485760 interval_intraframes=14 periodicty-idr=2 \
! video/x-h264,profile=\(string\)high,level=\(string\)4.2 \
13 changes: 6 additions & 7 deletions bitbake-recipe.bb
@@ -13,16 +13,15 @@
DEPENDS = "gstreamer1.0 gstreamer1.0-plugins-base drpai"
S = "${WORKDIR}/git"
PV = "1.0"

do_install_append() {
install -d ${D}${ROOT_HOME}/yolov2
install -m 0755 ${S}/models/yolov2/* ${D}${ROOT_HOME}/yolov2
}

FILES_${PN} = "${libdir}/gstreamer-1.0/libgstdrpai.so ${ROOT_HOME}/yolov2"
FILES_${PN} = "${libdir}/gstreamer-1.0/libgstdrpai.so"
FILES_${PN}-dev = "${libdir}/gstreamer-1.0/libgstdrpai.la"
FILES_${PN}-staticdev = "${libdir}/gstreamer-1.0/libgstdrpai.a"
FILES_${PN}-dbg = " \
${libdir}/gstreamer-1.0/.debug \
${prefix}/src"

RDEPENDS_${PN} = "gstreamer1.0 gstreamer1.0-plugins-base kernel-module-udmabuf"
PACKAGES += "${PN}-postprocess-yolo"

FILES_${PN}-postprocess-yolo = "${libdir}/libpostprocess-yolo.so"

RDEPENDS_${PN} = "gstreamer1.0 gstreamer1.0-plugins-base kernel-module-udmabuf ${PN}-postprocess-yolo"
41 changes: 7 additions & 34 deletions gst-plugin/meson.build
@@ -1,4 +1,4 @@
plugin_c_args = ['-DHAVE_CONFIG_H', '-DYOLOV2', '-lpthread']
plugin_c_args = ['-DHAVE_CONFIG_H']

cdata = configuration_data()
cdata.set_quoted('PACKAGE_VERSION', gst_version)
@@ -9,48 +9,21 @@
cdata.set_quoted('GST_PACKAGE_NAME', 'GStreamer DRP-AI Plug-in')
cdata.set_quoted('GST_PACKAGE_ORIGIN', 'https://mistysom.com')
configure_file(output : 'config.h', configuration : cdata)

### gstaudio_dep = dependency('gstreamer-audio-1.0',
### fallback: ['gst-plugins-base', 'audio_dep'])

# Plugin 1
plugin_sources = [
'src/gstdrpai.cpp',
'src/drpai.cpp',
'src/image.cpp',
'src/box.cpp'
]
'src/box.cpp',
'src/dynamic-post-process/postprocess.cpp'
]

gstplugindrpai = library('gstdrpai',
plugin_sources,
c_args: plugin_c_args,
cpp_args: plugin_c_args,
dependencies : [gst_dep, pthread_dep],
dependencies : [gst_dep, thread_dep, dl_dep],
install : true,
install_dir : plugins_install_dir,
install_dir : join_paths('/usr/lib64', 'gstreamer-1.0'),
)

# Plugin 2 (audio filter example)
### audiofilter_sources = [
### 'src/gstaudiofilter.c',
### ]

### gstaudiofilterexample = library('gstaudiofilterexample',
### audiofilter_sources,
### c_args: plugin_c_args,
### dependencies : [gst_dep, gstaudio_dep],
### install : true,
### install_dir : plugins_install_dir,
### )

# The TEMPLATE Plugin
### gstTEMPLATE_sources = [
### 'src/gstTEMPLATE.c',
### ]

###gstTEMPLATEexample = library('gstTEMPLATE',
### gstTEMPLATE_sources,
### c_args: plugin_c_args,
### dependencies : [gst_dep, gstbase_dep],
### install : true,
### install_dir : plugins_install_dir,
###)
subdir('src/dynamic-post-process/yolo')
8 changes: 3 additions & 5 deletions gst-plugin/src/box.cpp
@@ -100,14 +100,12 @@
float box_iou(Box a, Box b)
* th_nms = threshold for nms
* Return value : -
******************************************/
void filter_boxes_nms(std::vector<detection> &det, float th_nms)
void filter_boxes_nms(detection det[], uint8_t size, float th_nms)
{
std::size_t count = det.size();

for (std::size_t i = 0; i < count; i++)
for (uint8_t i = 0; i < size; i++)
{
Box a = det[i].bbox;
for (std::size_t j = 0; j < count; j++)
for (uint8_t j = 0; j < size; j++)
{
if (i == j)
{
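The NMS pass changed above can be sketched end to end as follows. The `Box`/`detection` definitions and `overlap`/`box_iou` helpers mirror `src/box.h`/`src/box.cpp` (field names assumed), and suppression by zeroing `prob` is a common NMS convention; the actual suppression step is not visible in this diff:

```cpp
#include <algorithm>
#include <cstdint>

typedef struct { float x, y, w, h; } Box;  // center coordinates and size (assumed)
typedef struct detection { Box bbox; uint32_t c; float prob; const char *name; } detection;

// 1-D extent of the overlap between two intervals centered at x1/x2.
static float overlap(float x1, float w1, float x2, float w2) {
    float l = std::max(x1 - w1 / 2, x2 - w2 / 2);
    float r = std::min(x1 + w1 / 2, x2 + w2 / 2);
    return r - l;
}

// Intersection-over-union of two boxes.
static float box_iou(Box a, Box b) {
    float w = overlap(a.x, a.w, b.x, b.w);
    float h = overlap(a.y, a.h, b.y, b.h);
    if (w < 0 || h < 0) return 0;
    float inter = w * h;
    return inter / (a.w * a.h + b.w * b.h - inter);
}

// For every pair of detections whose IoU exceeds th_nms, keep the
// higher-probability box and zero out the other.
void filter_boxes_nms(detection det[], uint8_t size, float th_nms) {
    for (uint8_t i = 0; i < size; i++)
        for (uint8_t j = 0; j < size; j++) {
            if (i == j) continue;
            if (box_iou(det[i].bbox, det[j].bbox) > th_nms && det[i].prob < det[j].prob)
                det[i].prob = 0;
        }
}
```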
10 changes: 4 additions & 6 deletions gst-plugin/src/box.h
@@ -25,10 +25,7 @@
#ifndef BOX_H
#define BOX_H

#include <vector>
#include <cstdio>
#include <cmath>
#include <cstdlib>
#include <cinttypes>

/*****************************************
* Box : Bounding box coordinates and its size
@@ -44,8 +41,9 @@
typedef struct
typedef struct detection
{
Box bbox;
int32_t c;
uint32_t c;
float prob;
const char* name;
} detection;

/*****************************************
@@ -55,6 +53,6 @@
float box_iou(Box a, Box b);
float overlap(float x1, float w1, float x2, float w2);
float box_intersection(Box a, Box b);
float box_union(Box a, Box b);
void filter_boxes_nms(std::vector<detection> &det, float th_nms);
void filter_boxes_nms(detection det[], uint8_t size, float th_nms);

#endif
