diff --git a/Makefile b/Makefile
index e1b7382e..63ff8bc8 100644
--- a/Makefile
+++ b/Makefile
@@ -39,7 +39,7 @@ else
endif
CXX_VERSION:=c++17
-DSL_VERSION:='L"v0.31.a.alpha"'
+DSL_VERSION:='L"v0.31.b.alpha"'
GLIB_VERSION:=2.0
GSTREAMER_VERSION:=1.0
diff --git a/Release Notes/dsl-releases.md b/Release Notes/dsl-releases.md
index 7a89ec2f..e4a52e63 100644
--- a/Release Notes/dsl-releases.md
+++ b/Release Notes/dsl-releases.md
@@ -2,6 +2,7 @@
| Release | Date |
| ----------------------------------------------------------- | ----------- |
+| [v0.31.b.alpha (patch)](/Release%20Notes/v0.31.b.alpha.md) | 11/02/2024 |
| [v0.31.a.alpha (patch)](/Release%20Notes/v0.31.a.alpha.md) | 09/16/2024 |
| [v0.31.alpha](/Release%20Notes/v0.31.alpha.md) | 09/04/2024 |
| [v0.30.b.alpha (patch)](/Release%20Notes/v0.30.b.alpha.md) | 08/28/2024 |
diff --git a/Release Notes/v0.31.b.alpha.md b/Release Notes/v0.31.b.alpha.md
new file mode 100644
index 00000000..8083df59
--- /dev/null
+++ b/Release Notes/v0.31.b.alpha.md
@@ -0,0 +1,20 @@
+# v0.31.b.alpha (patch) Release Notes
+**Important!**
+* `v0.31.b.alpha` is a **patch** release (patch `b` for the `v0.31.alpha` release).
+
+## Purpose
+The purpose of this patch release is to:
+1. Remove the invalid guard preventing dynamic ["on-the-fly"](https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_on_the_fly_model.html) model engine updates. Support dynamic model engine updates with a new "update-complete" listener callback function.
+2. Update the "Add IoT Message Meta Action" to add the Trigger name to identify the source of the event.
+
+## Issues-bugs closed in this release
+* The GIE and TIS "config-file-path" and "model-engine-file" properties should be writable in any state [#1295](https://github.com/prominenceai/deepstream-services-library/issues/1295)
+* Add IoT Message Meta Action must add the Trigger Name in the NvDsEventMsgMeta to identify the event source. [#1298](https://github.com/prominenceai/deepstream-services-library/issues/1298)
+
+## Issues-enhancements closed in this release
+* Implement dsl_infer_gie_model_update_listener_add/remove services for async model update notifications [#1297](https://github.com/prominenceai/deepstream-services-library/issues/1297)
+* Implement new dynamic "on-the-fly" model-engine update examples using new update-listener callback services. [#1299](https://github.com/prominenceai/deepstream-services-library/issues/1299)
+
+## New Examples in this release
+* [dynamically_update_inference_model.py](/examples/python/dynamically_update_inference_model.py)
+* [dynamically_update_inference_model.cpp](/examples/cpp/dynamically_update_inference_model.cpp)
diff --git a/docs/api-infer.md b/docs/api-infer.md
index 80fc2acc..81296a38 100644
--- a/docs/api-infer.md
+++ b/docs/api-infer.md
@@ -1,5 +1,5 @@
# Primary and Secondary Inference API Reference
-The DeepStream Services Library (DSL) provides services for Nvidia's two Inference Plugins; the [GST Inference Engine (GIE)](https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html#gst-nvinfer) and the [Triton Inference Server (TIS)](https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinferserver.html#gst-nvinferserver).
+The DeepStream Services Library (DSL) provides services for Nvidia's two Inference Plugins: the [GST Inference Engine (GIE)](https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html#gst-nvinfer) and the [Triton Inference Server (TIS)](https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinferserver.html#gst-nvinferserver).
Pipelines can have multiple Primary GIEs or TISs -- linked in succession to operate on the full frame -- with any number of corresponding Secondary GIEs or TISs (only limited by hardware). Pipelines cannot be created with a mix of GIEs and TISs. Pipelines that have secondary GIEs/TISs but no Primary GIE/TIS will fail to Link and Play. Secondary GIEs/TISs can `infer-on` both Primary and Secondary GIEs/TISs creating multiple levels of inference. **IMPORTANT**: the current release supports up to two levels of secondary inference.
@@ -7,10 +7,15 @@ Pipelines can have multiple Primary GIE or TIS -- linked in succession to operat
Primary GIEs and TISs are constructed by calling [`dsl_infer_gie_primary_new`](#dsl_infer_gie_primary_new) and [`dsl_infer_tis_primary_new`](#dsl_infer_tis_primary_new) respectively. Secondary GIEs and TISs are created by calling [`dsl_infer_gie_secondary_new`](#dsl_infer_gie_secondary_new) and [`dsl_infer_tis_secondary_new`](#dsl_infer_tis_secondary_new) respectively. As with all components, Primary and Secondary GIEs/TISs must be uniquely named from all other components created. All GIEs and TISs are deleted by calling [`dsl_component_delete`](api-component.md#dsl_component_delete), [`dsl_component_delete_many`](api-component.md#dsl_component_delete_many), or [`dsl_component_delete_all`](api-component.md#dsl_component_delete_all).
## Inference Configuration
-Both GIEs and TIEs require a Primary or Secondary **Inference Configuration File**. Once created, clients can query both Primary and Secondary GIEs/TIEs for their Config File in-use by calling [`dsl_infer_config_file_get`](#dsl_infer_config_file_get) or change the GIE/TIS's configuration by calling [`dsl_infer_config_file_set`](#dsl_infer_config_file_set).
+Both GIEs and TISs require a Primary or Secondary **Inference Configuration File**. Once created, clients can query both Primary and Secondary GIEs/TISs for the Config File in use by calling [`dsl_infer_config_file_get`](#dsl_infer_config_file_get) or change the GIE/TIS's configuration by calling [`dsl_infer_config_file_set`](#dsl_infer_config_file_set).
## Model Engine Files
-GIEs support the specification of a pre-built **Model Engine File**, or one can allow the Plugin to create the model engine based on the configuration. The file in use can be queried by calling [`dsl_infer_gie_model_engine_file_get`](#dsl_infer_gie_model_engine_file_get) or changed with [`dsl_infer_gie_model_engine_file_set`](#dsl_infer_gie_model_engine_file_set).
+With Primary and Secondary TISs, the model-engine-file must be specified in the inference-configuration-file. The model-engine-file can be updated at runtime by calling [`dsl_infer_config_file_set`](#dsl_infer_config_file_set). Refer to [Dynamic Model Updates](#dynamic-model-updates) below.
+
+With Primary and Secondary GIE, the model-engine-file, can be specified in the inference-configuration-file or by using the constructor's `model_engine_file` parameter. The file in use can be queried by calling [`dsl_infer_gie_model_engine_file_get`](#dsl_infer_gie_model_engine_file_get) or changed with [`dsl_infer_gie_model_engine_file_set`](#dsl_infer_gie_model_engine_file_set).
+
+## Dynamic Model Updates
+Both GIEs and TISs support dynamic model updates. See NVIDIA's [on-the-fly model updates](https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_on_the_fly_model.html) for information on restrictions. With Primary and Secondary GIEs, clients can register a [model-update-listener](#dsl_infer_gie_model_update_listener_cb) callback function to be notified when a new model-engine is successfully loaded. See [`dsl_infer_gie_model_update_listener_add`](#dsl_infer_gie_model_update_listener_add); this callback service is not applicable to TISs.
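+
+The complete update flow, as a minimal Python sketch (the component name and engine filespec are illustrative):
+```Python
+# Listener is called asynchronously once the new engine has been loaded.
+def model_update_listener(name, model_engine_file, client_data):
+    print(name, 'completed loading model', model_engine_file)
+
+retval = dsl_infer_gie_model_update_listener_add('my-pgie',
+    model_update_listener, None)
+
+# Request the update - this service can be called in any Pipeline state.
+retval = dsl_infer_gie_model_engine_file_set('my-pgie',
+    './models/new_model_b8_gpu0_int8.engine')
+```
+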
## Unique Id
**IMPORTANT!** DSL explicitly assigns each GIE or TISs a unique component id overriding the (optional) parameter in the inference config file. The unique component id is derived from the first available unused id starting with 1, meaning the first component will be assigned id 1, the second id 2 and so on. The id will be reused if the inference component is deleted and a new one created. The value assigned to the GIE or TIS can be queried by calling [`dsl_infer_unique_id_get`](#dsl_infer_unique_id_get). All Object metadata structures created by the named GIE/TIE will include a `unique_component_id` field assigned with this id.
@@ -34,20 +39,28 @@ Multiple sink (input) and/or source (output) [Pad-Probe Handlers](/docs/api-pph.
---
## Primary and Secondary Inference API
+**Client Callback Typedefs**
+* [`dsl_infer_gie_model_update_listener_cb`](#dsl_infer_gie_model_update_listener_cb)
+
**Constructors**
* [`dsl_infer_gie_primary_new`](#dsl_infer_gie_primary_new)
* [`dsl_infer_gie_secondary_new`](#dsl_infer_gie_secondary_new)
* [`dsl_infer_tis_primary_new`](#dsl_infer_tis_primary_new)
* [`dsl_infer_tis_secondary_new`](#dsl_infer_tis_secondary_new)
-**Methods**
-* [`dsl_infer_batch_size_get`](#dsl_infer_batch_size_get)
-* [`dsl_infer_batch_size_set`](#dsl_infer_batch_size_set)
-* [`dsl_infer_unique_id_get`](#dsl_infer_unique_id_get)
+
+**Inference Engine (PGIE & SGIE) Methods**
* [`dsl_infer_gie_model_engine_file_get`](#dsl_infer_gie_model_engine_file_get)
* [`dsl_infer_gie_model_engine_file_set`](#dsl_infer_gie_model_engine_file_set)
+* [`dsl_infer_gie_model_update_listener_add`](#dsl_infer_gie_model_update_listener_add)
+* [`dsl_infer_gie_model_update_listener_remove`](#dsl_infer_gie_model_update_listener_remove)
* [`dsl_infer_gie_tensor_meta_settings_get`](#dsl_infer_gie_tensor_meta_settings_get)
* [`dsl_infer_gie_tensor_meta_settings_set`](#dsl_infer_gie_tensor_meta_settings_set)
+
+**Common Methods**
+* [`dsl_infer_batch_size_get`](#dsl_infer_batch_size_get)
+* [`dsl_infer_batch_size_set`](#dsl_infer_batch_size_set)
+* [`dsl_infer_unique_id_get`](#dsl_infer_unique_id_get)
* [`dsl_infer_config_file_get`](#dsl_infer_config_file_get)
* [`dsl_infer_config_file_set`](#dsl_infer_config_file_set)
* [`dsl_infer_interval_get`](#dsl_infer_interval_get)
@@ -75,8 +88,28 @@ The following return codes are used by the Inference API
#define DSL_RESULT_INFER_PAD_TYPE_INVALID 0x0006000B
#define DSL_RESULT_INFER_COMPONENT_IS_NOT_INFER 0x0006000C
#define DSL_RESULT_INFER_OUTPUT_DIR_DOES_NOT_EXIST 0x0006000D
+#define DSL_RESULT_INFER_ID_NOT_FOUND 0x0006000E
+#define DSL_RESULT_INFER_CALLBACK_ADD_FAILED 0x0006000F
+#define DSL_RESULT_INFER_CALLBACK_REMOVE_FAILED 0x00060010
```
+---
+
+## Client Callback Typedefs
+### *dsl_infer_gie_model_update_listener_cb*
+```C++
+typedef void (*dsl_infer_gie_model_update_listener_cb)(const wchar_t* name,
+ const wchar_t* model_engine_file, void* client_data);
+```
+Callback typedef for a client model-update listener. Functions of this type are added to a Primary or Secondary Inference Engine by calling [`dsl_infer_gie_model_update_listener_add`](#dsl_infer_gie_model_update_listener_add). Once added, the function will be called each time a new model-engine has been successfully loaded while the Pipeline is in a state of playing.
+
+**Parameters**
+* `name` - [in] name of the Primary or Secondary Inference Component that loaded the model-engine.
+* `model_engine_file` - [in] absolute path to the new model-engine file now in use.
+* `client_data` - [in] opaque pointer to the client's user data provided to the Inference Component when this function is added.
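+
+**Python Example**
+
+A minimal conforming listener (a sketch; the print format is illustrative):
+```Python
+def model_update_listener(name, model_engine_file, client_data):
+    # name = the Inference Component; model_engine_file = the newly loaded engine
+    print(name, 'completed loading model', model_engine_file)
+```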
+
+
+
## Constructors
**Python Example**
```Python
@@ -91,20 +124,20 @@ sgie_model_file = './models/Secondary_CarColor/resnet18.caffemodel.engine'
# New Primary GIE using the filespecs above, with interval set to 0
retval = dsl_infer_gie_primary_new('pgie', pgie_config_file, pgie_model_file, 0)
if retval != DSL_RETURN_SUCCESS:
- print(retval)
- # handle error condition
+ print(retval)
+ # handle error condition
# New Secondary GIE set to Infer on the Primary GIE defined above
retval = dsl_infer_gie_secondary_new('sgie', sgie_config_file, sgie_model_file, 'pgie', 0)
if retval != DSL_RETURN_SUCCESS:
- print(retval)
- # handle error condition
+ print(retval)
+ # handle error condition
# Add both Primary and Secondary GIEs to an existing Pipeline
retval = dsl_pipeline_component_add_many('pipeline', ['pgie', 'sgie', None])
if retval != DSL_RETURN_SUCCESS:
- print(retval)
- # handle error condition
+ print(retval)
+ # handle error condition
```
@@ -112,7 +145,7 @@ if retval != DSL_RETURN_SUCCESS:
### *dsl_infer_gie_primary_new*
```C++
DslReturnType dsl_infer_gie_primary_new(const wchar_t* name, const wchar_t* infer_config_file,
- const wchar_t* model_engine_file, uint interval);
+ const wchar_t* model_engine_file, uint interval);
```
This constructor creates a uniquely named Primary GST Inference Engine (GIE). Construction will fail if the name is currently in use.
@@ -135,7 +168,7 @@ retval = dsl_infer_gie_primary_new('my-pgie', pgie_config_file, pgie_model_file,
### *dsl_infer_gie_secondary_new*
```C++
DslReturnType dsl_infer_gie_secondary_new(const wchar_t* name, const wchar_t* infer_config_file,
- const wchar_t* model_engine_file, const wchar_t* infer_on_gie, uint interval);
+ const wchar_t* model_engine_file, const wchar_t* infer_on_gie, uint interval);
```
This constructor creates a uniquely named Secondary GST Inference Engine (GIE). Construction will fail if the name is currently in use.
@@ -160,7 +193,7 @@ retval = dsl_infer_gie_seondary_new('my-sgie', sgie_config_file, sgie_model_file
### *dsl_infer_tis_primary_new*
```C++
DslReturnType dsl_infer_tis_primary_new(const wchar_t* name,
- const wchar_t* infer_config_file, uint interval);
+ const wchar_t* infer_config_file, uint interval);
```
This constructor creates a uniquely named Primary Triton Inference Server (TIS). Construction will fail if the name is currently in use.
@@ -182,7 +215,7 @@ retval = dsl_infer_tis_primary_new('my-ptis', ptis_config_file, 0)
### *dsl_infer_tis_secondary_new*
```C++
DslReturnType dsl_infer_tis_secondary_new(const wchar_t* name, const wchar_t* infer_config_file,
- const wchar_t* infer_on_tis, uint interval);
+ const wchar_t* infer_on_tis, uint interval);
```
This constructor creates a uniquely named Secondary Triton Inference Server (TIS). Construction will fail if the name is currently in use.
@@ -205,197 +238,265 @@ retval = dsl_infer_tis_seondary_new('my-stis', stis_config_file, 0, 'my-ptis')
---
-## Methods
-### *dsl_infer_batch_size_get*
+
+## Inference Engine (PGIE & SGIE) Methods
+
+### *dsl_infer_gie_model_engine_file_get*
```C++
-DslReturnType dsl_infer_batch_size_get(const wchar_t* name, uint* size);
+DslReturnType dsl_infer_gie_model_engine_file_get(const wchar_t* name,
+ const wchar_t** model_engine_file);
```
-This service gets the client defined batch-size setting for the named GIE or TIS. If not set (0-default), the Pipeline will set the batch-size to the same as the Streammux batch-size which - by default - is derived from the number of sources when the Pipeline is called to play. The Streammux batch-size can be set (overridden) by calling [`dsl_pipeline_streammux_batch_properties_set`](/docs/api-pipeline.md#dsl_pipeline_streammux_batch_properties_set).
+The service returns the current Model Engine file in use by the named Primary or Secondary GIE.
+This service is not applicable for Primary or Secondary TISs.
**Parameters**
-* `name` - [in] unique name of the Primary or Secondary GIE or TIS to query.
-* `size` - [out] returns the client defined batch size for the named GIE or TIS if set. ). 0 otherwise.
+* `name` - [in] unique name of the Primary or Secondary GIE to query.
+* `model_engine_file` - [out] returns the absolute file path/name for the model engine file in use
**Returns**
`DSL_RESULT_SUCCESS` on success. One of the [Return Values](#return-values) defined above on failure
**Python Example**
```Python
-retval, batch_size = dsl_infer_batch_size_get('my-pgie')
+retval, model_engine_file = dsl_infer_gie_model_engine_file_get('my-sgie')
```
-### *dsl_infer_batch_size_set*
+### *dsl_infer_gie_model_engine_file_set*
```C++
-DslReturnType dsl_infer_batch_size_set(const wchar_t* name, uint size);
+DslReturnType dsl_infer_gie_model_engine_file_set(const wchar_t* name,
+ const wchar_t* model_engine_file);
```
-This service sets the client defined batch-size setting for the named GIE or TIS. If not set (0-default), the Pipeline will set the batch-size to the same as the Streammux batch-size which - by default - is derived from the number of sources when the Pipeline is called to play. The Streammux batch-size can be set (overridden) by calling [`dsl_pipeline_streammux_batch_properties_set`](/docs/api-pipeline.md#dsl_pipeline_streammux_batch_properties_set).
+The service sets the model-engine-file for the named Primary or Secondary GIE to use.
+
+This service is not applicable for Primary or Secondary TISs.
+
+**IMPORTANT!** This service can be called in any Pipeline state. [On-the-fly](https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_on_the_fly_model.html#) model updates are performed asynchronously. Clients can register a [model-update-listener](#dsl_infer_gie_model_update_listener_cb) callback function to be notified when the new model-engine is successfully loaded. See [`dsl_infer_gie_model_update_listener_add`](#dsl_infer_gie_model_update_listener_add).
**Parameters**
-* `name` - [in] unique name of the Primary or Secondary GIE or TIS to query.
-* `size` - [in] the new client defined batch size for the named GIE or TIS to use. Set to 0 to unset.
+* `name` - [in] unique name of the Primary or Secondary GIE to update.
+* `model_engine_file` - [in] relative or absolute file path/name for the model engine file to load
**Returns**
-`DSL_RESULT_SUCCESS` on success. One of the [Return Values](#return-values) defined above on failure
+`DSL_RESULT_SUCCESS` if the GIE exists and the model_engine_file was found. One of the
+[Return Values](#return-values) defined above on failure.
**Python Example**
```Python
-retval = dsl_infer_batch_size_get('my-pgie', 4)
+retval = dsl_infer_gie_model_engine_file_set('my-sgie',
+    './test/models/Secondary_CarColor/resnet18.caffemodel_b16_fp16.engine')
```
-### *dsl_infer_unique_id_get*
+### *dsl_infer_gie_model_update_listener_add*
```C++
-DslReturnType dsl_infer_unique_id_get(const wchar_t* name, uint* id);
+DslReturnType dsl_infer_gie_model_update_listener_add(const wchar_t* name,
+ dsl_infer_gie_model_update_listener_cb listener, void* client_data);
```
-This service queries the named Primary or Secondary GIE or TIS for its unique id derived from its unique name.
+The service adds a [model-update-listener](#dsl_infer_gie_model_update_listener_cb) callback function to a named Primary or Secondary GIE. The callback will be called after a new model-engine-file has been successfully loaded while the Pipeline is in a state of playing.
+
+This service is not applicable for Primary or Secondary TISs.
**Parameters**
-* `name` - [in] unique name of the Primary or Secondary GIE or TIS to query.
-* `id` - [out] returns the unique id for the named GIE or TIS
+* `name` - [in] unique name of the Primary or Secondary GIE to update.
+* `listener` - [in] client callback function to add.
+* `client_data` - [in] opaque pointer to client data returned to the listener callback function.
**Returns**
-`DSL_RESULT_SUCCESS` on success. One of the [Return Values](#return-values) defined above on failure
+`DSL_RESULT_SUCCESS` on success. One of the [Return Values](#return-values) defined above on failure.
**Python Example**
```Python
-retval, id = dsl_infer_unique_id_get('my-pgie')
+##
+# Function to be called when a model update has been completed
+#
+def model_update_listener(name, model_engine_file, client_data):
+ print(name, "completed loading model", model_engine_file)
+
+retval = dsl_infer_gie_model_update_listener_add('my-pgie',
+ model_update_listener, None)
```
-### *dsl_infer_config_file_get*
+### *dsl_infer_gie_model_update_listener_remove*
```C++
-DslReturnType dsl_infer_config_file_get(const wchar_t* name,
- const wchar_t** infer_config_file);
+DslReturnType dsl_infer_gie_model_update_listener_remove(const wchar_t* name,
+ dsl_infer_gie_model_update_listener_cb listener);
```
+This service removes a [model-update-listener](#dsl_infer_gie_model_update_listener_cb) callback function from a named Primary or Secondary GIE.
-This service returns the current Inference Config file in use by the named Primary or Secondary GIE or TIS.
+This service is not applicable for Primary or Secondary TISs.
**Parameters**
-* `name` - [in] unique name of the Primary or Secondary GIE or TIS to query.
-* `infer_config_file` - [out] returns the absolute file path/name for the infer config file in use
+* `name` - [in] unique name of the Primary or Secondary GIE to update.
+* `listener` - [in] client callback function to remove.
**Returns**
-`DSL_RESULT_SUCCESS` if successful. One of the [Return Values](#return-values) defined above on failure.
+`DSL_RESULT_SUCCESS` on success. One of the [Return Values](#return-values) defined above on failure.
**Python Example**
```Python
-retval, infer_config_file = dsl_infer_config_file_get('my-sgie)
+retval = dsl_infer_gie_model_update_listener_remove('my-pgie',
+ model_update_listener)
```
-### *dsl_infer_config_file_set*
+### *dsl_infer_gie_tensor_meta_settings_get*
```C++
-DslReturnType dsl_infer_config_file_set(const wchar_t* name,
- const wchar_t* infer_config_file);
+DslReturnType dsl_infer_gie_tensor_meta_settings_get(const wchar_t* name,
+ boolean* input_enabled, boolean* output_enabled);
```
-
-This service set the Inference Config file to use by the named Primary or Secondary GIE or TIS.
+The service gets the current input and output tensor-meta settings in use by the named Primary or Secondary GIE.
**Parameters**
-* `name` - unique name of the Primary or Secondary GIE of TIS to update.
-* `infer_config_file` - [in] relative or absolute file path/name for the infer config file to load
+* `name` - [in] unique name of the Primary or Secondary GIE to query.
+* `input_enabled` - [out] if true, the GIE will preprocess input tensors attached as metadata instead of preprocessing inside the plugin, false otherwise.
+* `output_enabled` - [out] if true, the GIE will attach tensor outputs as metadata on the GstBuffer.
**Returns**
-`DSL_RESULT_SUCCESS` if successful. One of the [Return Values](#return-values) defined above on failure.
+`DSL_RESULT_SUCCESS` on success. One of the [Return Values](#return-values) defined above on failure
**Python Example**
```Python
-retval, dsl_infer_config_file_set('my-pgie', './configs/config_infer_primary_nano.txt')
+retval, input_enabled, output_enabled = dsl_infer_gie_tensor_meta_settings_get('my-pgie')
```
-
-### *dsl_infer_gie_model_engine_file_get*
+### *dsl_infer_gie_tensor_meta_settings_set*
```C++
-DslReturnType dsl_infer_gie_model_engine_file_get(const wchar_t* name,
- const wchar_t** model_engine_file);
+DslReturnType dsl_infer_gie_tensor_meta_settings_set(const wchar_t* name,
+ boolean input_enabled, boolean output_enabled);
```
-The service returns the current Model Engine file in use by the named Primary or Secondary GIE.
-This serice is not applicable for Primary or Secondary TISs
+The service sets the input and output tensor-meta settings for the named Primary or Secondary GIE.
**Parameters**
* `name` - [in] unique name of the Primary or Secondary GIE to update.
-* `model_engine_file` - [out] returns the absolute file path/name for the model engine file in use
+* `input_enabled` - [in] set to true to have the GIE preprocess input tensors attached as metadata instead of preprocessing inside the plugin, false otherwise.
+* `output_enabled` - [in] set to true to have the GIE attach tensor outputs as metadata on the GstBuffer.
**Returns**
`DSL_RESULT_SUCCESS` on success. One of the [Return Values](#return-values) defined above on failure
**Python Example**
```Python
-retval, model_engine_file = dsl_infer_gie_model_engine_file_get('my-sgie')
+retval = dsl_infer_gie_tensor_meta_settings_set('my-pgie', True, False)
```
-### *dsl_infer_gie_model_engine_file_set*
+---
+
+## Common Methods
+
+### *dsl_infer_batch_size_get*
```C++
-DslReturnType dsl_infer_gie_model_engine_file_set(const wchar_t* name,
- const wchar_t* model_engine_file);
+DslReturnType dsl_infer_batch_size_get(const wchar_t* name, uint* size);
```
-The service sets the Model Engine file to use for the named Primary or Secondary GIE.
-This service is not applicable for Primary or Secondary TISs
+This service gets the client defined batch-size setting for the named GIE or TIS. If not set (0-default), the Pipeline will set the batch-size to the same as the Streammux batch-size which - by default - is derived from the number of sources when the Pipeline is called to play. The Streammux batch-size can be set (overridden) by calling [`dsl_pipeline_streammux_batch_properties_set`](/docs/api-pipeline.md#dsl_pipeline_streammux_batch_properties_set).
**Parameters**
-* `name` - unique name of the Primary or Secondary GIE to update.
-* `model_engine_file` - [in] relative or absolute file path/name for the model engine file to load
+* `name` - [in] unique name of the Primary or Secondary GIE or TIS to query.
+* `size` - [out] returns the client defined batch size for the named GIE or TIS if set, 0 otherwise.
**Returns**
-`DSL_RESULT_SUCCESS` if the GIE exists, and the model_engine_file was found, one of the
-[Return Values](#return-values) defined above on failure
+`DSL_RESULT_SUCCESS` on success. One of the [Return Values](#return-values) defined above on failure
+
**Python Example**
```Python
-retval = dsl_infer_gie_model_engine_file_set('my-sgie',
- './test/models/Secondary_CarColor/resnet18.caffemodel_b16_fp16.engine"')
+retval, batch_size = dsl_infer_batch_size_get('my-pgie')
```
-### *dsl_infer_gie_tensor_meta_settings_get*
+### *dsl_infer_batch_size_set*
```C++
-DslReturnType dsl_infer_gie_tensor_meta_settings_get(const wchar_t* name,
- boolean* input_enabled, boolean* output_enabled);
+DslReturnType dsl_infer_batch_size_set(const wchar_t* name, uint size);
```
-The service gets the current input and output tensor-meta settings in use by the named Primary or Secondary GIE.
+This service sets the client defined batch-size setting for the named GIE or TIS. If not set (0-default), the Pipeline will set the batch-size to the same as the Streammux batch-size which - by default - is derived from the number of sources when the Pipeline is called to play. The Streammux batch-size can be set (overridden) by calling [`dsl_pipeline_streammux_batch_properties_set`](/docs/api-pipeline.md#dsl_pipeline_streammux_batch_properties_set).
**Parameters**
-* `name` - unique name of the Primary or Secondary GIE to query.
-* `input_enabled` - [out] if true, the GIE will preprocess input tensors attached as metadata instead of preprocessing inside the plugin, false otherwise.
-* `output_enable` - [out] if true, the GIE will attach tensor outputs as metadata on the GstBuffer.
+* `name` - [in] unique name of the Primary or Secondary GIE or TIS to query.
+* `size` - [in] the new client defined batch size for the named GIE or TIS to use. Set to 0 to unset.
**Returns**
`DSL_RESULT_SUCCESS` on success. One of the [Return Values](#return-values) defined above on failure
**Python Example**
```Python
-retval, input_enabled, output_enabled = dsl_infer_gie_tensor_meta_settings_get('my-pgie')
+retval = dsl_infer_batch_size_set('my-pgie', 4)
```
-### *dsl_infer_gie_tensor_meta_settings_set*
+
+### *dsl_infer_unique_id_get*
```C++
-DslReturnType dsl_infer_gie_tensor_meta_settings_set(const wchar_t* name,
- boolean input_enabled, boolean output_enabled);
+DslReturnType dsl_infer_unique_id_get(const wchar_t* name, uint* id);
```
-The service sets the input amd output tensor-meta settings for the named Primary or Secondary GIE.
+This service queries the named Primary or Secondary GIE or TIS for its unique id derived from its unique name.
+
**Parameters**
-* `name` - unique name of the Primary or Secondary GIE to query.
-* `input_enabled` - [in] set to true to have the GIE preprocess input tensors attached as metadata instead of preprocessing inside the plugin, false otherwise.
-* `output_enable` - [in] set to true to have the GIE attach tensor outputs as metadata on the GstBuffer.
+* `name` - [in] unique name of the Primary or Secondary GIE or TIS to query.
+* `id` - [out] returns the unique id for the named GIE or TIS
**Returns**
`DSL_RESULT_SUCCESS` on success. One of the [Return Values](#return-values) defined above on failure
**Python Example**
```Python
-retval = dsl_infer_gie_tensor_meta_settings_get('my-pgie', True, False)
+retval, id = dsl_infer_unique_id_get('my-pgie')
+```
+
+
+
+### *dsl_infer_config_file_get*
+```C++
+DslReturnType dsl_infer_config_file_get(const wchar_t* name,
+ const wchar_t** infer_config_file);
+```
+
+This service returns the current Inference Config file in use by the named Primary or Secondary GIE or TIS.
+
+**Parameters**
+* `name` - [in] unique name of the Primary or Secondary GIE or TIS to query.
+* `infer_config_file` - [out] returns the absolute file path/name for the infer config file in use
+
+**Returns**
+`DSL_RESULT_SUCCESS` if successful. One of the [Return Values](#return-values) defined above on failure.
+
+**Python Example**
+```Python
+retval, infer_config_file = dsl_infer_config_file_get('my-sgie')
+```
+
+
+
+### *dsl_infer_config_file_set*
+```C++
+DslReturnType dsl_infer_config_file_set(const wchar_t* name,
+ const wchar_t* infer_config_file);
+```
+
+This service sets the Inference Config file for the named Primary or Secondary GIE or TIS to use.
+
+**IMPORTANT!** This service can be called in any Pipeline state. [On-the-fly](https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_on_the_fly_model.html#) model updates are performed asynchronously. With Primary and Secondary GIEs, clients can register a [model-update-listener](#dsl_infer_gie_model_update_listener_cb) callback function to be notified when a new model-engine is successfully loaded. See [`dsl_infer_gie_model_update_listener_add`](#dsl_infer_gie_model_update_listener_add); this callback service is not applicable to TISs.
+
+**Parameters**
+* `name` - [in] unique name of the Primary or Secondary GIE or TIS to update.
+* `infer_config_file` - [in] relative or absolute file path/name for the infer config file to load
+
+**Returns**
+`DSL_RESULT_SUCCESS` if successful. One of the [Return Values](#return-values) defined above on failure.
+
+**Python Example**
+```Python
+retval = dsl_infer_config_file_set('my-pgie', './configs/config_infer_primary_nano.txt')
```
diff --git a/docs/api-reference-list.md b/docs/api-reference-list.md
index e0945e8e..17dc01dc 100644
--- a/docs/api-reference-list.md
+++ b/docs/api-reference-list.md
@@ -30,6 +30,7 @@
* [`dsl_sink_window_key_event_handler_cb`](/docs/api-sink.md#dsl_sink_window_key_event_handler_cb)
* [`dsl_sink_window_button_event_handler_cb`](/docs/api-sink.md#dsl_sink_window_button_event_handler_cb)
* [`dsl_sink_window_delete_event_handler_cb`](/docs/api-sink.md#dsl_sink_window_delete_event_handler_cb)
+* [`dsl_infer_gie_model_update_listener_cb`](/docs/api-infer.md#dsl_infer_gie_model_update_listener_cb)
## DSL Services API:
* [`dsl_main_loop_run`](/docs/overview.md#main-loop-context)
@@ -308,13 +309,15 @@
* [`dsl_infer_gie_secondary_new`](/docs/api-infer.md#dsl_infer_gie_secondary_new)
* [`dsl_infer_tis_primary_new`](/docs/api-infer.md#dsl_infer_tis_primary_new)
* [`dsl_infer_tis_secondary_new`](/docs/api-infer.md#dsl_infer_tis_secondary_new)
-* [`dsl_infer_batch_size_get`](/docs/api-infer.md#dsl_infer_batch_size_get)
-* [`dsl_infer_batch_size_set`](/docs/api-infer.md#dsl_infer_batch_size_set)
-* [`dsl_infer_unique_id_get`](/docs/api-infer.md#dsl_infer_unique_id_get)
* [`dsl_infer_gie_model_engine_file_get`](/docs/api-infer.md#dsl_infer_gie_model_engine_file_get)
* [`dsl_infer_gie_model_engine_file_set`](/docs/api-infer.md#dsl_infer_gie_model_engine_file_set)
+* [`dsl_infer_gie_model_update_listener_add`](/docs/api-infer.md#dsl_infer_gie_model_update_listener_add)
+* [`dsl_infer_gie_model_update_listener_remove`](/docs/api-infer.md#dsl_infer_gie_model_update_listener_remove)
* [`dsl_infer_gie_tensor_meta_settings_get`](/docs/api-infer.md#dsl_infer_gie_tensor_meta_settings_get)
* [`dsl_infer_gie_tensor_meta_settings_set`](/docs/api-infer.md#dsl_infer_gie_tensor_meta_settings_set)
+* [`dsl_infer_batch_size_get`](/docs/api-infer.md#dsl_infer_batch_size_get)
+* [`dsl_infer_batch_size_set`](/docs/api-infer.md#dsl_infer_batch_size_set)
+* [`dsl_infer_unique_id_get`](/docs/api-infer.md#dsl_infer_unique_id_get)
* [`dsl_infer_config_file_get`](/docs/api-infer.md#dsl_infer_config_file_get)
* [`dsl_infer_config_file_set`](/docs/api-infer.md#dsl_infer_config_file_set)
* [`dsl_infer_interval_get`](/docs/api-infer.md#dsl_infer_interval_get)
diff --git a/docs/examples-basic-pipelines.md b/docs/examples-basic-pipelines.md
index 36f9962b..782f29e0 100644
--- a/docs/examples-basic-pipelines.md
+++ b/docs/examples-basic-pipelines.md
@@ -4,6 +4,7 @@ This page documents the following "Basic Inference Pipelines" consiting of
* [File Source, Primary GIE, IOU Tracker, OSD, EGL Window Sink, and File Sink](#file-source-primary-gie-iou-tracker-osd-egl-window-sink-and-file-sink)
* [File Source, Primary GIE, IOU Tracker, OSD, EGL Window Sink, and RTSP Sink](#file-source-primary-gie-iou-tracker-osd-egl-window-sink-and-rtsp-sink)
* [File Source, Primary GIE, IOU Tracker, OSD, EGL Window Sink, and V4L2 Sink](#file-source-primary-gie-iou-tracker-osd-egl-window-sink-and-v4l2-sink)
+* [File Source, Primary GIE, DCF Tracker, 2 Secondary GIEs, OSD, EGL Window Sink](#file-source-primary-gie-dcf-tracker-2-secondary-gies-osd-egl-window-sink)
* [RTSP Source, Primary GIE, IOU Tracker, OSD, EGL Window Sink](#rtsp-source-primary-gie-iou-tracker-osd-egl-window-sink)
* [HTTP Source, Primary GIE, IOU Tracker, OSD, EGL Window Sink](#http-source-primary-gie-iou-tracker-osd-egl-window-sink)
* [File Source, Preprocessor, Primary GIE, IOU Tracker, OSD, EGL Window Sink](#file-source-preprocessor-primary-gie-iou-tracker-osd-egl-window-sink)
@@ -139,6 +140,33 @@ This page documents the following "Basic Inference Pipelines" consiting of
---
+### File Source, Primary GIE, DCF Tracker, 2 Secondary GIEs, OSD, EGL Window Sink
+* [1file_pgie_dcf_tracker_2sgie_window.py](/examples/python/1file_pgie_dcf_tracker_2sgie_window.py)
+* [1file_pgie_dcf_tracker_2sgie_window.cpp](/examples/cpp/1file_pgie_dcf_tracker_2sgie_window.cpp)
+
+```python
+#
+# This simple example demonstrates how to create a set of Pipeline components,
+# specifically:
+# - File Source
+# - Primary GST Inference Engine (PGIE)
+# - DCF Tracker
+# - 2 Secondary GST Inference Engines (SGIEs)
+# - On-Screen Display (OSD)
+# - Window Sink
+# ...and how to add them to a new Pipeline and play
+#
+# The example registers handler callback functions with the Pipeline for:
+# - key-release events
+# - delete-window events
+# - end-of-stream EOS events
+# - Pipeline change-of-state events
+#
+```
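+
+A minimal sketch of the key construction calls (component names and config filespecs are illustrative, and the remaining components are assumed to have been created beforehand):
+```python
+# Illustrative config filespecs; the engine files are left for the
+# config files to specify, so model_engine_file is passed as None.
+pgie_config = 'config_infer_primary_trafficcamnet.txt'
+sgie_config_1 = 'config_infer_secondary_vehicletypes.txt'
+sgie_config_2 = 'config_infer_secondary_vehiclemake.txt'
+
+# New Primary GIE and two Secondary GIEs, both set to infer on the Primary.
+retval = dsl_infer_gie_primary_new('primary-gie', pgie_config, None, 0)
+retval = dsl_infer_gie_secondary_new('secondary-gie-1', sgie_config_1,
+    None, 'primary-gie', 0)
+retval = dsl_infer_gie_secondary_new('secondary-gie-2', sgie_config_2,
+    None, 'primary-gie', 0)
+
+# Add all components to a new Pipeline and play.
+retval = dsl_pipeline_new('pipeline')
+retval = dsl_pipeline_component_add_many('pipeline', ['file-source',
+    'primary-gie', 'dcf-tracker', 'secondary-gie-1', 'secondary-gie-2',
+    'on-screen-display', 'window-sink', None])
+retval = dsl_pipeline_play('pipeline')
+```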
+
+
+---
+
### RTSP Source, Primary GIE, IOU Tracker, OSD, EGL Window Sink
* [`1rtsp_pgie_dcf_tracker_osd_window.py`](/examples/python/1rtsp_pgie_dcf_tracker_osd_window.py)
diff --git a/docs/examples-dynamic-pipelines.md b/docs/examples-dynamic-pipelines.md
index 01f7af13..c34acb7b 100644
--- a/docs/examples-dynamic-pipelines.md
+++ b/docs/examples-dynamic-pipelines.md
@@ -1,5 +1,6 @@
# Dynamic Pipelines
This page documents the following examples:
+* [Dynamically Update Inference Model Engine File](#dynamically-update-inference-model-engine-file)
* [Dynamically Add/Remove Sources to/from a Pipeline with a Tiler and Window Sink](#dynamically-addremove-sources-tofrom-a-pipeline-with-a-tiler-and-window-sink)
* [Dynamically Move a Branch from One Demuxer Stream to Another](#dynamically-move-a-branch-from-one-demuxer-stream-to-another)
@@ -7,6 +8,54 @@ This page documents the following examples:
---
+### Dynamically Update Inference Model Engine File
+* [dynamically_update_inference_model.py](/examples/python/dynamically_update_inference_model.py)
+* [dynamically_update_inference_model.cpp](/examples/cpp/dynamically_update_inference_model.cpp)
+
+```python
+#
+# This simple example demonstrates how to create a set of Pipeline components,
+# specifically:
+# - File Source
+# - Primary GST Inference Engine (PGIE)
+# - DCF Tracker
+# - Secondary GST Inference Engine (SGIE)
+# - On-Screen Display (OSD)
+# - Window Sink
+# ...and how to dynamically update an Inference Engine's config and model files.
+#
+# The key-release handler function will dynamically update the Secondary
+# Inference Engine's config-file based on the key value as follows.
+#
+# "1" = '../../test/config/config_infer_secondary_vehicletypes.yml'
+# "2" = '../../test/config/config_infer_secondary_vehiclemake.yml'
+#
+# The new model engine is loaded by the SGIE asynchronously. A client listener
+# (callback) function is added to the SGIE to be notified when the loading is
+# complete. See the "model_update_listener" function defined below.
+#
+# IMPORTANT! It is best to allow the config file to specify the model engine
+# file when updating both the config and model. Set the model_engine_file
+# parameter to None when creating the Inference component.
+#
+# retval = dsl_infer_gie_secondary_new(L"secondary-gie",
+# secondary_infer_config_file_1.c_str(), NULL, L"primary-gie", 0);
+#
+# The Config files used are located under /deepstream-services-library/test/config
+# The files reference models created with the file
+# /deepstream-services-library/make_trafficcamnet_engine_files.py
+#
+# The example registers handler callback functions with the Pipeline for:
+# - key-release events
+# - delete-window events
+# - end-of-stream EOS events
+# - Pipeline change-of-state events
+#
+```
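+
+A condensed sketch of the update logic described above (the component name 'secondary-gie' and the config filespecs are taken from the comments; error checking and handler registration are omitted):
+```python
+def model_update_listener(name, model_engine_file, client_data):
+    # Called asynchronously by the SGIE once the new engine is loaded.
+    print(name, 'completed loading model', model_engine_file)
+
+def xwindow_key_event_handler(key_string, client_data):
+    if key_string == '1':
+        dsl_infer_config_file_set('secondary-gie',
+            '../../test/config/config_infer_secondary_vehicletypes.yml')
+    elif key_string == '2':
+        dsl_infer_config_file_set('secondary-gie',
+            '../../test/config/config_infer_secondary_vehiclemake.yml')
+
+retval = dsl_infer_gie_model_update_listener_add('secondary-gie',
+    model_update_listener, None)
+```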
+
+
+---
+
### Dynamically Add/Remove Sources to/from a Pipeline with a Tiler and Window Sink
* [`dynamically_add_remove_sources_with_tiler_window_sink.py`](/examples/python/dynamically_add_remove_sources_with_tiler_window_sink.py)
diff --git a/dsl.py b/dsl.py
index ea35e177..4a1c8dc1 100644
--- a/dsl.py
+++ b/dsl.py
@@ -562,6 +562,10 @@ class dsl_threshold_value(Structure):
DSL_COMPONENT_QUEUE_UNDERRUN_LISTENER = \
CFUNCTYPE(None, c_wchar_p, c_void_p)
+#dsl_infer_gie_model_update_listener_cb
+DSL_INFER_GIE_MODEL_UPDATE_LISTENER = \
+ CFUNCTYPE(None, c_wchar_p, c_wchar_p, c_void_p)
+
##
## TODO: CTYPES callback management needs to be completed before any of
## the callback remove wrapper functions will work correctly.
@@ -5124,6 +5128,35 @@ def dsl_infer_raw_output_enabled_set(name, enabled, path):
result = _dsl.dsl_infer_raw_output_enabled_set(name, enabled, path)
return int(result)
+##
+## dsl_infer_gie_model_update_listener_add()
+##
+_dsl.dsl_infer_gie_model_update_listener_add.argtypes = [c_wchar_p,
+ DSL_INFER_GIE_MODEL_UPDATE_LISTENER, c_void_p]
+_dsl.dsl_infer_gie_model_update_listener_add.restype = c_uint
+def dsl_infer_gie_model_update_listener_add(name, listener, client_data):
+ global _dsl
+ c_listener = DSL_INFER_GIE_MODEL_UPDATE_LISTENER(listener)
+ callbacks.append(c_listener)
+ c_client_data=cast(pointer(py_object(client_data)), c_void_p)
+ clientdata.append(c_client_data)
+ result = _dsl.dsl_infer_gie_model_update_listener_add(name,
+ c_listener, c_client_data)
+ return int(result)
+
+##
+## dsl_infer_gie_model_update_listener_remove()
+##
+_dsl.dsl_infer_gie_model_update_listener_remove.argtypes = [c_wchar_p,
+ DSL_INFER_GIE_MODEL_UPDATE_LISTENER]
+_dsl.dsl_infer_gie_model_update_listener_remove.restype = c_uint
+def dsl_infer_gie_model_update_listener_remove(name, listener):
+ global _dsl
+ c_listener = DSL_INFER_GIE_MODEL_UPDATE_LISTENER(listener)
+ result = _dsl.dsl_infer_gie_model_update_listener_remove(name, c_listener)
+ return int(result)
+
+
##
## dsl_tracker_new()
##
diff --git a/examples/cpp/1file_pgie_dcf_tracker_2sgie_window.cpp b/examples/cpp/1file_pgie_dcf_tracker_2sgie_window.cpp
new file mode 100644
index 00000000..c508ebc4
--- /dev/null
+++ b/examples/cpp/1file_pgie_dcf_tracker_2sgie_window.cpp
@@ -0,0 +1,220 @@
+/*
+The MIT License
+
+Copyright (c) 2024, Prominence AI, Inc.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
+*/
+
+/*################################################################################
+#
+# This simple example demonstrates how to create a set of Pipeline components,
+# specifically:
+# - File Source
+# - Primary GST Inference Engine (PGIE)
+# - DCF Tracker
+# - 2 Secondary GST Inference Engines (SGIEs)
+# - On-Screen Display (OSD)
+# - Window Sink
+# ...and how to add them to a new Pipeline and play
+#
+# The example registers handler callback functions with the Pipeline for:
+# - key-release events
+# - delete-window events
+# - end-of-stream EOS events
+# - Pipeline change-of-state events
+#
+##############################################################################*/
+
+#include <iostream>
+#include <string>
+#include <cctype>
+#include <glib.h>
+#include "DslApi.h"
+
+// URI for the File Source
+std::wstring uri_h265(
+ L"/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4");
+
+// Config and model-engine files
+std::wstring primary_infer_config_file(
+ L"/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-preprocess-test/config_infer.txt");
+std::wstring primary_model_engine_file(
+ L"/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b8_gpu0_int8.engine");
+
+std::wstring secondary_infer_config_file_1(
+ L"/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_secondary_vehicletypes.txt");
+std::wstring secondary_model_engine_file_1(
+ L"/opt/nvidia/deepstream/deepstream/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet.etlt_b8_gpu0_int8.engine");
+
+std::wstring secondary_infer_config_file_2(
+ L"/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_secondary_vehiclemake.txt");
+
+std::wstring secondary_model_engine_file_2(
+ L"/opt/nvidia/deepstream/deepstream/samples/models/Secondary_VehicleMake/resnet18_vehiclemakenet.etlt_b8_gpu0_int8.engine");
+
+// Filespec for the NvDCF Tracker config file
+std::wstring dcf_tracker_config_file(
+ L"/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDCF_max_perf.yml");
+
+// EGL Window Sink Dimensions
+uint WINDOW_WIDTH = DSL_1K_HD_WIDTH / 2;
+uint WINDOW_HEIGHT = DSL_1K_HD_HEIGHT / 2;
+
+//
+// Function to be called on XWindow KeyRelease event
+//
+void xwindow_key_event_handler(const wchar_t* in_key, void* client_data)
+{
+ std::wstring wkey(in_key);
+ std::string key(wkey.begin(), wkey.end());
+ std::cout << "key released = " << key << std::endl;
+
+ key = std::toupper(key[0]);
+ if(key == "P"){
+ dsl_pipeline_pause(L"pipeline");
+ } else if (key == "R"){
+ dsl_pipeline_play(L"pipeline");
+ } else if (key == "Q" or key == "" or key == ""){
+ std::cout << "Main Loop Quit" << std::endl;
+ dsl_pipeline_stop(L"pipeline");
+ dsl_main_loop_quit();
+ }
+}
+
+//
+// Function to be called on XWindow Delete event
+//
+void xwindow_delete_event_handler(void* client_data)
+{
+ std::cout<<"delete window event"<
+#include
+#include
+#include
+#include
+#include "DslApi.h"
+
+// URI for the File Source
+std::wstring uri_h265(
+ L"/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4");
+
+// Config and model-engine files
+std::wstring primary_infer_config_file(
+ L"/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-preprocess-test/config_infer.txt");
+std::wstring primary_model_engine_file(
+ L"/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b8_gpu0_int8.engine");
+
+// Secondary Inference Engine config files.
+std::wstring secondary_infer_config_file_1(
+ L"../../test/config/config_infer_secondary_vehicletypes.yml");
+std::wstring secondary_infer_config_file_2(
+ L"../../test/config/config_infer_secondary_vehiclemake.yml");
+
+// flag to indicate if a model engine file update is in progress.
+bool model_updating = false;
+
+// Filespec for the NvDCF Tracker config file
+std::wstring dcf_tracker_config_file(
+ L"/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDCF_max_perf.yml");
+
+// EGL Window Sink Dimensions
+uint WINDOW_WIDTH = DSL_1K_HD_WIDTH / 2;
+uint WINDOW_HEIGHT = DSL_1K_HD_HEIGHT / 2;
+
+//
+// Function to be called on XWindow KeyRelease event
+//
+void xwindow_key_event_handler(const wchar_t* in_key, void* client_data)
+{
+ std::wstring wkey(in_key);
+ std::string key(wkey.begin(), wkey.end());
+ std::cout << "key released = " << key << std::endl;
+
+ key = std::toupper(key[0]);
+
+ if (key == "1" and model_updating == false)
+ {
+ model_updating = true;
+ std::wcout << L"Result of start engine update = "
+ << dsl_return_value_to_string(
+ dsl_infer_config_file_set(L"secondary-gie",
+ secondary_infer_config_file_1.c_str())) << std::endl;
+ }
+ else if (key == "2" and model_updating == false)
+ {
+ model_updating = true;
+ std::wcout << L"Result of start engine update = "
+ << dsl_return_value_to_string(
+ dsl_infer_config_file_set(L"secondary-gie",
+ secondary_infer_config_file_2.c_str())) << std::endl;
+ }
+ else if(key == "P")
+ {
+ dsl_pipeline_pause(L"pipeline");
+ }
+ else if (key == "R")
+ {
+ dsl_pipeline_play(L"pipeline");
+ }
+ else if (key == "Q" or key == "" or key == "")
+ {
+ std::cout << "Main Loop Quit" << std::endl;
+ dsl_pipeline_stop(L"pipeline");
+ dsl_main_loop_quit();
+ }
+}
+
+//
+// Function to be called when a model update has been completed
+//
+void model_update_listener(const wchar_t* name,
+ const wchar_t* model_engine_file, void* client_data)
+{
+ std::wcout << name << " completed loading model "
+ << model_engine_file << std::endl;
+
+ model_updating = false;
+}
+
+//
+// Function to be called on XWindow Delete event
+//
+void xwindow_delete_event_handler(void* client_data)
+{
+ std::cout<<"delete window event"<
+ InferGieModelUpdateListenerAdd(cstrName.c_str(), listener, client_data);
+}
+
+DslReturnType dsl_infer_gie_model_update_listener_remove(const wchar_t* name,
+ dsl_infer_gie_model_update_listener_cb listener)
+{
+ RETURN_IF_PARAM_IS_NULL(name);
+ RETURN_IF_PARAM_IS_NULL(listener);
+
+ std::wstring wstrName(name);
+ std::string cstrName(wstrName.begin(), wstrName.end());
+
+ return DSL::Services::GetServices()->
+ InferGieModelUpdateListenerRemove(cstrName.c_str(), listener);
+}
+
DslReturnType dsl_tracker_new(const wchar_t* name,
const wchar_t* config_file, uint width, uint height)
{
diff --git a/src/DslApi.h b/src/DslApi.h
index 8c4b3a37..606abdef 100644
--- a/src/DslApi.h
+++ b/src/DslApi.h
@@ -211,6 +211,8 @@ THE SOFTWARE.
#define DSL_RESULT_INFER_COMPONENT_IS_NOT_INFER 0x0006000C
#define DSL_RESULT_INFER_OUTPUT_DIR_DOES_NOT_EXIST 0x0006000D
#define DSL_RESULT_INFER_ID_NOT_FOUND 0x0006000E
+#define DSL_RESULT_INFER_CALLBACK_ADD_FAILED 0x0006000F
+#define DSL_RESULT_INFER_CALLBACK_REMOVE_FAILED 0x00060010
/**
* Demuxer API Return Values
@@ -1932,6 +1934,16 @@ typedef void (*dsl_component_queue_overrun_listener_cb)(const wchar_t* name,
typedef void (*dsl_component_queue_underrun_listener_cb)(const wchar_t* name,
void* client_data);
+/**
+ * @brief Callback typedef for Primary or Secondary GIE to notify clients when a
+ * model engine has been successfully updated.
+ * @param[in] name name of the Primary or Secondary GIE calling this function.
+ * @param[in] model_engine_file path to the new model engine file in use.
+ * @param[in] client_data opaque pointer to client's user data.
+ */
+typedef void (*dsl_infer_gie_model_update_listener_cb)(const wchar_t* name,
+ const wchar_t* model_engine_file, void* client_data);
+
// -----------------------------------------------------------------------------------
// Start of DSL Services
@@ -6363,8 +6375,8 @@ DslReturnType dsl_infer_pph_remove(const wchar_t* name,
const wchar_t* handler, uint pad);
/**
- * @brief Gets the current Infer Config File in use by the named Primary or Secondary GIE
- * @param[in] name unique name of Primary or Secondary GIE to query
+ * @brief Gets the current Infer Config File in use by the named Inference Component
+ * @param[in] name unique name of Inference Component to query
* @param[out] infer_config_file Infer Config file currently in use
* @return DSL_RESULT_SUCCESS on success, DSL_RESULT_INFER_RESULT otherwise.
*/
@@ -6372,8 +6384,8 @@ DslReturnType dsl_infer_config_file_get(const wchar_t* name,
const wchar_t** infer_config_file);
/**
- * @brief Sets the Infer Config File to use by the named Primary or Secondary GIE
- * @param[in] name unique name of Primary or Secondary GIE to update
+ * @brief Sets the Infer Config File to use by the named Inference Component
+ * @param[in] name unique name of Inference Component to update
* @param[in] infer_config_file new Infer Config file to use
* @return DSL_RESULT_SUCCESS on success, DSL_RESULT_INFER_RESULT otherwise.
*/
@@ -6425,16 +6437,16 @@ DslReturnType dsl_infer_gie_tensor_meta_settings_set(const wchar_t* name,
boolean input_enabled, boolean output_enabled);
/**
- * @brief Gets the current Infer Interval in use by the named Primary or Secondary GIE
- * @param[in] name of Primary or Secondary GIE to query
+ * @brief Gets the current Infer Interval in use by the named Inference Component
+ * @param[in] name of Inference Component to query
* @param[out] interval Infer interval value currently in use
* @return DSL_RESULT_SUCCESS on success, DSL_RESULT_INFER_RESULT otherwise.
*/
DslReturnType dsl_infer_interval_get(const wchar_t* name, uint* interval);
/**
- * @brief Sets the Model Engine File to use by the named Primary or Secondary GIE
- * @param[in] name of Primary or Secondary GIE to update
+ * @brief Sets the Infer Interval to use by the named Inference Component
+ * @param[in] name of Inference Component to update
* @param[in] interval new Infer Interval value to use
* @return DSL_RESULT_SUCCESS on success, DSL_RESULT_INFER_RESULT otherwise.
*/
@@ -6442,7 +6454,7 @@ DslReturnType dsl_infer_interval_set(const wchar_t* name, uint interval);
/**
* @brief Enables/disables the raw layer-info output to binary file for the named GIE
- * @param[in] name name of the Primary or Secondary GIE to update
+ * @param[in] name name of the Inference Component to update
* @param[in] enabled set to true to enable frame-to-file output for each GIE layer
* @param[in] path absolute or relative directory path to write to.
* @return DSL_RESULT_SUCCESS on success, DSL_RESULT_INFER_RESULT otherwise.
@@ -6450,6 +6462,26 @@ DslReturnType dsl_infer_interval_set(const wchar_t* name, uint interval);
DslReturnType dsl_infer_raw_output_enabled_set(const wchar_t* name,
boolean enabled, const wchar_t* path);
+/**
+ * @brief Adds a model update listener callback to a named Primary or Secondary GIE.
+ * @param[in] name name of the Primary or Secondary GIE to update.
+ * @param[in] listener callback function to add.
+ * @param[in] client_data opaque pointer to client data passed to the listener function.
+ * @return DSL_RESULT_SUCCESS on success, DSL_RESULT_INFER_RESULT otherwise.
+ */
+DslReturnType dsl_infer_gie_model_update_listener_add(const wchar_t* name,
+ dsl_infer_gie_model_update_listener_cb listener, void* client_data);
+
+/**
+ * @brief Removes a model update listener callback from a named Primary or Secondary GIE.
+ * @param[in] name name of the Primary or Secondary GIE to update.
+ * @param[in] listener callback function to remove.
+ * @return DSL_RESULT_SUCCESS on success, DSL_RESULT_INFER_RESULT otherwise.
+ */
+DslReturnType dsl_infer_gie_model_update_listener_remove(const wchar_t* name,
+ dsl_infer_gie_model_update_listener_cb listener);
+
/**
* @brief creates a new, uniquely named Multi-Object Tracker (MOT) object. The
* type of tracker is specifed by the configuration file used.
diff --git a/src/DslBase.h b/src/DslBase.h
index 1fe1b06a..bc97c7aa 100644
--- a/src/DslBase.h
+++ b/src/DslBase.h
@@ -86,6 +86,12 @@ namespace DSL
return m_name;
}
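+
+        /**
+         * @brief Gets the name of this Base object as a wide string.
+         * @return the object's name as a std::wstring.
+         */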
+ std::wstring GetWStrName()
+ {
+ std::wstring wStrName(m_name.begin(), m_name.end());
+ return wStrName;
+ }
+
/**
* @brief updates the current name by appending a suffix
* @return const std::string name given to this Event
diff --git a/src/DslInferBintr.cpp b/src/DslInferBintr.cpp
index 8e4c1207..79f73d1c 100644
--- a/src/DslInferBintr.cpp
+++ b/src/DslInferBintr.cpp
@@ -90,6 +90,9 @@ namespace DSL
{
m_pInferEngine->SetAttribute("model-engine-file", modelEngineFile);
}
+ // connect the callback to the GIE element's model-updated signal
+ g_signal_connect(m_pInferEngine->GetGObject(), "model-updated",
+ G_CALLBACK(OnModelUpdatedCB), this);
}
m_pInferEngine->SetAttribute("config-file-path", inferConfigFile);
@@ -117,7 +120,7 @@ namespace DSL
// update the InferEngine interval setting
SetInterval(m_interval);
-
+
// g_object_set (m_pInferEngine->GetGstObject(),
// "raw-output-generated-callback", OnRawOutputGeneratedCB,
// "raw-output-generated-userdata", this,
@@ -160,13 +163,7 @@ namespace DSL
LOG_ERROR("Infer Config File '" << inferConfigFile << "' Not found");
return false;
}
-
- if (IsInUse())
- {
- LOG_ERROR("Unable to set Infer Config File for InferBintr '" << GetName()
- << "' as it's currently in use");
- return false;
- }
+ // can be updated in any state
m_inferConfigFile.assign(inferConfigFile);
m_pInferEngine->SetAttribute("config-file-path", inferConfigFile);
@@ -388,6 +385,70 @@ namespace DSL
m_rawOutputFrameNumber++;
}
+ bool InferBintr::AddModelUpdateListener(
+ dsl_infer_gie_model_update_listener_cb listener, void* clientData)
+ {
+ LOG_FUNC();
+
+ if (m_modelUpdateListeners.find(listener) != m_modelUpdateListeners.end())
+ {
+ LOG_ERROR("Model Update listener is not unique");
+ return false;
+ }
+ m_modelUpdateListeners[listener] = clientData;
+
+ return true;
+ }
+
+ bool InferBintr::RemoveModelUpdateListener(
+ dsl_infer_gie_model_update_listener_cb listener)
+ {
+ LOG_FUNC();
+
+ if (m_modelUpdateListeners.find(listener) == m_modelUpdateListeners.end())
+ {
+ LOG_ERROR("Pipeline listener was not found");
+ return false;
+ }
+ m_modelUpdateListeners.erase(listener);
+
+ return true;
+ }
+
+
+ void InferBintr::HandleOnModelUpdatedCB(gchararray modelEngineFile)
+ {
+ LOG_FUNC();
+
+ LOG_INFO("Model update complete for InferBintr '"
+ << GetName() << "'");
+ LOG_INFO("New model = '" << modelEngineFile);
+
+ if (m_modelUpdateListeners.size())
+ {
+ // Need the wstring version of the file path to send to the client.
+ std::string cModelEngineFile(modelEngineFile);
+            std::wstring wModelEngineFile(cModelEngineFile.begin(),
+ cModelEngineFile.end());
+
+ // iterate through the map of listeners calling each
+ for(auto const& imap: m_modelUpdateListeners)
+ {
+ try
+ {
+ imap.first(GetWStrName().c_str(),
+                        wModelEngineFile.c_str(), imap.second);
+ }
+ catch(...)
+ {
+ LOG_ERROR("Exception calling Client Model-Update-Lister");
+ }
+ }
+
+ }
+
+ }
+
static void OnRawOutputGeneratedCB(GstBuffer* pBuffer, NvDsInferNetworkInfo* pNetworkInfo,
NvDsInferLayerInfo *pLayersInfo, guint layersCount, guint batchSize, gpointer pGie)
{
@@ -395,6 +456,14 @@ namespace DSL
pLayersInfo, layersCount, batchSize);
}
+
+ static void OnModelUpdatedCB(GstElement* object, gint arg0, gchararray arg1,
+ gpointer pInferBintr)
+ {
+        static_cast<InferBintr*>(pInferBintr)->HandleOnModelUpdatedCB(
+ arg1);
+ }
+
// ***********************************************************************
PrimaryInferBintr::PrimaryInferBintr(const char* name, const char* inferConfigFile,
@@ -776,6 +845,6 @@ namespace DSL
{
LOG_FUNC();
}
-
+
}
diff --git a/src/DslInferBintr.h b/src/DslInferBintr.h
index da34044d..c57cdb93 100644
--- a/src/DslInferBintr.h
+++ b/src/DslInferBintr.h
@@ -207,8 +207,30 @@ namespace DSL
* @param batchSize batch-size set to number of sources
*/
void HandleOnRawOutputGeneratedCB(GstBuffer* pBuffer, NvDsInferNetworkInfo* pNetworkInfo,
- NvDsInferLayerInfo *pLayersInfo, guint layersCount, guint batchSize);
+ NvDsInferLayerInfo *pLayersInfo, guint layersCount, guint batchSize);
+ /**
+         * @brief Adds a Model Update Listener to this InferBintr
+ * @param listener client listener function to add
+ * @param clientData opaque pointer to client data to return on callback
+ * @return true on successful addition, false otherwise.
+ */
+ bool AddModelUpdateListener(dsl_infer_gie_model_update_listener_cb listener,
+ void* clientData);
+
+ /**
+     * @brief Removes a Model Update Listener from this InferBintr.
+ * @param listener client listener function to remove.
+ * @return true on successful removal, false otherwise.
+ */
+ bool RemoveModelUpdateListener(dsl_infer_gie_model_update_listener_cb listener);
+
+ /**
+ * @brief Handles the model-updated signal.
+ * @param modelEngineFile path to the new model engine file used.
+ */
+ void HandleOnModelUpdatedCB(gchararray modelEngineFile);
+
/**
* @brief static list of unique Infer plugin IDs to be used/recycled by all
* InferBintrs ctor/dtor
@@ -261,6 +283,11 @@ namespace DSL
* @brief maintains the current frame number between callbacks
*/
ulong m_rawOutputFrameNumber;
+
+ /**
+ * @brief map of all client model update listeners.
+ */
+        std::map<dsl_infer_gie_model_update_listener_cb, void*>
+            m_modelUpdateListeners;
/**
* @brief current input-tensor-meta enabled setting for this InferBintr.
@@ -284,6 +311,9 @@ namespace DSL
static void OnRawOutputGeneratedCB(GstBuffer* pBuffer, NvDsInferNetworkInfo* pNetworkInfo,
NvDsInferLayerInfo *pLayersInfo, guint layersCount, guint batchSize, gpointer pGie);
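+    /**
+     * @brief Handles the GIE plugin's "model-updated" signal, forwarding
+     * the new model-engine-file to HandleOnModelUpdatedCB.
+     */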
+ static void OnModelUpdatedCB(GstElement* object, gint arg0, gchararray arg1,
+ gpointer pInferBintr);
+
/**
* @class PrimaryInferBintr
* @brief Implements a container for a Primary GIE or TIS
@@ -645,7 +675,8 @@ namespace DSL
* @brief dtor for the SecondaryTisBintr
*/
~SecondaryTisBintr();
- };
+ };
+
}
#endif // _DSL_GIE_BINTR_H
\ No newline at end of file
diff --git a/src/DslOdeAction.cpp b/src/DslOdeAction.cpp
index 3c0cdd40..4ea5b64f 100644
--- a/src/DslOdeAction.cpp
+++ b/src/DslOdeAction.cpp
@@ -2074,6 +2074,7 @@ namespace DSL
pDstMeta = (NvDsEventMsgMeta*)g_memdup(pSrcMeta, sizeof(NvDsEventMsgMeta));
+ pDstMeta->extMsg = g_strdup((const gchar*)pSrcMeta->extMsg);
pDstMeta->ts = g_strdup(pSrcMeta->ts);
pDstMeta->sensorStr = g_strdup(pSrcMeta->sensorStr);
pDstMeta->objectId = g_strdup(pSrcMeta->objectId);
@@ -2087,6 +2088,7 @@ namespace DSL
NvDsUserMeta *pUserMeta = (NvDsUserMeta *) data;
NvDsEventMsgMeta *pSrcMeta = (NvDsEventMsgMeta *) pUserMeta->user_meta_data;
+ g_free(pSrcMeta->extMsg);
g_free(pSrcMeta->ts);
g_free(pSrcMeta->sensorStr);
g_free(pSrcMeta->objectId);
@@ -2118,7 +2120,12 @@ namespace DSL
{
NvDsEventMsgMeta* pMsgMeta =
(NvDsEventMsgMeta*)g_malloc0(sizeof(NvDsEventMsgMeta));
-
+
+ DSL_ODE_TRIGGER_PTR pTrigger =
+            std::dynamic_pointer_cast<OdeTrigger>(pOdeTrigger);
+ pMsgMeta->extMsg = g_strdup(pTrigger->GetName().c_str());
+ pMsgMeta->extMsgSize = strlen((char*)pMsgMeta->extMsg) + 1;
+
pMsgMeta->sensorId = pFrameMeta->source_id;
const char* sourceName;
Services::GetServices()->SourceNameGet(pFrameMeta->source_id,
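
With `extMsg` now carrying the Trigger name, downstream consumers of the IoT message meta can identify the event source. A hedged sketch of the read side, assuming a NvDsUserMeta of type NVDS_EVENT_MSG_META; the helper name is hypothetical:

```cpp
// Sketch only: recover the Trigger name from the event-message meta.
// Headers are the standard DeepStream meta headers.
#include "nvdsmeta.h"
#include "nvdsmeta_schema.h"

static const char* get_trigger_name(NvDsUserMeta* pUserMeta)
{
    NvDsEventMsgMeta* pMsgMeta =
        (NvDsEventMsgMeta*)pUserMeta->user_meta_data;

    // extMsg holds a null-terminated copy of the Trigger name and
    // extMsgSize its strlen + 1, per the copy/free functions above.
    return (pMsgMeta && pMsgMeta->extMsg)
        ? (const char*)pMsgMeta->extMsg : "";
}
```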
diff --git a/src/DslServices.cpp b/src/DslServices.cpp
index 54cfaebe..5aaa0497 100644
--- a/src/DslServices.cpp
+++ b/src/DslServices.cpp
@@ -503,6 +503,8 @@ namespace DSL
m_returnValueToString[DSL_RESULT_INFER_COMPONENT_IS_NOT_INFER] = L"DSL_RESULT_INFER_COMPONENT_IS_NOT_INFER";
m_returnValueToString[DSL_RESULT_INFER_OUTPUT_DIR_DOES_NOT_EXIST] = L"DSL_RESULT_INFER_OUTPUT_DIR_DOES_NOT_EXIST";
m_returnValueToString[DSL_RESULT_INFER_ID_NOT_FOUND] = L"DSL_RESULT_INFER_ID_NOT_FOUND";
+ m_returnValueToString[DSL_RESULT_INFER_CALLBACK_ADD_FAILED] = L"DSL_RESULT_INFER_CALLBACK_ADD_FAILED";
+ m_returnValueToString[DSL_RESULT_INFER_CALLBACK_REMOVE_FAILED] = L"DSL_RESULT_INFER_CALLBACK_REMOVE_FAILED";
m_returnValueToString[DSL_RESULT_SEGVISUAL_NAME_NOT_UNIQUE] = L"DSL_RESULT_SEGVISUAL_NAME_NOT_UNIQUE";
m_returnValueToString[DSL_RESULT_SEGVISUAL_NAME_NOT_FOUND] = L"DSL_RESULT_SEGVISUAL_NAME_NOT_FOUND";
diff --git a/src/DslServices.h b/src/DslServices.h
index 957277e1..20fc12f1 100644
--- a/src/DslServices.h
+++ b/src/DslServices.h
@@ -1160,6 +1160,12 @@ namespace DSL {
DslReturnType InferRawOutputEnabledSet(const char* name, boolean enabled,
const char* path);
+
+ DslReturnType InferGieModelUpdateListenerAdd(const char* name,
+ dsl_infer_gie_model_update_listener_cb listener, void* clientData);
+
+ DslReturnType InferGieModelUpdateListenerRemove(const char* name,
+ dsl_infer_gie_model_update_listener_cb listener);
DslReturnType InferGieTensorMetaSettingsGet(const char* name,
boolean* inputEnabled, boolean* outputEnabled);
diff --git a/src/DslServicesInfer.cpp b/src/DslServicesInfer.cpp
index 471838c2..d0a1b3e4 100644
--- a/src/DslServicesInfer.cpp
+++ b/src/DslServicesInfer.cpp
@@ -402,6 +402,72 @@ namespace DSL
}
}
+ DslReturnType Services::InferGieModelUpdateListenerAdd(const char* name,
+ dsl_infer_gie_model_update_listener_cb listener, void* clientData)
+ {
+ LOG_FUNC();
+ LOCK_MUTEX_FOR_CURRENT_SCOPE(&m_servicesMutex);
+
+ try
+ {
+ DSL_RETURN_IF_COMPONENT_NAME_NOT_FOUND(m_components, name);
+ DSL_RETURN_IF_COMPONENT_IS_NOT_GIE(m_components, name);
+
+ DSL_INFER_PTR pInferBintr =
+                std::dynamic_pointer_cast<InferBintr>(m_components[name]);
+
+ if (!pInferBintr->AddModelUpdateListener(listener, clientData))
+ {
+ LOG_ERROR("Inference Component '" << name
+ << "' failed to add a Model Update Listener");
+ return DSL_RESULT_INFER_CALLBACK_ADD_FAILED;
+ }
+ LOG_INFO("Inference Component '" << name
+ << "' added a Model Update Listener successfully");
+
+ return DSL_RESULT_SUCCESS;
+ }
+ catch(...)
+ {
+ LOG_ERROR("Inference Component '" << name
+ << "' threw an exception adding a Model Update Listener");
+            return DSL_RESULT_INFER_THREW_EXCEPTION;
+ }
+ }
+
+ DslReturnType Services::InferGieModelUpdateListenerRemove(const char* name,
+ dsl_infer_gie_model_update_listener_cb listener)
+ {
+        LOG_FUNC();
+        LOCK_MUTEX_FOR_CURRENT_SCOPE(&m_servicesMutex);
+
+ try
+ {
+ DSL_RETURN_IF_COMPONENT_NAME_NOT_FOUND(m_components, name);
+ DSL_RETURN_IF_COMPONENT_IS_NOT_GIE(m_components, name);
+
+ DSL_INFER_PTR pInferBintr =
+                std::dynamic_pointer_cast<InferBintr>(m_components[name]);
+
+ if (!pInferBintr->RemoveModelUpdateListener(listener))
+ {
+ LOG_ERROR("Inference Component '" << name
+ << "' failed to remove a Model Update Listener");
+                return DSL_RESULT_INFER_CALLBACK_REMOVE_FAILED;
+ }
+            LOG_INFO("Inference Component '" << name
+                << "' removed a Model Update Listener successfully");
+
+ return DSL_RESULT_SUCCESS;
+ }
+ catch(...)
+ {
+            LOG_ERROR("Inference Component '" << name
+                << "' threw an exception removing a Model Update Listener");
+            return DSL_RESULT_INFER_THREW_EXCEPTION;
+ }
+ }
+
DslReturnType Services::InferConfigFileGet(const char* name,
const char** inferConfigFile)
{
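
The add/remove services above pair with the now-writable "model-engine-file" property for dynamic updates. A sketch of the intended flow, assuming the existing dsl_infer_gie_model_engine_file_set() service and illustrative names and paths; see the new dynamically_update_inference_model examples for the full version:

```cpp
// Hedged sketch: trigger an "on-the-fly" engine update while playing.
#include <iostream>
#include "DslApi.h"

static void on_update_complete(const wchar_t* name,
    const wchar_t* model_engine_file, void* client_data)
{
    std::wcout << L"Update complete, new engine: "
        << model_engine_file << std::endl;
}

int main()
{
    // Register for the async completion notification first.
    dsl_infer_gie_model_update_listener_add(L"primary-gie",
        on_update_complete, nullptr);

    // With the in-use guard removed, this can be called in any state,
    // including PLAYING; the engine swap completes in the background and
    // the listener above fires once the new engine is in use.
    dsl_infer_gie_model_engine_file_set(L"primary-gie",
        L"./models/resnet18_updated.engine");
    return 0;
}
```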
diff --git a/test/api/DslInferApiTest.cpp b/test/api/DslInferApiTest.cpp
index 15fc1bae..b832d394 100644
--- a/test/api/DslInferApiTest.cpp
+++ b/test/api/DslInferApiTest.cpp
@@ -556,6 +556,48 @@ SCENARIO( "A Primary GIE returns its unique id correctly", "[infer-api]" )
}
}
+static void model_update_listener_cb(const wchar_t* name,
+ const wchar_t* model_engine_file, void* client_data)
+{
+}
+
+SCENARIO( "A model-update-listener can be added and removed", "[infer-api]" )
+{
+
+ GIVEN( "A PGIE in memory" )
+ {
+ REQUIRE( dsl_infer_gie_primary_new(primary_gie_name.c_str(),
+ infer_config_file.c_str(), NULL, interval) == DSL_RESULT_SUCCESS );
+
+ WHEN( "A model-update-listener is added" )
+ {
+ REQUIRE( dsl_infer_gie_model_update_listener_add(primary_gie_name.c_str(),
+ model_update_listener_cb, (void*)0x12345678) == DSL_RESULT_SUCCESS );
+
+ // second call must fail
+ REQUIRE( dsl_infer_gie_model_update_listener_add(primary_gie_name.c_str(),
+ model_update_listener_cb, (void*)0x12345678) ==
+ DSL_RESULT_INFER_CALLBACK_ADD_FAILED );
+
+ THEN( "The same listener can be removed again" )
+ {
+ REQUIRE( dsl_infer_gie_model_update_listener_remove(
+ primary_gie_name.c_str(), model_update_listener_cb) ==
+ DSL_RESULT_SUCCESS );
+
+ // second call must fail
+                REQUIRE( dsl_infer_gie_model_update_listener_remove(
+                    primary_gie_name.c_str(), model_update_listener_cb) ==
+                    DSL_RESULT_INFER_CALLBACK_REMOVE_FAILED );
+
+ REQUIRE( dsl_component_delete_all() == DSL_RESULT_SUCCESS );
+ }
+ }
+ }
+}
+
+
SCENARIO( "The GIE API checks for NULL input parameters", "[infer-api]" )
{
GIVEN( "An empty list of Components" )
@@ -635,6 +677,16 @@ SCENARIO( "The GIE API checks for NULL input parameters", "[infer-api]" )
REQUIRE( dsl_infer_unique_id_get(NULL, &retId) ==
DSL_RESULT_INVALID_INPUT_PARAM );
+ REQUIRE( dsl_infer_gie_model_update_listener_add(NULL, NULL, NULL) ==
+ DSL_RESULT_INVALID_INPUT_PARAM );
+ REQUIRE( dsl_infer_gie_model_update_listener_add(primary_gie_name.c_str(),
+ NULL, NULL) == DSL_RESULT_INVALID_INPUT_PARAM );
+ REQUIRE( dsl_infer_gie_model_update_listener_remove(NULL, NULL) ==
+ DSL_RESULT_INVALID_INPUT_PARAM );
+ REQUIRE( dsl_infer_gie_model_update_listener_remove(primary_gie_name.c_str(),
+ NULL) == DSL_RESULT_INVALID_INPUT_PARAM );
+
REQUIRE( dsl_component_list_size() == 0 );
}
}
diff --git a/test/config/config_infer_secondary_vehiclemake.yml b/test/config/config_infer_secondary_vehiclemake.yml
new file mode 100644
index 00000000..f83600bc
--- /dev/null
+++ b/test/config/config_infer_secondary_vehiclemake.yml
@@ -0,0 +1,76 @@
+####################################################################################################
+# SPDX-FileCopyrightText: Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: LicenseRef-NvidiaProprietary
+#
+# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
+# property and proprietary rights in and to this material, related
+# documentation and any modifications thereto. Any use, reproduction,
+# disclosure or distribution of this material and related documentation
+# without an express license agreement from NVIDIA CORPORATION or
+# its affiliates is strictly prohibited.
+####################################################################################################
+
+# Following properties are mandatory when engine files are not specified:
+# int8-calib-file(Only in INT8)
+# Caffemodel mandatory properties: model-file, proto-file, output-blob-names
+# UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
+# ONNX: onnx-file
+#
+# Mandatory properties for detectors:
+# num-detected-classes
+#
+# Optional properties for detectors:
+# cluster-mode(Default=Group Rectangles), interval(Primary mode only, Default=0)
+# custom-lib-path,
+# parse-bbox-func-name
+#
+# Mandatory properties for classifiers:
+# classifier-threshold, is-classifier, classifier-type
+#
+# Optional properties for classifiers:
+# classifier-async-mode(Secondary mode only, Default=false)
+#
+# Optional properties in secondary mode:
+# operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
+# input-object-min-width, input-object-min-height, input-object-max-width,
+# input-object-max-height
+#
+# Following properties are always recommended:
+# batch-size(Default=1)
+#
+# Other optional properties:
+# net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
+# model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
+# mean-file, gie-unique-id(Default=0), offsets, process-mode (Default=1 i.e. primary),
+# custom-lib-path, network-mode(Default=0 i.e FP32)
+#
+# The values in the config file are overridden by values set through GObject
+# properties.
+
+property:
+ gpu-id: 0
+ net-scale-factor: 1
+ tlt-model-key: tlt_encode
+ tlt-encoded-model: /opt/nvidia/deepstream/deepstream/samples/models/Secondary_VehicleMake/resnet18_vehiclemakenet.etlt
+ model-engine-file: /opt/nvidia/deepstream/deepstream/samples/models/Secondary_VehicleMake/resnet18_vehiclemakenet.etlt_b8_gpu0_int8.engine
+ int8-calib-file: /opt/nvidia/deepstream/deepstream/samples/models/Secondary_VehicleMake/cal_trt.bin
+ labelfile-path: /opt/nvidia/deepstream/deepstream/samples/models/Secondary_VehicleMake/labels.txt
+ force-implicit-batch-dim: 1
+ batch-size: 8
+ model-color-format: 1
+ ## 0=FP32, 1=INT8, 2=FP16 mode
+ network-mode: 1
+ process-mode: 2
+ network-type: 1
+ uff-input-blob-name: input_1
+ output-blob-names: predictions/Softmax
+ classifier-async-mode: 1
+ classifier-threshold: 0.51
+ input-object-min-width: 128
+ input-object-min-height: 128
+ operate-on-gie-id: 1
+ operate-on-class-ids: 0
+ classifier-type: vehiclemake
+ #scaling-filter: 0
+ #scaling-compute-hw: 0
+ infer-dims: 3;224;224
diff --git a/test/config/config_infer_secondary_vehicletypes.yml b/test/config/config_infer_secondary_vehicletypes.yml
new file mode 100644
index 00000000..5bc10848
--- /dev/null
+++ b/test/config/config_infer_secondary_vehicletypes.yml
@@ -0,0 +1,77 @@
+####################################################################################################
+# SPDX-FileCopyrightText: Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: LicenseRef-NvidiaProprietary
+#
+# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
+# property and proprietary rights in and to this material, related
+# documentation and any modifications thereto. Any use, reproduction,
+# disclosure or distribution of this material and related documentation
+# without an express license agreement from NVIDIA CORPORATION or
+# its affiliates is strictly prohibited.
+####################################################################################################
+
+# Following properties are mandatory when engine files are not specified:
+# int8-calib-file(Only in INT8)
+# Caffemodel mandatory properties: model-file, proto-file, output-blob-names
+# UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
+# ONNX: onnx-file
+#
+# Mandatory properties for detectors:
+# num-detected-classes
+#
+# Optional properties for detectors:
+# cluster-mode(Default=Group Rectangles), interval(Primary mode only, Default=0)
+# custom-lib-path,
+# parse-bbox-func-name
+#
+# Mandatory properties for classifiers:
+# classifier-threshold, is-classifier, classifier-type
+#
+# Optional properties for classifiers:
+# classifier-async-mode(Secondary mode only, Default=false)
+#
+# Optional properties in secondary mode:
+# operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
+# input-object-min-width, input-object-min-height, input-object-max-width,
+# input-object-max-height
+#
+# Following properties are always recommended:
+# batch-size(Default=1)
+#
+# Other optional properties:
+# net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
+# model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
+# mean-file, gie-unique-id(Default=0), offsets, process-mode (Default=1 i.e. primary),
+# custom-lib-path, network-mode(Default=0 i.e FP32)
+#
+# The values in the config file are overridden by values set through GObject
+# properties.
+
+property:
+ gpu-id: 0
+ net-scale-factor: 1
+ tlt-model-key: tlt_encode
+ tlt-encoded-model: /opt/nvidia/deepstream/deepstream/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet.etlt
+ model-engine-file: /opt/nvidia/deepstream/deepstream/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet.etlt_b8_gpu0_int8.engine
+ int8-calib-file: /opt/nvidia/deepstream/deepstream/samples/models/Secondary_VehicleTypes/cal_trt.bin
+ labelfile-path: /opt/nvidia/deepstream/deepstream/samples/models/Secondary_VehicleTypes/labels.txt
+ force-implicit-batch-dim: 1
+ batch-size: 8
+ model-color-format: 1
+ ## 0=FP32, 1=INT8, 2=FP16 mode
+ network-mode: 1
+ network-type: 1
+ process-mode: 2
+ uff-input-blob-name: input_1
+ output-blob-names: predictions/Softmax
+ classifier-async-mode: 1
+ classifier-threshold: 0.51
+ input-object-min-width: 128
+ input-object-min-height: 128
+ operate-on-gie-id: 1
+ operate-on-class-ids: 0
+ classifier-type: vehicletype
+  #scaling-filter: 0
+  #scaling-compute-hw: 0
+ infer-dims: 3;224;224
\ No newline at end of file
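
For completeness, a hedged sketch of wiring the new classifier config into a pipeline — component names and the interval are illustrative, and dsl_infer_gie_secondary_new() follows the existing secondary-GIE constructor:

```cpp
// Sketch only: create a Secondary GIE that classifies objects detected by
// the Primary GIE, using the new vehicle-make config above.
#include "DslApi.h"

int main()
{
    DslReturnType retval = dsl_infer_gie_secondary_new(
        L"vehiclemake-sgie",
        L"./test/config/config_infer_secondary_vehiclemake.yml",
        NULL,            // NULL = use the model-engine-file from the config
        L"primary-gie",  // infer on the Primary GIE's output
        0);              // interval 0 = infer on every batch

    return (retval == DSL_RESULT_SUCCESS) ? 0 : 1;
}
```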