Merge pull request #30 from antoniomtz/loss-prevention
Loss prevention
antoniomtz authored Sep 23, 2024
2 parents 8f3029f + 0d38119 commit 836ba97
Showing 4 changed files with 211 additions and 0 deletions.
36 changes: 36 additions & 0 deletions docs_src/use-cases/loss-prevention/advanced.md
@@ -0,0 +1,36 @@
# Advanced Settings

To further customize the loss prevention pipeline, pass additional variables to the make target:

!!! Example

```bash
make PIPELINE_SCRIPT=yolov8s_roi.sh RESULTS_DIR="../render_results" run-render-mode
```

The above command executes a DLStreamer pipeline that runs the YOLOv8s model for object detection on a region of interest (ROI), with object tracking enabled.
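
For reference, a pipeline of this kind typically chains ROI attachment, detection, tracking, and metadata publishing. The launch string below is only a minimal sketch of that shape, assuming standard DLStreamer element names; the model path, ROI file location, and tracking type are placeholders, and the actual pipeline is defined in `src/pipelines/yolov8s_roi.sh`.

```bash
# Illustrative sketch only -- see src/pipelines/yolov8s_roi.sh for the real launch string.
# Decode the RTSP feed, attach the ROI(s) defined in roi.json, run YOLOv8s detection
# restricted to those ROIs, track objects across frames, then publish JSON metadata.
gst-launch-1.0 uridecodebin uri=rtsp://localhost:8554/camera_0 ! \
  gvaattachroi file-path=roi.json ! \
  gvadetect model=yolov8s.xml device=CPU inference-region=roi-list ! \
  gvatrack tracking-type=zero-term ! \
  gvametaconvert format=json ! \
  gvametapublish method=file file-path=results.jsonl ! \
  fakesink sync=false
```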

## Modify ROI coordinates

To modify the ROI coordinates, edit `src/pipelines/roi.json`. Because the `objects` attribute is an array, multiple ROIs can be defined (see the two-ROI example further below).

```json
[
    {
        "objects": [
            {
                "detection": {
                    "label": "ROI1"
                },
                "x": 0,
                "y": 0,
                "w": 620,
                "h": 1080
            }
        ]
    }
]
```
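For example, a second region can be defined by appending another entry to the same array; the coordinates below are purely illustrative.

```json
[
    {
        "objects": [
            {
                "detection": {
                    "label": "ROI1"
                },
                "x": 0,
                "y": 0,
                "w": 620,
                "h": 1080
            },
            {
                "detection": {
                    "label": "ROI2"
                },
                "x": 620,
                "y": 0,
                "w": 620,
                "h": 1080
            }
        ]
    }
]
```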
For environment variables, follow the same tutorial as the Automated Self Checkout [HERE](../automated-self-checkout/advanced.md).


158 changes: 158 additions & 0 deletions docs_src/use-cases/loss-prevention/getting_started.md
@@ -0,0 +1,158 @@
# Getting Started

## Step-by-step instructions

1. Download the models using `download_models/downloadModels.sh`

```bash
make download-models
```

2. Update the Git submodules

```bash
make update-submodules
```
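
To confirm the submodules were initialized and pinned to the expected commits, you can run the standard Git check:

```bash
git submodule status
```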

3. Download sample videos used by the performance tools

```bash
make download-sample-videos
```

4. Build the demo Docker image

```bash
make build
```

5. Start the loss prevention pipeline using the Docker Compose file. The Compose file also includes an RTSP camera simulator that loops infinitely through the sample videos downloaded in step 3.

```bash
make run-render-mode
```

6. Verify the Docker containers are running

```bash
docker ps --format 'table{{.Names}}\t{{.Status}}\t{{.Image}}'
```
Result:
```bash
NAMES                 STATUS          IMAGE
camera-simulator0     Up 17 seconds   jrottenberg/ffmpeg:4.1-alpine
src-OvmsClientGst-1   Up 17 seconds   dlstreamer:dev
camera-simulator      Up 17 seconds   aler9/rtsp-simple-server
```
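
If any container is not in the `Up` state, its logs are the first place to look. For example, for the pipeline container named in the listing above (your container name may differ):

```bash
docker logs -f src-OvmsClientGst-1
```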

7. Verify Results

After starting the loss prevention pipeline, you will begin to see result files written to the `results/` directory. Here are example outputs from the three log files.

gst-launch_<time>_gst.log
```
/GstPipeline:pipeline0/GstGvaWatermark:gvawatermark0/GstCapsFilter:capsfilter1: caps = video/x-raw(memory:VASurface), format=(string)RGBA
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstXImageSink:ximagesink0: sync = true
Got context from element 'vaapipostproc1': gst.vaapi.Display=context, gst.vaapi.Display=(GstVaapiDisplay)"\(GstVaapiDisplayGLX\)\ vaapidisplayglx0", gst.vaapi.Display.GObject=(GstObject)"\(GstVaapiDisplayGLX\)\ vaapidisplayglx0";
Progress: (open) Opening Stream
Pipeline is PREROLLED ...
Prerolled, waiting for progress to finish...
Progress: (connect) Connecting to rtsp://localhost:8554/camera_0
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
```
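
If the stream never leaves the connection phase, searching this launch log for errors and warnings is a quick first check (the file name follows the `gst-launch_<time>_gst.log` pattern above):

```bash
grep -iE "error|warn" results/gst-launch_*_gst.log
```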

pipeline<time>_gst.log
```
14.58
14.58
15.47
15.47
15.10
15.10
14.60
14.60
14.88
14.88
```
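
Each line in this log is a frames-per-second sample, so a rough average can be computed with standard shell tools, for example:

```bash
awk '{ sum += $1; n++ } END { if (n) printf "average FPS: %.2f\n", sum / n }' results/pipeline*_gst.log
```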

r<time>_gst.jsonl
```json
{
    "objects": [
        {
            "detection": {
                "bounding_box": {
                    "x_max": 0.7873924346958825,
                    "x_min": 0.6701603093745723,
                    "y_max": 0.7915918938548927,
                    "y_min": 0.14599881349270305
                },
                "confidence": 0.8677337765693665,
                "label": "bottle",
                "label_id": 39
            },
            "h": 697,
            "region_id": 610,
            "roi_type": "bottle",
            "w": 225,
            "x": 1287,
            "y": 158
        },
        {
            "detection": {
                "bounding_box": {
                    "x_max": 0.3221945836811382,
                    "x_min": 0.19950163649114616,
                    "y_max": 0.7924592239981934,
                    "y_min": 0.1336837231479251
                },
                "confidence": 0.8625879287719727,
                "label": "bottle",
                "label_id": 39
            },
            "h": 711,
            "region_id": 611,
            "roi_type": "bottle",
            "w": 236,
            "x": 383,
            "y": 144
        },
        {
            "detection": {
                "bounding_box": {
                    "x_max": 0.5730873789069046,
                    "x_min": 0.42000878963365595,
                    "y_max": 0.9749763191740435,
                    "y_min": 0.12431765065780453
                },
                "confidence": 0.854443371295929,
                "label": "bottle",
                "label_id": 39
            },
            "h": 919,
            "region_id": 612,
            "roi_type": "bottle",
            "w": 294,
            "x": 806,
            "y": 134
        }
    ],
    "resolution": {
        "height": 1080,
        "width": 1920
    },
    "timestamp": 755106652
}
```
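
To quickly summarize what the pipeline is detecting, the JSONL results can be queried with `jq` (assuming it is installed locally), for example counting detections per label:

```bash
jq -r '.objects[].detection.label' results/r*_gst.jsonl | sort | uniq -c | sort -rn
```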

8. Stop the demo using `docker compose down`
```bash
make down
```


## [Proceed to Advanced Settings](advanced.md)
13 changes: 13 additions & 0 deletions docs_src/use-cases/loss-prevention/loss-prevention.md
@@ -0,0 +1,13 @@
# Intel® Loss Prevention Reference Package

## Overview

As computer vision technology becomes more mainstream in industrial and retail settings, using it for loss prevention is becoming increasingly complex. These vision workloads are substantial and require multiple stages of processing. For example, a typical loss prevention pipeline might capture video data, define regions of interest, implement tracking to monitor which products customers interact with, analyze the data using models like YOLOv5 and EfficientNet, and then post-process it to generate metadata that highlights which products are being purchased or potentially stolen. This is just one example of how such models and workflows can be utilized.

Implementing loss prevention solutions in retail isn't straightforward. Retailers, independent software vendors (ISVs), and system integrators (SIs) need a solid understanding of both hardware and software, as well as the costs involved in setting up and scaling these systems. Given the data-intensive nature of vision workloads, systems must be carefully designed, built, and deployed with numerous considerations in mind. Effectively combating shrinkage requires the right mix of hardware, software, and optimized configurations.

The Intel® Loss Prevention Reference Package is designed to help with this. It provides the essential components needed to develop and deploy a loss prevention solution using Intel® hardware, software, and open-source tools. This reference implementation includes a pre-configured pipeline that's optimized for Intel® hardware, simplifying the setup of an effective computer vision-based loss prevention system for retailers.

## Next Steps

To begin using the loss prevention solution, follow the [Getting Started Guide](./getting_started.md).
4 changes: 4 additions & 0 deletions mkdocs.yml
@@ -56,6 +56,10 @@ nav:
- AI Connect for Scientific Data (AiCSD):
- Overview: 'use-cases/AiCSD/aicsd.md'
- GRPC Yolov5s Pipeline: 'use-cases/AiCSD/pipeline-grpc-go.md'
- Loss Prevention:
- Overview: 'use-cases/loss-prevention/loss-prevention.md'
- Getting Started: 'use-cases/loss-prevention/getting_started.md'
- Advanced Settings: 'use-cases/loss-prevention/advanced.md'
- Releases: 'releasenotes.md'
- Troubleshooting: 'troubleshooting.md'
extra_css:
