
Benchmark multiple different pipelines running in parallel #18

Merged
merged 6 commits into from
Nov 18, 2024

Conversation

Contributor

@jim-wang-intel jim-wang-intel commented Nov 4, 2024

PR Checklist

  • Added label to the Pull Request for easier discoverability and search
  • Commit Message meets guidelines as indicated in the URL https://github.com/intel-retail/loss-prevention/blob/main/CONTRIBUTING.md
  • Every commit is a single defect fix and does not mix feature addition or changes
  • Unit Tests have been added for new changes
  • Updated Documentation as relevant to the changes
  • All commented code has been removed
  • If you've added a dependency, you've ensured license is compatible with repository license and clearly outlined the added dependency.
  • PR change contains code related to security
  • PR introduces changes that breaks compatibility with other modules (If YES, please provide details below)

What are you changing?

This PR demonstrates how to benchmark two (or more) CV pipelines serving different purposes while running in parallel. For more details, check this quick documentation.

Issue this PR will close

close: #issue_number

Anything the reviewer should know when reviewing this PR?

Test Instructions if applicable

  • git clone this PR
  • in the loss-prevention repo, please re-build the loss-prevention Docker image and make sure it is built from this PR:
docker rmi loss-prevention:dev
make build
  • get the latest changes from the performance-tools repo via:
make update-submodules
  • For a regular pipeline benchmark, run:
make DOCKER_COMPOSE=docker-compose-2-clients.yml BENCHMARK_DURATION=90 benchmark

Note that BENCHMARK_DURATION usually needs to be increased to a larger value: the additional pipelines running in parallel in this docker-compose example (docker-compose-2-clients.yml) tend to slow down the system, so the pipelines need more time to stabilize. On my NUC box, I needed to increase it from the default value of 45 to at least 90 to gather a more meaningful benchmark summary.

Here are some sample outputs you can expect to see:

......
/home/jimwang/go/src/github.com/loss-prevention/results/r20241113210117421731269_gst2.jsonl
['20241113210117421731269', '2']
/home/jimwang/go/src/github.com/loss-prevention/results/r20241113210117421731269_gst2.jsonl
1731531799.0876124
2024-11-13 14:03:19.087612
11/13/2024 14:03:087612
parsing last modified log time
/home/jimwang/go/src/github.com/loss-prevention/results/r20241113210117383850131_gst1.jsonl
['20241113210117383850131', '1']
/home/jimwang/go/src/github.com/loss-prevention/results/r20241113210117383850131_gst1.jsonl
1731531799.1356127
2024-11-13 14:03:19.135613
11/13/2024 14:03:135613
parsing last modified log time
/home/jimwang/go/src/github.com/loss-prevention/results/r20241112231434922008249_gst2.jsonl
['20241112231434922008249', '2']
/home/jimwang/go/src/github.com/loss-prevention/results/r20241112231434922008249_gst2.jsonl
1731453486.5691512
2024-11-12 16:18:06.569151
11/12/2024 16:18:569151
parsing last modified log time
/home/jimwang/go/src/github.com/loss-prevention/results/r20241112231434930583977_gst1.jsonl
['20241112231434930583977', '1']
/home/jimwang/go/src/github.com/loss-prevention/results/r20241112231434930583977_gst1.jsonl
1731453486.7991536
2024-11-12 16:18:06.799154
11/12/2024 16:18:799154
parsing CPU usages
parsing memory usage
parsing disk bandwidth
parsing memory bandwidth
parsing power usage
Loss Prevention benchmark results are saved in /home/jimwang/go/src/github.com/loss-prevention/results/summary.csv file
====== Loss prevention benchmark results summary: 
Camera_20241112231434930583977 FPS,17.06688654353562
Camera_20241113210117383850131 FPS,17.74093896713615
Camera_20241113210117421731269 FPS,1.4735915492957747
Camera_20241112231434922008249 FPS,1.4285915492957746
Camera_20241113210117421731269 Last log update,11/13/2024 14:03:087612
Camera_20241113210117383850131 Last log update,11/13/2024 14:03:135613
Camera_20241112231434922008249 Last log update,11/12/2024 16:18:569151
Camera_20241112231434930583977 Last log update,11/12/2024 16:18:799154
CPU Utilization %,96.40241666666667
Memory Utilization %,24.852492999069177
Disk Read MB/s,0.0
Disk Write MB/s,0.0029966115702479337
S0 Memory Bandwidth Usage MB/s,5459.89898989899
S0 Power Draw W,10.247979797979799
Then run
make clean-benchmark-results

so that we have a clean start.
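As an aside, the per-camera FPS rows in the summary.csv output above follow a simple `Name,Value` shape, so they can be pulled out with a few lines of Python. This is only an illustrative sketch (not part of this repo); `camera_fps` and the embedded sample rows are my own, assuming the row format shown in the sample output:

```python
import csv
import io

# Sample rows in the same shape as the summary.csv output above.
SAMPLE = """\
Camera_20241112231434930583977 FPS,17.06688654353562
Camera_20241113210117383850131 FPS,17.74093896713615
CPU Utilization %,96.40241666666667
"""

def camera_fps(csv_text):
    """Return {camera_id: fps} for every 'Camera_<id> FPS' row."""
    out = {}
    for name, value in csv.reader(io.StringIO(csv_text)):
        if name.startswith("Camera_") and name.endswith(" FPS"):
            cam_id = name[len("Camera_"):-len(" FPS")]
            out[cam_id] = float(value)
    return out

print(camera_fps(SAMPLE))
```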

  • to run the stream density example for multiple pipelines running in parallel:
make DOCKER_COMPOSE=docker-compose-2-clients.yml BENCHMARK_DURATION=90 TARGET_FPS="10.95 2.95" CONTAINER_NAMES="gst1 gst2" benchmark-stream-density

It should run OK, but this will take a while (about 10-20 minutes), and you should see the final stream density results for these two pipelines running in parallel (one service runs the yolov5s pipeline, and the other runs the yolov8_roi.sh pipeline).

Example console output:

......
yolov8s already exists.
./download_models/downloadModels.sh	
yolov5s FP16-INT8 model already exists in object_detection/yolov5s/FP16-INT8/yolov5s.bin, skip downloading...
efficientnet FP32-INT8 model already exists, skip downloading...
horizontalText0002 FP16-INT8 model already exists, skip downloading...
textRec0012 FP16-INT8 model already exists, skip downloading...
object_detection/person-detection-0200/FP16-INT8/person-detection-0200.bin
personDetection0106 FP16-INT8 model already exists, skip downloading...
object_detection/face-detection-retail-0005/FP16-INT8/face-detection-retail-0005.bin
faceDetectionRetail0005 FP16-INT8 model already exists, skip downloading...
object_classification/age-gender-recognition-retail-0013/FP16-INT8/age-gender-recognition-retail-0013.bin
ageGenderRecognitionRetail0013 FP16-INT8 model already exists, skip downloading...
cd performance-tools/benchmark-scripts && python benchmark.py --compose_file ../../src/docker-compose-2-clients.yml \
--target_fps 10.95 2.95 --container_names gst1 gst2 \
--density_increment 1 --results_dir /home/jimwang/go/src/github.com/loss-prevention/results
Starting workload(s)
starting stream density for multiple running pipelines...
Completed stream density for target FPS: 10.95 in container: gst1. Max pipelines: 2, Met target FPS? True
Completed stream density for target FPS: 2.95 in container: gst2. Max pipelines: 1, Met target FPS? False
workloads finished...
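For context, the stream-density loop driven by the `--target_fps`/`--container_names` pairs can be sketched roughly like this. This is my own illustrative sketch of the general idea (pair each container with its target FPS via the space-separated lists, grow the pipeline count by the density increment until measured FPS drops below target), not the actual benchmark.py implementation; `measure_fps` and `fake` are hypothetical stand-ins:

```python
def stream_density(targets, containers, measure_fps, max_pipelines=10, increment=1):
    """For each (container, target_fps) pair, grow the pipeline count by
    `increment` until measured FPS falls below target, then report the
    largest count that still met the target."""
    results = {}
    for container, target in zip(containers, targets):
        met, best = False, 0
        n = increment
        while n <= max_pipelines:
            fps = measure_fps(container, n)  # hypothetical measurement hook
            if fps < target:
                break
            met, best = True, n
            n += increment
        results[container] = (best if met else n, met)
    return results

# Fake measurement hook: per-pipeline FPS shrinks as pipelines are added.
fake = lambda c, n: {"gst1": 24.0, "gst2": 2.5}[c] / n
# With this fake hook: gst1 -> (2, True), gst2 -> (1, False),
# mirroring the sample console output above.
print(stream_density([10.95, 2.95], ["gst1", "gst2"], fake))
```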

If there are associated PRs in other repositories, please link them here (i.e., intel-retail/loss-prevention)

…g different pipelines at the same time

Signed-off-by: Jim Wang <yutsung.jim.wang@intel.com>
@jim-wang-intel jim-wang-intel changed the title Benchmark different pipelines in parallel Benchmark multiple different pipelines running in parallel Nov 8, 2024
@jim-wang-intel jim-wang-intel added enhancement New feature or request 3.2 labels Nov 8, 2024
Contributor Author

jim-wang-intel commented Nov 8, 2024

NOTE: This PR stays in draft mode until the performance-tools PR for stream density, intel-retail/performance-tools#72, is merged, because of the dependency on it. #72 has been merged, so this PR is now ready for review.

… added loss-prevention container

Signed-off-by: Jim Wang <yutsung.jim.wang@intel.com>
@jim-wang-intel jim-wang-intel marked this pull request as ready for review November 13, 2024 21:12
@jim-wang-intel jim-wang-intel linked an issue Nov 13, 2024 that may be closed by this pull request
Contributor

@seanohair22 seanohair22 left a comment


LGTM! Tested various pipelines and multiple stream density target FPS values.

volumes:
- ${RETAIL_USE_CASE_ROOT:-..}/performance-tools/sample-media:/home/pipeline-server/sample-media

OvmsClientGst1:
Contributor

This container isn't OpenVINO Model Server; we should change its name. Maybe GstClient.

Contributor Author


fixed.

benchmark.md Outdated
@@ -38,6 +38,45 @@ make benchmark-stream-density
!!! Note
For more details on how this works, you can check the documentation of performance-tools in [Benchmark Stream Density for CV Pipelines](https://github.com/intel-retail/documentation/blob/main/docs_src/performance-tools/benchmark.md#benchmark-stream-density-for-cv-pipelines) section.

### Benchmark for multiple pipelines in parallel

There is an example docker-compose file under src/ directory, named `docker-compose-2-clients.yml` that can be used to show case both of benchmarks of parallel running pipelines and stream density benchmarks of running pipelines. This docker-compose file contains two different running pipelines: one is running yolov5s pipeline and the other one is yolov8 region of interests pipeline. Follow the follow command examples to do the benchmarks:
Contributor

Suggested change
There is an example docker-compose file under src/ directory, named `docker-compose-2-clients.yml` that can be used to show case both of benchmarks of parallel running pipelines and stream density benchmarks of running pipelines. This docker-compose file contains two different running pipelines: one is running yolov5s pipeline and the other one is yolov8 region of interests pipeline. Follow the follow command examples to do the benchmarks:
There is an example docker-compose file under src/ directory, named `docker-compose-2-clients.yml` that can be used to show case both of benchmarks of parallel running pipelines and stream density benchmarks of running pipelines. This docker-compose file contains two different running pipelines: one is running yolov5s pipeline and the other one is yolov8 region of interests pipeline. Use the following command examples to do the benchmarks:

Contributor Author

Thanks, fixed.

Signed-off-by: Jim Wang <yutsung.jim.wang@intel.com>
@jim-wang-intel jim-wang-intel merged commit 8c9e162 into intel-retail:main Nov 18, 2024
6 checks passed
Successfully merging this pull request may close these issues.

Spike: Run performance tool with different use cases in parallel