This project showcases the advantages of Intel's OpenVINO toolkit. We will develop a Blindspot Assistance scenario, defining a blindspot area in which to detect vehicles and pedestrians across up to 4 cameras simultaneously. For that, we will use the OpenVINO toolkit and OpenCV.
Using OpenVINO's detection models, we can detect pedestrians and vehicles with great accuracy. We are currently using a pedestrian and vehicle detection model that is included with OpenVINO out-of-the-box:
- pedestrian-and-vehicle-detector-adas-0001
To run the application in this tutorial, the OpenVINO™ toolkit (2019 R2 or greater) and its dependencies must already be installed and verified using the included demos. Installation instructions may be found at: https://software.intel.com/en-us/articles/OpenVINO-Install-Linux
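Once the toolkit is installed, remember that the OpenVINO environment must be initialized in every new shell before building or running anything. A minimal sketch, assuming the default installation path used by the 2019 releases:
# Load the OpenVINO environment variables (adjust the path if you installed elsewhere).
source /opt/intel/openvino/bin/setupvars.sh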
Any optional hardware to be used must also be installed and verified, including:
- USB camera - Standard USB Video Class (UVC) camera.
- Intel® Core™ CPU with integrated graphics.
- VPU - USB Intel® Movidius™ Neural Compute Stick, referred to as "MYRIAD".
A summary of what is needed:
- Target and development platforms meeting the requirements described in the "System Requirements" section of the OpenVINO™ toolkit documentation, which may be found at: https://software.intel.com/en-us/openvino-toolkit
Note: While writing this tutorial, an Intel® i7-8550U with Intel® HD graphics 520 GPU was used as both the development and target platform.
- Optional:
  - Intel® Movidius™ Neural Compute Stick
  - USB UVC camera
  - Intel® Core™ CPU with integrated graphics.
- OpenVINO™ toolkit supported Linux operating system. This tutorial was run on 64-bit Ubuntu 18.04.3 LTS, updated to kernel 4.15.0-66, following the OpenVINO™ toolkit installation instructions.
- The latest OpenVINO™ toolkit installed and verified. Supported versions: 2019 R2 and later (latest supported version: 2019 R3.1).
- Git (git) for downloading from the GitHub repository.
By now you should have completed the Linux installation guide for the OpenVINO™ toolkit. However, before continuing, please ensure:
- That after installing the OpenVINO™ toolkit you have run the supplied demo samples.
- If you have and intend to use a GPU: you have installed and tested the GPU drivers.
- If you have and intend to use a USB camera: you have connected and tested the USB camera.
- If you have and intend to use a Myriad: you have connected and tested the USB Intel® Movidius™ Neural Compute Stick.
- That your development platform is connected to a network and has Internet access. To download all the files for this tutorial, you will need to access GitHub on the Internet.
- Docker. To install it on Ubuntu, run:
sudo snap install docker
sudo groupadd docker
sudo usermod -aG docker $USER
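The group change only takes effect in a new login session, so you may need to log out and back in (or run newgrp docker). As a quick sanity check that Docker works without sudo:
docker run hello-world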
1. Clone the repository at the desired location:
git clone https://github.com/incluit/OpenVino-Blindspot-Assistance.git
2. Change into the top-level directory of the repository:
cd OpenVino-Blindspot-Assistance
3. Build the Docker image and the Blindspot Assistance application:
make docker-build
make docker-run
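If the build succeeds, make docker-run should drop you into the container. From another terminal on the host, you can confirm it is up (blindspotcont is the container name used later in this tutorial):
docker ps --filter name=blindspotcont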
From inside the Blindspot Assistance Docker container, verify all existing options for the example use cases by running the application with the -h option to see the usage message:
./blindspot-assistance -h
Options:
-h Print a usage message.
-m "<path>" Required. Path to an .xml file with a trained model.
-l "<absolute_path>" Required for CPU custom layers. Absolute path to a shared library with the kernel implementations.
Or
-c "<absolute_path>" Required for GPU custom kernels. Absolute path to an .xml file with the kernel descriptions.
-d "<device>" Optional. Specify the target device for a network (the list of available devices is shown below). Default value is CPU. Use "-d HETERO:<comma-separated_devices_list>" format to specify HETERO plugin. The demo looks for a suitable plugin for a specified device.
-nc Optional. Maximum number of processed camera inputs (web cameras).
-bs Optional. Batch size for processing (the number of frames processed per infer request).
-nireq Optional. Number of infer requests.
-n_iqs Optional. Frame queue size for input channels.
-fps_sp Optional. FPS measurement sampling period between timepoints in msec.
-n_sp Optional. Number of sampling periods.
-pc Optional. Enable per-layer performance report.
-t Optional. Probability threshold for detections.
-no_show Optional. Do not show processed video.
-no_show_d Optional. Do not show detected objects.
-show_stats Optional. Enable statistics report.
-duplicate_num Optional. Enable and specify the number of channels additionally copied from real sources.
-real_input_fps Optional. Disable input frames caching for maximum throughput pipeline.
-i Optional. Specify full path to input video files.
-loop_video Optional. Enable playing video on a loop.
-calibration Optional. Camera calibration.
-show_calibration Optional. Show camera calibration.
-alerts Optional. Send alerts to AlertManager through the proxy.
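As an illustration of how these flags combine (a hypothetical invocation, not one of the tutorial's scenarios), the following would read a single USB camera, run inference on the CPU, and raise the detection threshold:
./blindspot-assistance -m ../../../models/FP32/pedestrian-and-vehicle-detector-adas-0001.xml -d CPU -nc 1 -t 0.7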
1. Driving Scenario: detection area configured by default, using CPU:
./blindspot-assistance -m ../../../models/FP32/pedestrian-and-vehicle-detector-adas-0001.xml -d CPU -i ../../../data/BlindspotFront.mp4 ../../../data/BlindspotLeft.mp4 ../../../data/BlindspotRear.mp4 ../../../data/BlindspotRight.mp4
2. Driving Scenario: detection area configured by default, using GPU:
./blindspot-assistance -m ../../../models/FP32/pedestrian-and-vehicle-detector-adas-0001.xml -d GPU -i ../../../data/BlindspotFront.mp4 ../../../data/BlindspotLeft.mp4 ../../../data/BlindspotRear.mp4 ../../../data/BlindspotRight.mp4
3. Driving Scenario: detection area configured by default, using MYRIAD:
./blindspot-assistance -m ../../../models/FP32/pedestrian-and-vehicle-detector-adas-0001.xml -d MYRIAD -i ../../../data/BlindspotFront.mp4 ../../../data/BlindspotLeft.mp4 ../../../data/BlindspotRear.mp4 ../../../data/BlindspotRight.mp4
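Note: the Neural Compute Stick runs FP16 models, so the FP32 model above may be rejected on MYRIAD. Assuming an FP16 copy of the model exists alongside the FP32 one (not verified here), the command would become:
./blindspot-assistance -m ../../../models/FP16/pedestrian-and-vehicle-detector-adas-0001.xml -d MYRIAD -i ../../../data/BlindspotFront.mp4 ../../../data/BlindspotLeft.mp4 ../../../data/BlindspotRear.mp4 ../../../data/BlindspotRight.mp4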
4. Driving Scenario: configuring the detection area - visualization disabled:
./blindspot-assistance -m ../../../models/FP32/pedestrian-and-vehicle-detector-adas-0001.xml -i ../../../data/BlindspotFront.mp4 ../../../data/BlindspotLeft.mp4 ../../../data/BlindspotRear.mp4 ../../../data/BlindspotRight.mp4 -calibration
5. Driving Scenario: configuring the detection area - visualization enabled:
./blindspot-assistance -m ../../../models/FP32/pedestrian-and-vehicle-detector-adas-0001.xml -i ../../../data/BlindspotFront.mp4 ../../../data/BlindspotLeft.mp4 ../../../data/BlindspotRear.mp4 ../../../data/BlindspotRight.mp4 -calibration -show_calibration
6. Running Blindspot Assistance along with the Alert Manager Microservice:
To enable the Alert Manager Microservice, check the following link: https://github.com/incluit/OpenVino-Alert-Manager
./blindspot-assistance -m ../../../models/FP32/pedestrian-and-vehicle-detector-adas-0001.xml -i ../../../data/BlindspotFront.mp4 ../../../data/BlindspotLeft.mp4 ../../../data/BlindspotRear.mp4 ../../../data/BlindspotRight.mp4 -calibration -show_calibration -alerts
7. Running Blindspot Assistance along with the Alert Manager Microservice and the Cloud Connector Microservice:
To enable the Cloud Connector Microservice, check the following link: https://github.com/incluit/OpenVino-Cloud-Connector
./blindspot-assistance -m ../../../models/FP32/pedestrian-and-vehicle-detector-adas-0001.xml -i ../../../data/BlindspotFront.mp4 ../../../data/BlindspotLeft.mp4 ../../../data/BlindspotRear.mp4 ../../../data/BlindspotRight.mp4 -calibration -show_calibration -alerts
You can also experiment with different detection models. The ones available so far are:
- person-vehicle-bike-detection-crossroad-0078
- person-vehicle-bike-detection-crossroad-1016
./blindspot-assistance -m ../../../models/FP32/person-vehicle-bike-detection-crossroad-0078.xml -d HETERO:CPU,GPU -i ../../../data/BlindspotFront.mp4 ../../../data/BlindspotLeft.mp4 ../../../data/BlindspotRear.mp4 ../../../data/BlindspotRight.mp4
./blindspot-assistance -m ../../../models/FP32/person-vehicle-bike-detection-crossroad-1016.xml -d HETERO:CPU,GPU -i ../../../data/BlindspotFront.mp4 ../../../data/BlindspotLeft.mp4 ../../../data/BlindspotRear.mp4 ../../../data/BlindspotRight.mp4
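If these models are not already present under the repository's models directory, they can be fetched with the OpenVINO Model Downloader (the path below assumes the default 2019-release install location; the downloader writes into its own output tree, so adjust the -m path accordingly):
python3 /opt/intel/openvino/deployment_tools/tools/model_downloader/downloader.py --name person-vehicle-bike-detection-crossroad-0078
python3 /opt/intel/openvino/deployment_tools/tools/model_downloader/downloader.py --name person-vehicle-bike-detection-crossroad-1016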
1. If you receive the following message inside the Docker container:
Gtk-WARNING **: 13:01:52.097: cannot open display: :0
Exit the Docker container and, on the host, run:
xhost +
Then re-enter the Docker container and run the application again.
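Note that xhost + disables X server access control entirely; a somewhat safer alternative is to allow only local (non-network) clients:
xhost +local: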
2. If you exit the Docker container, you can re-enter it with:
docker start blindspotcont
docker exec -it blindspotcont /bin/bash
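When you are done, stop the container with the standard Docker command (blindspotcont is the container name used above):
docker stop blindspotcont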
== [Optional] AWS (In Progress)
We also plan to send the data through ZMQ using AWS IoT Core. Using AWS may incur a cost, so running with it will be optional.