NOTE: By default, Open Model Zoo demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with the --reverse_input_channels argument specified. For more information about the argument, refer to When to Reverse Input Channels section of [Embedding Preprocessing Computation](@ref openvino_docs_MO_DG_Additional_Optimization_Use_Cases).
Social Distance C++ Demo
This demo showcases a retail social distance application that detects people and measures the distance between them. If this distance is less than a threshold provided by the user, an alert is triggered.
Other demo objectives are:

- Video/Camera as inputs, via OpenCV*
- Example of complex asynchronous network pipelining: the Person Re-Identification network is executed on top of the Person Detection results
- Visualization of the minimum social distancing threshold violation
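The alert rule can be sketched as a pairwise distance check. This is an illustrative sketch only: the `Point` and `violatesDistance` names are hypothetical, and the actual demo works with person detections and calibrated distances rather than bare points.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// A detected person's center in the frame (hypothetical helper type).
struct Point { double x, y; };

// Returns true if any pair of people is closer than the user-provided
// minimum distance, i.e. the condition that triggers the alert.
bool violatesDistance(const std::vector<Point>& centers, double minDist) {
    for (std::size_t i = 0; i < centers.size(); ++i)
        for (std::size_t j = i + 1; j < centers.size(); ++j) {
            double dx = centers[i].x - centers[j].x;
            double dy = centers[i].y - centers[j].y;
            if (std::hypot(dx, dy) < minDist) return true;  // too close
        }
    return false;
}
```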
How It Works
On start-up, the application reads command-line parameters and loads the specified networks. Both the Person Detection and Re-Identification networks are required.
The core component of the application pipeline is the `Worker` class, which executes incoming instances of a `Task` class. `Task` is an abstract class that describes data to process and how to process it. For example, a `Task` can be to read a frame or to get detection results. There is a pool of `Task` instances awaiting execution. When a `Task` from the pool is executed, it may create and/or submit another `Task` to the pool. Each `Task` stores a smart pointer to an instance of `VideoFrame`, which represents the image the `Task` works with. When the sequence of `Task`s is completed and none of the `Task`s require the `VideoFrame` instance, the `VideoFrame` is destroyed. This triggers the creation of a new sequence of `Task`s.
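The Task/Worker mechanics above can be sketched as follows. The class shapes here are illustrative assumptions, not the demo's actual definitions: a `Reader` task submits a follow-up `Drawer` task for the same shared frame, and the frame is destroyed once no task holds its `shared_ptr` any longer.

```cpp
#include <memory>
#include <queue>
#include <string>
#include <vector>

// Represents one image flowing through the pipeline; destroyed automatically
// when the last Task holding the shared_ptr is done with it.
struct VideoFrame {
    int index;
    explicit VideoFrame(int i) : index(i) {}
};

class Worker;  // forward declaration: Tasks submit follow-ups to the Worker

// Abstract Task: holds a smart pointer to the frame it works on and knows
// how to process it.
class Task {
public:
    Task(std::shared_ptr<VideoFrame> frame, Worker& worker)
        : frame_(std::move(frame)), worker_(worker) {}
    virtual ~Task() = default;
    virtual void process() = 0;
protected:
    std::shared_ptr<VideoFrame> frame_;
    Worker& worker_;
};

// Worker: drains the pool of Tasks; an executing Task may push more Tasks.
class Worker {
public:
    std::vector<std::string> log;  // records each stage, for illustration only
    void push(std::unique_ptr<Task> t) { pool_.push(std::move(t)); }
    void runAll() {
        while (!pool_.empty()) {
            auto t = std::move(pool_.front());
            pool_.pop();
            t->process();  // may push follow-up Tasks onto pool_
        }
    }
private:
    std::queue<std::unique_ptr<Task>> pool_;
};

// Final stage: "shows" the frame (here: just logs it); when this Task is
// destroyed, the last reference to the frame goes away.
class Drawer : public Task {
public:
    using Task::Task;
    void process() override {
        worker_.log.push_back("draw frame " + std::to_string(frame_->index));
    }
};

// First stage: "reads" a frame and submits the next stage for the same frame.
class Reader : public Task {
public:
    using Task::Task;
    void process() override {
        worker_.log.push_back("read frame " + std::to_string(frame_->index));
        worker_.push(std::make_unique<Drawer>(frame_, worker_));
    }
};
```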
The pipeline of this demo executes the following sequence of `Task`s:

- `Reader`, which reads a new frame
- `InferTask`, which starts detection inference
- `DetectionsProcessor`, which waits for detection inference to complete and runs a Re-Identification model
- `ResAggregator`, which draws the results of the inference on the frame
- `Drawer`, which shows the frame with the inference results

At the end of the sequence, the `VideoFrame` is destroyed and the sequence starts again for the next frame.

Preparing to Run
For demo input image or video files, refer to the section Media Files Available for Demos in the Open Model Zoo Demos Overview.

The list of models supported by the demo is in the `<omz_dir>/demos/social_distance_demo/cpp/models.lst` file. This file can be used as a parameter for Model Downloader and Converter to download and, if necessary, convert models to OpenVINO IR format (`*.xml` + `*.bin`).

An example of using the Model Downloader:
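A sketch of the invocation, assuming the `omz_downloader` tool installed with the `openvino-dev` package (replace `<omz_dir>` with your Open Model Zoo directory):

```shell
omz_downloader --list <omz_dir>/demos/social_distance_demo/cpp/models.lst
```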
An example of using the Model Converter:
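A sketch under the same assumption, using the matching `omz_converter` tool:

```shell
omz_converter --list <omz_dir>/demos/social_distance_demo/cpp/models.lst
```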
Supported Models
person-detection-0200
person-detection-0201
person-detection-0202
person-detection-retail-0013
person-reidentification-retail-0277
person-reidentification-retail-0286
person-reidentification-retail-0287
person-reidentification-retail-0288
Running

Running the application with the `-h` option yields the following usage message:

Running the application with an empty list of options yields an error message.
For example, to do inference on a GPU with the OpenVINO toolkit pre-trained models, run the following command:
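A hedged sketch of such a command: the flag names (`-i`, `-m_det`, `-m_reid`, `-d_det`, `-d_reid`) and the model choices are assumptions, so confirm them against the demo's `-h` output:

```shell
./social_distance_demo -i <path_to_video>/input.mp4 \
    -m_det <path_to_model>/person-detection-0200.xml \
    -m_reid <path_to_model>/person-reidentification-retail-0277.xml \
    -d_det GPU -d_reid GPU
```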
To do inference for two video inputs using two asynchronous infer requests on CPU with the OpenVINO toolkit pre-trained models, run the following command:
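A sketch under the same flag-name assumptions; `-nireq` (number of infer requests) is likewise an assumption to verify against the `-h` output:

```shell
./social_distance_demo -i <path_to_video>/input1.mp4 -i <path_to_video>/input2.mp4 \
    -m_det <path_to_model>/person-detection-0200.xml \
    -m_reid <path_to_model>/person-reidentification-retail-0277.xml \
    -d_det CPU -d_reid CPU -nireq 2
```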
Demo Output
The demo uses OpenCV to display the resulting frame with detections rendered as bounding boxes and text. The demo reports:
FPS: average rate of video frame processing (frames per second).
Latency: average time required to process one frame (from reading the frame to displaying the results).
You can use these metrics to measure application-level performance.
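The two numbers are independent in a pipelined application: several frames are in flight at once, so FPS can be higher than one over the latency. A hypothetical accumulator (not the demo's actual code) illustrating how each is computed:

```cpp
#include <cstddef>

// Illustrative metrics accumulator: FPS is frames over the wall-clock span
// of processing, latency is the average per-frame read-to-display time.
struct Metrics {
    double wallStartMs = -1, wallEndMs = -1;  // span of processing
    double latencySumMs = 0;                  // sum of per-frame latencies
    std::size_t frames = 0;

    // Record one frame: when it was read and when its result was displayed.
    void update(double frameStartMs, double frameEndMs) {
        if (wallStartMs < 0) wallStartMs = frameStartMs;
        wallEndMs = frameEndMs;
        latencySumMs += frameEndMs - frameStartMs;
        ++frames;
    }
    double avgLatencyMs() const { return frames ? latencySumMs / frames : 0.0; }
    double fps() const {
        double span = wallEndMs - wallStartMs;
        return span > 0 ? 1000.0 * frames / span : 0.0;
    }
};
```

Note that overlapping frames (frame 2 starts before frame 1 is displayed) raise FPS without changing latency, which is exactly what asynchronous pipelining buys.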
See Also
Open Model Zoo Demos
Model Optimizer
Model Downloader