This demo showcases Vehicle and License Plate Detection network followed by the Vehicle Attributes Recognition and License Plate Recognition networks applied on top of the detection results. You can use a set of the following pre-trained models with the demo:
* `vehicle-license-plate-detection-barrier-0106` or `vehicle-license-plate-detection-barrier-0123`, which is the primary detection network that finds vehicles and license plates
* `vehicle-attributes-recognition-barrier-0039`, which is executed on top of the results from the first network and reports general vehicle attributes, for example, vehicle type (car/van/bus/truck) and color
* `license-plate-recognition-barrier-0001` or `license-plate-recognition-barrier-0007`, which is executed on top of the results from the first network and reports a string per recognized license plate

For more information about the pre-trained models, refer to the model documentation.
Other demo objectives are:
On the start-up, the application reads command line parameters and loads the specified networks. The Vehicle and License Plate Detection network is required, the other two are optional.
The core component of the application pipeline is the `Worker` class, which executes incoming instances of a `Task` class. `Task` is an abstract class that describes data to process and how to process it. For example, a `Task` can be to read a frame or to get detection results. There is a pool of `Task` instances awaiting execution. When a `Task` from the pool is being executed, it may create and/or submit another `Task` to the pool. Each `Task` stores a smart pointer to an instance of `VideoFrame`, which represents the image the `Task` works with. When the sequence of `Task`s is completed and none of the `Task`s require a `VideoFrame` instance, the `VideoFrame` is destroyed. This triggers the creation of a new sequence of `Task`s. The pipeline of this demo executes the following sequence of `Task`s:
* `Reader`, which reads a new frame
* `InferTask`, which starts detection inference
* `RectExtractor`, which waits for detection inference to complete and runs a classifier and a recognizer
* `ResAggregator`, which draws the results of the inference on the frame
* `Drawer`, which shows the frame with the inference results

At the end of the sequence, the `VideoFrame` is destroyed and the sequence starts again for the next frame.
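The chaining described above can be sketched in C++ as follows. This is a hypothetical, much-simplified illustration of the `Task`/`Worker` pattern, not the demo's actual implementation: the real classes also handle threading, synchronization, and inference, and the names `trace` and `processFrame` are invented for this sketch.

```cpp
#include <functional>
#include <memory>
#include <queue>
#include <string>
#include <utility>

// Simplified stand-in for the demo's VideoFrame: the image plus,
// here, a trace of which stages touched it.
struct VideoFrame {
    int id = 0;
    std::string trace;
};

class Worker;

// A Task bundles the frame it works on with the action to perform.
struct Task {
    std::shared_ptr<VideoFrame> frame;
    std::function<void(Worker&, Task&)> process;
};

// The Worker drains a pool of Tasks; a running Task may submit the
// next Task of the sequence for the same frame.
class Worker {
public:
    void submit(Task t) { pool_.push(std::move(t)); }
    void runToCompletion() {
        while (!pool_.empty()) {
            Task t = std::move(pool_.front());
            pool_.pop();
            t.process(*this, t);
        }
    }
private:
    std::queue<Task> pool_;
};

// Chains the five stages named above for one frame and returns the
// order in which they ran. The frame is destroyed automatically when
// the last Task holding its shared_ptr finishes.
inline std::string processFrame(int frameId) {
    auto frame = std::make_shared<VideoFrame>();
    frame->id = frameId;

    // Each stage records its name, then submits the next stage (if any).
    auto stage = [](const char* name,
                    std::function<void(Worker&, Task&)> next) {
        return [name, next](Worker& w, Task& t) {
            if (!t.frame->trace.empty()) t.frame->trace += ",";
            t.frame->trace += name;
            if (next) w.submit(Task{t.frame, next});
        };
    };

    Worker worker;
    auto drawer     = stage("Drawer", nullptr);
    auto aggregator = stage("ResAggregator", drawer);
    auto extractor  = stage("RectExtractor", aggregator);
    auto infer      = stage("InferTask", extractor);
    auto reader     = stage("Reader", infer);
    worker.submit(Task{frame, reader});
    worker.runToCompletion();
    return frame->trace;  // e.g. "Reader,InferTask,..."
}
```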
NOTE: By default, Open Model Zoo demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to the When to Reverse Input Channels section of Converting a Model Using General Conversion Parameters.
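An illustrative Model Optimizer invocation might look like the following; the model path is a placeholder, and your installation may expose the tool as `mo` or `mo.py` depending on the version.

```shell
# Hypothetical example: reconvert a model with reversed input channels.
python3 mo.py --input_model <path_to_model>/model.caffemodel --reverse_input_channels
```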
Running the application with the `-h` option yields the following usage message:
Running the application with an empty list of options yields an error message.
To run the demo, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO Model Downloader or go to https://download.01.org/opencv/.
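For instance, the detection model used by this demo can be fetched with the Model Downloader, roughly as follows (run from the downloader's directory; the script location varies by OpenVINO version):

```shell
# Illustrative Model Downloader call for one of the demo's models.
python3 downloader.py --name vehicle-license-plate-detection-barrier-0106
```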
NOTE: Before running the demo with a trained model, make sure the model is converted to the Inference Engine format (*.xml + *.bin) using the Model Optimizer tool.
For example, to do inference on a GPU with the OpenVINO toolkit pre-trained models, run the following command:
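A sketch of such an invocation is below; the input path is a placeholder, and the optional-model flags (`-m_va`, `-m_lpr`) follow the demo's naming convention for the attributes and license plate networks, so check the `-h` output for the exact option names in your version.

```shell
# Hypothetical invocation; adjust paths to your environment.
./security_barrier_camera_demo -i <path_to_video>/input.mp4 \
    -m vehicle-license-plate-detection-barrier-0106.xml \
    -m_va vehicle-attributes-recognition-barrier-0039.xml \
    -m_lpr license-plate-recognition-barrier-0001.xml \
    -d GPU
```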
To do inference for two video inputs using two asynchronous infer requests on FPGA with the OpenVINO toolkit pre-trained models, run the following command:
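A sketch of such a command is shown below; the input paths are placeholders, and the `-nireq` (number of infer requests) flag is an assumption based on the demo's typical CLI, so verify the option names with `-h`.

```shell
# Hypothetical invocation; exact option names may differ (see -h).
./security_barrier_camera_demo -i <path_to_video_1> <path_to_video_2> \
    -m vehicle-license-plate-detection-barrier-0106.xml \
    -nireq 2 \
    -d HETERO:FPGA,CPU
```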
NOTE: For the `-tag` option (HDDL plugin only), you must specify the number of VPUs for each network in the `hddl_service.config` file located in the `<INSTALL_DIR>/deployment_tools/inference_engine/external/hddl/config/` directory using the following tags:

* `tagDetect` for the Vehicle and License Plate Detection network
* `tagAttr` for the Vehicle Attributes Recognition network
* `tagLPR` for the License Plate Recognition network

For example, to run the sample on one Intel® Vision Accelerator Design with Intel® Movidius™ VPUs Compact R card with eight Intel® Movidius™ X VPUs:

```json
"service_settings": {
    "graph_tag_map": {"tagDetect": 6, "tagAttr": 1, "tagLPR": 1}
}
```
If you build the Inference Engine with OpenMP, you can use the following parameters for heterogeneous scenarios:

* `OMP_NUM_THREADS`: Specifies the number of threads to use. For heterogeneous scenarios with FPGA, when several inference requests are used asynchronously, limiting the number of CPU threads with `OMP_NUM_THREADS` helps avoid competition for resources between threads. For the Security Barrier Camera Demo, the recommended value is `OMP_NUM_THREADS=1`.
* `KMP_BLOCKTIME`: Sets the time, in milliseconds, that a thread should wait after completing the execution of a parallel region before sleeping. The default value is 200 ms, which is not optimal for the demo. The recommended value is `KMP_BLOCKTIME=1`.

The demo uses OpenCV to display the resulting frame with detections rendered as bounding boxes and text.
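With the recommended values above, the environment variables can be set before launching the demo, for example:

```shell
# Recommended OpenMP settings for this demo (from the text above).
export OMP_NUM_THREADS=1
export KMP_BLOCKTIME=1
```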
NOTE: On VPU devices (Intel® Movidius™ Neural Compute Stick, Intel® Neural Compute Stick 2, and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs), this demo has been tested on the following topologies available via the Model Downloader:

* `license-plate-recognition-barrier-0001`
* `vehicle-attributes-recognition-barrier-0039`
* `vehicle-license-plate-detection-barrier-0106`
Other models may produce unexpected results on these devices.