This demo showcases Object Detection with the Async API. Using the Async API can improve the overall frame rate of the application: rather than waiting for inference to complete, the application can continue doing work on the host while the accelerator is busy. Specifically, this demo keeps the number of Infer Requests that you have specified with the `-nireq` flag. While some of the Infer Requests are processed by the Inference Engine, the other ones can be filled with new frame data and started asynchronously, or their completed output can be taken from the Infer Request and displayed.
NOTE: This topic describes usage of C++ implementation of the Object Detection Demo Async API.
The technique can be generalized to any available parallel slack, for example, doing inference and simultaneously encoding the resulting (previous) frames, or running further inference, like emotion detection on top of the face detection results. There are important performance caveats though; for example, tasks that run in parallel should try to avoid oversubscribing shared compute resources. If the inference is performed on the FPGA and the CPU is essentially idle, then it makes sense to do things on the CPU in parallel. But if the inference is performed, say, on the GPU, then there is little gain in doing the (resulting video) encoding on the same GPU in parallel, because the device is already busy.
This and other performance implications and tips for the Async API are covered in the Optimization Guide.
Other demo objectives are:

* Video as input support via OpenCV
* Visualization of the resulting bounding boxes and text labels (from the file specified with the `-labels` option) or class number (if no file is provided)

On start-up, the application reads command-line parameters and loads a network to the Inference Engine. Upon getting a frame from the OpenCV VideoCapture, it performs inference and displays the results.
NOTE: By default, Open Model Zoo demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to the When to Reverse Input Channels section of Converting a Model Using General Conversion Parameters.
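For instance, a Model Optimizer invocation with this argument might look like the following (the script location and model path are placeholders for your installation):

```sh
python3 mo.py --input_model <path_to_model>/model.onnx --reverse_input_channels
```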
This demo operates in an asynchronous manner by using "Infer Requests" that encapsulate the inputs/outputs and separate scheduling from waiting for results, as shown in the code mockup below:
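The following is a minimal sketch of the double-buffered pattern rather than the demo's actual source: the model path, the input source, and the `frameToBlob` helper are placeholders, and an NCHW U8 input layout is assumed.

```cpp
#include <string>
#include <utility>

#include <inference_engine.hpp>
#include <opencv2/opencv.hpp>

using namespace InferenceEngine;

// Hypothetical helper: resize a BGR frame and copy it into the request's
// NCHW U8 input blob (the real demo has its own conversion utilities).
static void frameToBlob(const cv::Mat& frame, InferRequest& request,
                        const std::string& inputName) {
    Blob::Ptr blob = request.GetBlob(inputName);
    const auto dims = blob->getTensorDesc().getDims();  // {N, C, H, W}
    const size_t C = dims[1], H = dims[2], W = dims[3];
    cv::Mat resized;
    cv::resize(frame, resized, cv::Size(static_cast<int>(W), static_cast<int>(H)));
    auto holder = as<MemoryBlob>(blob)->wmap();
    auto* data = holder.as<uint8_t*>();
    for (size_t c = 0; c < C; ++c)
        for (size_t h = 0; h < H; ++h)
            for (size_t w = 0; w < W; ++w)
                data[c * H * W + h * W + w] =
                    resized.at<cv::Vec3b>(static_cast<int>(h), static_cast<int>(w))[c];
}

int main() {
    Core ie;
    CNNNetwork network = ie.ReadNetwork("model.xml");  // placeholder model path
    const std::string inputName = network.getInputsInfo().begin()->first;
    network.getInputsInfo().begin()->second->setPrecision(Precision::U8);
    ExecutableNetwork execNetwork = ie.LoadNetwork(network, "CPU");

    // Two Infer Requests are enough to overlap host work with inference;
    // -nireq generalizes this to a pool of N requests.
    InferRequest curr = execNetwork.CreateInferRequest();
    InferRequest next = execNetwork.CreateInferRequest();

    cv::VideoCapture cap(0);  // placeholder input source
    cv::Mat frame;
    if (!cap.read(frame)) return 1;
    frameToBlob(frame, curr, inputName);
    curr.StartAsync();  // asynchronous: returns immediately

    while (cap.read(frame)) {
        // Fill and schedule the NEXT request while CURRENT is in flight.
        frameToBlob(frame, next, inputName);
        next.StartAsync();

        // The host is free to do other work here; block only when the
        // CURRENT result is actually needed.
        curr.Wait(IInferRequest::WaitMode::RESULT_READY);
        // ... parse the output blob of `curr` and render the detections ...

        std::swap(curr, next);  // swap CURRENT and NEXT requests
    }
    return 0;
}
```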
For more details on the requests-based Inference Engine API, including the Async execution, refer to Integrate the Inference Engine New Request API with Your Application.
Running the application with the `-h` option yields the following usage message:
Running the application with an empty list of options yields the usage message given above and an error message.
To run the demo, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO Model Downloader. The list of models supported by the demo is in the `models.lst` file in the demo's directory.
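For example, the supported models can be downloaded with the Model Downloader like this (the path to the downloader script depends on your OpenVINO/Open Model Zoo installation):

```sh
python3 <omz_dir>/tools/downloader/downloader.py --list models.lst
```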
NOTE: Before running the demo with a trained model, make sure the model is converted to the Inference Engine format (`*.xml` + `*.bin`) using the Model Optimizer tool.
If a labels file is used, it should correspond to the model output. The demo expects the labels listed in the file to be indexed from 0, one label per line (i.e., the very first line contains the label for ID 0). Note that some models may return label IDs in the range 1..N; in this case, the labels file should contain a "background" label on the very first line.
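For instance, a labels file for a model that reports IDs starting from 1 might begin like this (the class names are hypothetical):

```
background
person
car
bicycle
```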
You can use the following command to do inference on GPU with a pre-trained object detection model:
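For example (the executable name, input video, and model paths below are placeholders; check the demo's build output and usage message for the exact options):

```sh
./object_detection_demo -i <path_to_video>/input_video.mp4 -m <path_to_model>/model.xml -d GPU
```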
The demo uses OpenCV to display the resulting frame with detections (rendered as bounding boxes and labels, if provided). The demo reports: