# Human Pose Estimation C++ Demo
This demo showcases the work of a multi-person 2D pose estimation algorithm. The task is to predict a pose, a body skeleton consisting of keypoints and the connections between them, for every person in an input video. The pose may contain up to 18 keypoints: ears, eyes, nose, neck, shoulders, elbows, wrists, hips, knees, and ankles. Some potential use cases of the algorithm are action recognition and behavior understanding.
Other demo objectives are:
- Video/Camera as inputs, via OpenCV*
- Visualization of all estimated poses
## How It Works
On startup, the application reads command line parameters and loads the human pose estimation model. Upon getting a frame from the OpenCV VideoCapture, the input frame height is scaled to the model height, and the frame width is scaled to preserve the initial aspect ratio and padded to a multiple of 8. The application then executes the human pose estimation algorithm and displays the results.
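The following sketch illustrates this resize-and-pad preprocessing step with OpenCV. It is not the demo's actual code: the function name and the default model input height of 256 are hypothetical, and the real value comes from the loaded network.

```cpp
#include <opencv2/opencv.hpp>

// Minimal sketch of the preprocessing described above. The function name and
// the default model input height (256) are hypothetical; in the demo, the
// height is taken from the loaded model.
cv::Mat preprocess(const cv::Mat& frame, int modelHeight = 256) {
    // Scale the frame so that its height matches the model input height,
    // preserving the original aspect ratio.
    double scale = static_cast<double>(modelHeight) / frame.rows;
    cv::Mat resized;
    cv::resize(frame, resized, cv::Size(), scale, scale);

    // Pad the width on the right up to the next multiple of 8.
    int paddedWidth = (resized.cols + 7) / 8 * 8;
    cv::Mat padded;
    cv::copyMakeBorder(resized, padded, 0, 0, 0, paddedWidth - resized.cols,
                       cv::BORDER_CONSTANT, cv::Scalar(0, 0, 0));
    return padded;
}
```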
By default, Open Model Zoo demos expect input with BGR channel order. If you trained your model to work with RGB order, you need to manually rearrange the default channel order in the demo application or reconvert your model using the Model Optimizer tool with the --reverse_input_channels argument specified. For more information about the argument, refer to the When to Reverse Input Channels section of Converting a Model Using General Conversion Parameters.
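For example, a Model Optimizer invocation with this argument could look as follows (the model path is a placeholder, and the location of mo.py depends on your OpenVINO installation):

python3 mo.py --input_model <path_to_model>/model.onnx --reverse_input_channels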
## Preparing to Run
For demo input image or video files, refer to the section Media Files Available for Demos in the Open Model Zoo Demos Overview. The list of models supported by the demo is in the <omz_dir>/demos/human_pose_estimation_demo/cpp/models.lst file. This file can be used as a parameter for Model Downloader and Converter to download and, if necessary, convert models to the OpenVINO Inference Engine format (*.xml + *.bin).
An example of using the Model Downloader:
python3 <omz_dir>/tools/downloader/downloader.py --list models.lst
An example of using the Model Converter:
python3 <omz_dir>/tools/downloader/converter.py --list models.lst
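Alternatively, to download only a specific model, such as the one used in the run example below, the downloader's --name option can be used:

python3 <omz_dir>/tools/downloader/downloader.py --name human-pose-estimation-0001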
The demo supports models with the following architecture types, selected with the -at option:

- architecture_type = openpose
- architecture_type = ae
- architecture_type = higherhrnet
Running the application with the -h option yields the following usage message:
```
InferenceEngine:
    API version ............ <version>
    Build .................. <number>

human_pose_estimation_demo [OPTION]
Options:

    -h                        Print a usage message.
    -at "<type>"              Required. Type of the network, either 'ae' for Associative Embedding, 'higherhrnet' for HigherHRNet models based on ae or 'openpose' for OpenPose.
    -i                        Required. An input to process. The input must be a single image, a folder of images, video file or camera id.
    -m "<path>"               Required. Path to an .xml file with a trained model.
    -o "<path>"               Optional. Name of the output file(s) to save.
    -limit "<num>"            Optional. Number of frames to store in output. If 0 is set, all frames are stored.
    -tsize                    Optional. Target input size.
    -l "<absolute_path>"      Required for CPU custom layers. Absolute path to a shared library with the kernel implementations.
          Or
    -c "<absolute_path>"      Required for GPU custom kernels. Absolute path to the .xml file with the kernel descriptions.
    -d "<device>"             Optional. Specify the target device to infer on (the list of available devices is shown below). Default value is CPU. Use "-d HETERO:<comma-separated_devices_list>" format to specify HETERO plugin. The demo will look for a suitable plugin for a specified device.
    -pc                       Optional. Enables per-layer performance report.
    -t                        Optional. Probability threshold for poses filtering.
    -nireq "<integer>"        Optional. Number of infer requests. If this option is omitted, number of infer requests is determined automatically.
    -nthreads "<integer>"     Optional. Number of threads.
    -nstreams                 Optional. Number of streams to use for inference on the CPU or/and GPU in throughput mode (for HETERO and MULTI device cases use format <device1>:<nstreams1>,<device2>:<nstreams2> or just <nstreams>)
    -loop                     Optional. Enable reading the input in a loop.
    -no_show                  Optional. Don't show output.
    -output_resolution        Optional. Specify the maximum output window resolution in (width x height) format. Example: 1280x720. Input frame size used by default.
    -u                        Optional. List of monitors to show initially.
```
Running the application with an empty list of options yields an error message.
For example, to do inference on a CPU, run the following command:
./human_pose_estimation_demo -i <path_to_video>/input_video.mp4 -m <path_to_model>/human-pose-estimation-0001.xml -d CPU -at openpose
> NOTE: If you provide a single image as an input, the demo processes and renders it quickly, then exits. To continuously visualize inference results on the screen, apply the -loop option, which enforces processing a single image in a loop.
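For example, to keep displaying results for a single image (the image file name is illustrative):

./human_pose_estimation_demo -i <path_to_image>/image.jpg -loop -m <path_to_model>/human-pose-estimation-0001.xml -at openpose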
You can save processed results to a Motion JPEG AVI file or separate JPEG or PNG files using the -o option:

- To save processed results in an AVI file, specify the name of the output file with avi extension, for example: -o output.avi.
- To save processed results as images, specify the template name of the output image file with jpg or png extension, for example: -o output_%03d.jpg. The actual file names are constructed from the template at runtime by replacing the regular expression %03d with the frame number, resulting in output_000.jpg, output_001.jpg, and so on.

To avoid disk space overrun in case of a continuous input stream, like a camera, you can limit the amount of data stored in the output file(s) with the -limit option. The default value is 1000. To change it, apply the -limit N option, where N is the number of frames to store.
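For example, the following command saves the results to an AVI file and stops writing after 300 frames (the output file name and limit value are illustrative):

./human_pose_estimation_demo -i <path_to_video>/input_video.mp4 -m <path_to_model>/human-pose-estimation-0001.xml -at openpose -o output.avi -limit 300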
> NOTE: Windows* systems may not have the Motion JPEG codec installed by default. If this is the case, you can download the OpenCV FFMPEG back end using the PowerShell script provided with the OpenVINO install package and located at <INSTALL_DIR>/opencv/ffmpeg-download.ps1. The script should be run with administrative privileges if OpenVINO is installed in a system protected folder (this is a typical case). Alternatively, you can save results as images.
The demo uses OpenCV to display the resulting frame with the estimated poses and a text report of FPS (frames per second) performance.