# higher-hrnet-w32-human-pose-estimation

## Use Case and High-Level Description

The HigherHRNet-W32 model is a member of the HigherHRNet family. HigherHRNet is a novel bottom-up human pose estimation method for learning scale-aware representations using high-resolution feature pyramids. The network uses HRNet as its backbone, followed by one or more deconvolution modules that generate multi-resolution and high-resolution heatmaps. For every person in an image, the network detects a human pose: a body skeleton consisting of keypoints and the connections between them. A pose may contain up to 17 keypoints: ears, eyes, nose, shoulders, elbows, wrists, hips, knees, and ankles. This is a PyTorch* implementation pre-trained on the COCO dataset. For details about the implementation of the model, check out the HigherHRNet: Scale-Aware Representation Learning for Bottom-Up Human Pose Estimation repository.

## Specification

| Metric           | Value                 |
| ---------------- | --------------------- |
| Type             | Human pose estimation |
| GFLOPs           | 92.8364               |
| MParams          | 28.6180               |
| Source framework | PyTorch*              |

## Accuracy

| Metric                 | Original model | Converted model |
| ---------------------- | -------------- | --------------- |
| Average Precision (AP) | 64.64%         | 64.64%          |

The model was tested on the COCO dataset, val2017 split. These are the accuracy-check results for single-pass inference (without the horizontal image flip that the original repository uses by default).

## Input

### Original Model

Image, name - `image`, shape - `1, 3, 512, 512`, format is `B, C, H, W`, where:

- `B` - batch size
- `C` - number of channels
- `H` - image height
- `W` - image width

Channel order is RGB. Mean values - [123.675, 116.28, 103.53], scale values - [58.395, 57.12, 57.375].
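As a minimal sketch, the normalization above can be applied with NumPy; `preprocess_original` is a hypothetical helper (not part of the model package), and it assumes the image has already been resized to 512 x 512 and converted to RGB:

```python
import numpy as np

# Per-channel mean and scale values from the model description above.
MEAN = np.array([123.675, 116.28, 103.53], dtype=np.float32)
SCALE = np.array([58.395, 57.12, 57.375], dtype=np.float32)

def preprocess_original(rgb_image):
    """rgb_image: 512 x 512 x 3 uint8 array in RGB channel order."""
    x = (rgb_image.astype(np.float32) - MEAN) / SCALE  # normalize per channel
    x = x.transpose(2, 0, 1)                           # HWC -> CHW
    return x[np.newaxis, ...]                          # add batch dim -> 1, 3, 512, 512
```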

### Converted Model

Image, name - `image`, shape - `1, 3, 512, 512`, format is `B, C, H, W`, where:

- `B` - batch size
- `C` - number of channels
- `H` - image height
- `W` - image width

Channel order is BGR.
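Note that the converted model lists no mean or scale values, which suggests the normalization has been folded into the IR during conversion. A corresponding sketch (the helper name is hypothetical) would therefore only reorder the layout:

```python
import numpy as np

def preprocess_converted(bgr_image):
    """bgr_image: 512 x 512 x 3 uint8 array in BGR order (e.g. from cv2.imread)."""
    x = bgr_image.astype(np.float32).transpose(2, 0, 1)  # HWC -> CHW; no normalization
    return x[np.newaxis, ...]                            # add batch dim -> 1, 3, 512, 512
```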

## Output

The net outputs two blobs:

- `heatmaps` of shape `1, 17, 256, 256`, containing location heatmaps for the pose keypoints. Locations filtered out by the non-maximum suppression algorithm have negated values assigned to them.

- `embeddings` of shape `1, 17, 256, 256`, containing associative embedding values, which are used for grouping individual keypoints into poses.
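To illustrate how the two blobs relate, here is a deliberately simplified, single-person decoding sketch: it takes the strongest response per keypoint channel and reads the embedding tag at that location. The real grouping in the HigherHRNet repository matches tags across keypoints and people (e.g. via the Munkres algorithm); `decode_single_pose` is a hypothetical helper, not the repository's decoder.

```python
import numpy as np

def decode_single_pose(heatmaps, embeddings):
    """heatmaps, embeddings: arrays of shape 1, 17, 256, 256.

    Returns one (x, y, score, tag) tuple per keypoint channel.
    """
    keypoints = []
    for k in range(heatmaps.shape[1]):
        hm = heatmaps[0, k]
        y, x = np.unravel_index(np.argmax(hm), hm.shape)  # peak location in the channel
        tag = embeddings[0, k, y, x]                       # grouping tag at the peak
        keypoints.append((int(x), int(y), float(hm[y, x]), float(tag)))
    return keypoints
```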

## Download a Model and Convert it into OpenVINO™ IR Format

You can download models and, if necessary, convert them into OpenVINO™ IR format using the Model Downloader and other automation tools, as shown in the examples below.

An example of using the Model Downloader:

```sh
omz_downloader --name <model_name>
```

An example of using the Model Converter:

```sh
omz_converter --name <model_name>
```
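For this particular model, substituting its name from the title above, the commands would presumably be:

```shell
omz_downloader --name higher-hrnet-w32-human-pose-estimation
omz_converter --name higher-hrnet-w32-human-pose-estimation
```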

## Demo usage

The model can be used in the following demos provided by the Open Model Zoo to show its capabilities: