# human-pose-estimation-0006

## Use Case and High-Level Description
This is a multi-person 2D pose estimation network based on the EfficientHRNet approach (that follows the Associative Embedding framework). For every person in an image, the network detects a human pose: a body skeleton consisting of keypoints and connections between them. The pose may contain up to 17 keypoints: ears, eyes, nose, shoulders, elbows, wrists, hips, knees, and ankles.
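For reference, the sketch below lists a keypoint index mapping, assuming the network follows the standard COCO keypoint ordering; this ordering is an assumption not stated above and should be verified against the accompanying demo code.

```python
# Assumed COCO keypoint ordering (17 keypoints). This mapping is illustrative;
# confirm the exact index order against the model's demo/post-processing code.
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]
```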
## Example

## Specification

| Metric                  | Value    |
|-------------------------|----------|
| Average Precision (AP)  | 51.1%    |
| GFlops                  | 8.844    |
| MParams                 | 8.1506   |
| Source framework        | PyTorch* |
The Average Precision metric is described on the COCO Keypoint Evaluation site.
## Inputs

Image, name: `input`, shape: `1, 3, 352, 352` in the `B, C, H, W` format, where:

- `B` - batch size
- `C` - number of channels
- `H` - image height
- `W` - image width

Expected color order is `BGR`. A minimal preprocessing sketch is shown below.
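The following sketch prepares an image for this input with OpenCV and NumPy. The file name is illustrative, and a plain resize is used for brevity; a real pipeline might pad the image to preserve aspect ratio.

```python
import cv2
import numpy as np

# Load an image; OpenCV returns pixels in BGR order, matching the expected input.
# "person.jpg" is a placeholder path for illustration.
image = cv2.imread("person.jpg")

# Resize to the network's spatial size (height=352, width=352).
resized = cv2.resize(image, (352, 352))

# HWC -> CHW, then add the batch dimension: final shape is (1, 3, 352, 352).
input_blob = np.expand_dims(resized.transpose(2, 0, 1), 0).astype(np.float32)
```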
## Outputs

The net outputs are two blobs:

- `heatmaps` of shape `1, 17, 176, 176`, containing location heatmaps for keypoints of all types. Locations that are filtered out by the non-maximum suppression algorithm have negated values assigned to them.
- `embeddings` of shape `1, 17, 176, 176, 1`, containing associative embedding values, which are used for grouping individual keypoints into poses.
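A simplified post-processing sketch follows, assuming the two outputs have already been fetched from the inference runtime as NumPy arrays. Full multi-person decoding with associative embeddings is more involved; this example only extracts the single strongest candidate per keypoint type and its embedding tag.

```python
import numpy as np

def top_candidates(heatmaps: np.ndarray, embeddings: np.ndarray):
    """Pick the highest-scoring location for each keypoint type.

    heatmaps:   shape (1, 17, 176, 176); NMS-suppressed locations carry negated values
    embeddings: shape (1, 17, 176, 176, 1); tag values used to group keypoints into poses
    """
    candidates = []
    for k in range(heatmaps.shape[1]):
        hm = heatmaps[0, k]
        # Location of the strongest response for keypoint type k.
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        score = hm[y, x]
        tag = embeddings[0, k, y, x, 0]
        # Keypoints with similar tag values are grouped into the same person.
        candidates.append((x, y, score, tag))
    return candidates
```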
## Legal Information
[*] Other names and brands may be claimed as the property of others.