A human gesture recognition model trained for the Jester* dataset recognition scenario (gesture-level recognition). The model uses an S3D framework with a MobileNet V3 backbone. Please refer to the Jester* dataset specification for the list of gestures recognized by this model.

The model accepts a stack of 8 frames sampled at a constant frame rate (15 FPS) and produces a prediction for the input clip.
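Frame sampling for such a clip can be sketched as follows. This is an illustrative helper, not part of the model's tooling; `sample_clip_indices` and its parameters are hypothetical names:

```python
import numpy as np

def sample_clip_indices(num_video_frames, video_fps, clip_len=8, target_fps=15):
    """Pick `clip_len` frame indices spaced for a constant `target_fps` rate."""
    stride = max(1, round(video_fps / target_fps))  # frames skipped between samples
    indices = np.arange(clip_len) * stride
    # Clamp to the last available frame for videos shorter than the clip.
    return np.minimum(indices, num_video_frames - 1)

# A 30 FPS video: taking every 2nd frame yields an effective 15 FPS clip.
indices = sample_clip_indices(100, 30)
```

For a 30 FPS source this selects frames 0, 2, 4, ..., 14; for a 15 FPS source it selects 8 consecutive frames.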
| Metric                              | Value    |
|-------------------------------------|----------|
| Top-1 accuracy (continuous Jester*) | 93.58%   |
| GFlops                              | 4.2269   |
| MParams                             | 4.1128   |
| Source framework                    | PyTorch* |
Batch of images of the shape [1x3x8x224x224] in the [BxCxTxHxW] format, where:

- B - batch size
- C - channel
- T - sequence length
- H - height
- W - width

Channel order is RGB.
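The input layout above can be assembled with NumPy, for example. This is a sketch; the `frames` list is a hypothetical stand-in for real preprocessed RGB frames:

```python
import numpy as np

# Hypothetical stack of 8 RGB frames, each already resized to 224x224 (HWC, uint8).
frames = [np.zeros((224, 224, 3), dtype=np.uint8) for _ in range(8)]

# THWC -> CTHW, then prepend the batch dimension to get [BxCxTxHxW].
clip = np.stack(frames)                  # shape (8, 224, 224, 3)
blob = clip.transpose(3, 0, 1, 2)[None]  # shape (1, 3, 8, 224, 224)
```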
The model outputs a tensor of the shape [Bx27]; each row is a vector of logits over the 27 Jester* gestures.
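The logits can be converted to gesture probabilities with a softmax. A minimal NumPy sketch, using a random tensor in place of real model output:

```python
import numpy as np

# Hypothetical [1, 27] logits tensor standing in for real model output.
logits = np.random.default_rng(0).normal(size=(1, 27))

# Numerically stable softmax over the class axis.
exp = np.exp(logits - logits.max(axis=1, keepdims=True))
probs = exp / exp.sum(axis=1, keepdims=True)

# Index of the most likely of the 27 Jester gestures.
gesture_id = int(probs.argmax(axis=1)[0])
```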
Blob of the shape [1, 27] in the [BxC] format, where:

- B - batch size
- C - predicted logits size

You can download models and, if necessary, convert them into Inference Engine format using the Model Downloader and other automation tools, as shown in the examples below.
An example of using the Model Downloader:
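The original command listing was not preserved here; as a sketch, a standard Open Model Zoo downloader invocation looks like the following, with `<model_name>` as a placeholder for this model's identifier:

```shell
omz_downloader --name <model_name>
```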
An example of using the Model Converter:
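Likewise, a sketch of the converter invocation, again with `<model_name>` as a placeholder for this model's identifier:

```shell
omz_converter --name <model_name>
```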
[*] Other names and brands may be claimed as the property of others.
The original model is distributed under the Apache License 2.0.