This demo shows how to run gesture recognition models (for example, models for American Sign Language (ASL) gestures) using the OpenVINO™ toolkit.
The demo application expects a gesture recognition model in the Intermediate Representation (IR) format.
As input, the demo application takes:
- a path to a video file or a webcam device node, specified with the `--input` argument
- a path to a JSON file with gesture class names, specified with the `--class_map` argument
The demo workflow is the following:
1. The person detection model finds people in each frame, and each detection gets an ID that is used for tracking.
2. The gesture recognition model is run on the selected person's region and predicts the gesture class.
3. The recognized gesture and current inference performance are rendered on top of the input frame.
NOTE: By default, Open Model Zoo demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to the When to Reverse Input Channels section of Converting a Model Using General Conversion Parameters.
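As an illustration, a reconversion with reversed channels could look roughly like the sketch below; the model path is a placeholder and the exact Model Optimizer options depend on your model and framework:

```sh
# Sketch only: reconvert a model to IR with the input channel order reversed.
# <your_model>.onnx is a placeholder; a real conversion may need additional
# options (input shape, mean/scale values, etc.).
mo --input_model <your_model>.onnx --reverse_input_channels
```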
Run the application with the `-h` option to see the usage message.
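For example, assuming the demo's entry point is named gesture_recognition_demo.py (the script name is not stated in this section):

```sh
# Print the full list of supported command-line options
python3 gesture_recognition_demo.py -h
```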
Running the application with an empty list of options yields the short version of the usage message and an error message.
To run the demo, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO Model Downloader. The list of models supported by the demo is in the `models.lst` file in the demo's directory.
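A typical download step looks roughly like this (assuming the omz_downloader tool from the openvino-dev package is installed and the command is run from the demo's directory):

```sh
# Download every model supported by the demo, as listed in models.lst
omz_downloader --list models.lst
```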
NOTE: Before running the demo with a trained model, make sure the model is converted to the Inference Engine format (`*.xml` + `*.bin`) using the Model Optimizer tool.
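For models downloaded in a non-IR format, a conversion step along these lines can be used (a sketch, assuming the omz_converter tool from the openvino-dev package, which wraps the Model Optimizer for Open Model Zoo models):

```sh
# Convert downloaded non-IR models to IR (*.xml + *.bin)
omz_converter --list models.lst
```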
To run the demo, provide paths to the gesture recognition and person detection models in the IR format, to a file with class names, and to an input video.
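A minimal sketch of such an invocation is shown below. The option names `-m_a` (gesture recognition model), `-m_d` (person detection model), `-c` (class map), and `-i` (input) are assumptions not spelled out in this section, so check the `-h` output; all paths are placeholders.

```sh
python3 gesture_recognition_demo.py \
    -m_a <path_to_gesture_recognition_model>.xml \
    -m_d <path_to_person_detection_model>.xml \
    -c <path_to_class_map>.json \
    -i <path_to_input_video>
```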
The demo starts in person tracking mode. To switch it to the action recognition mode, press a 0-9 key matching the detection ID of the person (the number in the top-left corner of each bounding box). If the frame contains only one person, they are chosen automatically. After that, you can switch back to tracking mode by pressing the space key.
An example of a file with class names can be found here.
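For illustration only, a class map could look like the sketch below; the exact structure of the file shipped with the demo may differ, so treat this as an assumption rather than the demo's real class list:

```sh
# Hypothetical minimal class map: a flat JSON list of gesture names
cat > classes.json << 'EOF'
["hello", "thanks", "please", "yes", "no"]
EOF
```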
NOTE: To run the demo application with video examples of gestures, specify the `-s` key with a valid path to the directory with video samples (you can find some ASL gesture video samples here). The name of each video sample should be a valid gesture name from the `./classes.json` file. To navigate between samples, use the 'f' and 'b' keys to go to the next and previous video sample, respectively. A sketch of such an invocation is shown after this note.
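Building on the assumed invocation above (option names and paths remain placeholders and assumptions), the samples directory could be passed like this:

```sh
# Same run as before, but with a directory of gesture video samples attached
python3 gesture_recognition_demo.py \
    -m_a <path_to_gesture_recognition_model>.xml \
    -m_d <path_to_person_detection_model>.xml \
    -c <path_to_class_map>.json \
    -i <path_to_input_video> \
    -s <path_to_gesture_samples_dir>
```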
The application uses OpenCV to display the gesture recognition results and the current inference performance.