The demo shows an example of joint usage of several neural networks to detect three basic actions (sitting, standing, raising hand) and recognize people by their faces in a classroom environment. The demo uses the Async API for the action and face detection networks, which makes it possible to parallelize execution of face recognition and detection: while face recognition is running on one accelerator, face and action detection can be performed on another. You can use a set of the following pre-trained models with the demo:
* face-detection-adas-0001, which is a primary detection network for finding faces.
* landmarks-regression-retail-0009, which is executed on top of the results from the first network and outputs a vector of facial landmarks for each detected face.
* face-reidentification-retail-0095, which is executed on top of the results from the first network and outputs a vector of features for each detected face.
* person-detection-action-recognition-0005, which is a detection network for finding persons and simultaneously predicting their current actions.
* person-detection-action-recognition-teacher-0002, which is a detection network for finding persons and simultaneously predicting their current actions.

For more information about the pre-trained models, refer to the [Open Model Zoo](https://github.com/opencv/open_model_zoo/blob/master/intel_models/index.md) repository on GitHub*.
On startup, the application reads command line parameters and loads four networks to the Inference Engine for execution on different devices, depending on the -m... family of options. Upon getting a frame from the OpenCV VideoCapture, it performs inference of the Face Detection and Action Detection networks. After that, the ROIs obtained by the Face Detector are fed to the Facial Landmarks Regression network. The landmarks are then used to align faces by an affine transform before feeding them to the Face Recognition network. The recognized faces are matched with the detected actions to find an action for each recognized person in every frame.
NOTE: By default, Inference Engine samples and demos expect input with BGR channel order. If you trained your model to work with RGB order, you need to manually rearrange the default channel order in the sample or demo application or reconvert your model using the Model Optimizer tool with the --reverse_input_channels argument specified. For more information about the argument, refer to the When to Reverse Input Channels section of Converting a Model Using General Conversion Parameters.
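For example, a model originally trained on RGB input could be reconverted as follows; this is a sketch, where the model file name and the location of mo.py are placeholders, and only the --input_model and --reverse_input_channels arguments are assumed:

```sh
# Reconvert a model so the generated IR expects BGR input
# (the model path below is a placeholder).
python3 mo.py --input_model <path_to_model>/model.pb --reverse_input_channels
```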
To recognize faces on a frame, the demo needs a gallery of reference images. Each image should contain a tight crop of a face. You can create the gallery from an arbitrary list of images:
1. Put the face crops into a separate folder and name the images as id_name.0.png, id_name.1.png, ....
2. Run the create_list.py <path_to_folder_with_images> command to get a list of files and identities in .json format (see the sketch below).
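A minimal sketch of the second step, assuming create_list.py is invoked from the demo directory and the folder path is a placeholder:

```sh
# Build a .json list of gallery files and identities
# (run from the directory that contains create_list.py).
python3 create_list.py <path_to_folder_with_images>
```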
Running the application with the -h option yields the following usage message:
Running the application with an empty list of options yields the usage message given above and an error message.
To run the demo, you can use public models or the pre-trained models listed above. To download the pre-trained models, use the OpenVINO Model Downloader or go to https://download.01.org/opencv/.
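For example, a single model from the list above could be fetched with the Model Downloader like this; the location of downloader.py and the output directory are assumptions:

```sh
# Download one of the required pre-trained models
# (the output directory is a placeholder).
python3 downloader.py --name face-detection-adas-0001 -o <models_dir>
```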
NOTE: Before running the demo with a trained model, make sure the model is converted to the Inference Engine format (*.xml + *.bin) using the Model Optimizer tool.
NOTE: To recognize actions of students, use the person-detection-action-recognition-0005 model.

Example of a valid command line to run the application with pre-trained models for recognizing students' actions:
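The following is a sketch of such a command line, assuming the demo binary is named smart_classroom_demo and that the option names match those reported by the -h output (-m_act, -m_fd, -m_lm, -m_reid, -fg, and -i are assumptions here; all paths are placeholders):

```sh
# Run the demo in the student action recognition scenario
# (option names and paths are assumptions/placeholders).
./smart_classroom_demo -m_act <path_to_model>/person-detection-action-recognition-0005.xml \
                       -m_fd <path_to_model>/face-detection-adas-0001.xml \
                       -m_lm <path_to_model>/landmarks-regression-retail-0009.xml \
                       -m_reid <path_to_model>/face-reidentification-retail-0095.xml \
                       -fg <path_to_faces_gallery.json> \
                       -i <path_to_video>
```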
NOTE: To recognize actions of a teacher, use the person-detection-action-recognition-teacher-0002 model.

Example of a valid command line to run the application for recognizing actions of a teacher:
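A sketch of the teacher scenario under the same assumptions as above; the -teacher_id option, used here to select the teacher's identity from the face gallery, is also an assumption, so check the -h output for the exact name:

```sh
# Run the demo in the teacher action recognition scenario
# (option names and paths are assumptions/placeholders).
./smart_classroom_demo -m_act <path_to_model>/person-detection-action-recognition-teacher-0002.xml \
                       -m_fd <path_to_model>/face-detection-adas-0001.xml \
                       -m_lm <path_to_model>/landmarks-regression-retail-0009.xml \
                       -m_reid <path_to_model>/face-reidentification-retail-0095.xml \
                       -fg <path_to_faces_gallery.json> \
                       -teacher_id <teacher_id_from_gallery> \
                       -i <path_to_video>
```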
The demo uses OpenCV to display the resulting frame with labeled actions and faces.