MediaPipe Holistic Demo

This guide shows how to serve a MediaPipe graph with OpenVINO Model Server (OVMS).

It is an example of a graph that accepts mediapipe::ImageFrame as input.

The demo is based on the upstream MediaPipe holistic demo.

Prepare the server deployment

Clone the repository and enter the holistic_tracking demo directory:

git clone https://github.com/openvinotoolkit/model_server.git
cd model_server/demos/mediapipe/holistic_tracking

Run the preparation script, which downloads the models:

./prepare_server.sh

The models setup should look like this:

ovms
├── config_holistic.json
├── face_detection_short_range
│   └── 1
│       └── face_detection_short_range.tflite
├── face_landmark
│   └── 1
│       └── face_landmark.tflite
├── hand_landmark_full
│   └── 1
│       └── hand_landmark_full.tflite
├── hand_recrop
│   └── 1
│       └── hand_recrop.tflite
├── holistic_tracking.pbtxt
├── palm_detection_full
│   └── 1
│       └── palm_detection_full.tflite
├── pose_detection
│   └── 1
│       └── pose_detection.tflite
└── pose_landmark_full
    └── 1
        └── pose_landmark_full.tflite
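Assuming the layout above, a small sanity check can confirm that every model version directory and graph file is in place. The model names are taken from the tree; the helper function itself is a hypothetical convenience, not part of the demo:

```python
import os

# Model directories expected under the ovms/ root (from the tree above)
EXPECTED_MODELS = [
    "face_detection_short_range",
    "face_landmark",
    "hand_landmark_full",
    "hand_recrop",
    "palm_detection_full",
    "pose_detection",
    "pose_landmark_full",
]

def missing_files(root="ovms"):
    """Return the expected files that are absent under root."""
    missing = []
    for name in EXPECTED_MODELS:
        # Each model lives in <root>/<name>/1/<name>.tflite
        path = os.path.join(root, name, "1", name + ".tflite")
        if not os.path.isfile(path):
            missing.append(path)
    # The server config and the graph definition must also be present
    for extra in ("config_holistic.json", "holistic_tracking.pbtxt"):
        path = os.path.join(root, extra)
        if not os.path.isfile(path):
            missing.append(path)
    return missing
```

An empty return value means the deployment directory matches the expected layout.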

Pull the Latest Model Server Image

Pull the latest version of OpenVINO™ Model Server from Docker Hub:

docker pull openvino/model_server:latest

Run OpenVINO Model Server

docker run -d -v $PWD/mediapipe:/mediapipe -v $PWD/ovms:/models -p 9000:9000 openvino/model_server:latest --config_path /models/config_holistic.json --port 9000
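Before running the client, you may want to confirm that the container is reachable on port 9000. A minimal sketch using only the standard library (a hypothetical helper; it checks TCP connectivity only, not that OVMS actually loaded the graph):

```python
import socket

def wait_for_port(host="localhost", port=9000, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        # create_connection resolves the host and attempts a TCP handshake
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For a definitive check, inspect the container logs with `docker logs` and verify that the models and graph loaded without errors.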

Run the client application for holistic tracking (default demo)

pip install -r requirements.txt
# download a sample image for analysis
curl -kL -o girl.jpeg https://cdn.pixabay.com/photo/2019/03/12/20/39/girl-4051811_960_720.jpg
echo "girl.jpeg" > input_images.txt
# launch the client
python mediapipe_holistic_tracking.py --grpc_port 9000 --images_list input_images.txt
Running demo application.
Start processing:
        Graph name: holisticTracking
(640, 960, 3)
Iteration 0; Processing time: 131.45 ms; speed 7.61 fps
Results saved to :image_0.jpg
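The speed the client reports is simply the inverse of the per-iteration processing time. Using the figures printed in the sample run above:

```python
# Speed (fps) is derived from the per-iteration latency
processing_ms = 131.45          # latency reported by the client
fps = 1000.0 / processing_ms
print(f"{fps:.2f} fps")         # matches the reported 7.61 fps
```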

Output image

The annotated result is written to image_0.jpg.

Real time stream analysis

For a demo featuring real-time stream analysis, see real_time_stream_analysis.