Classification Example with a PaddlePaddle Model#

Overview#

This guide demonstrates how to run inference requests for a PaddlePaddle model with OpenVINO Model Server. As an example, we will use MobileNetV3_large_x1_0_infer to perform classification on an image.

Prerequisites#

Model preparation: Python 3.9 or higher with pip

Model Server deployment: Docker Engine installed, or the OVMS binary package installed according to the baremetal deployment guide

Preparing to Run#

Clone the repository and enter the classification_using_paddlepaddle_model directory:

git clone https://github.com/openvinotoolkit/model_server.git
cd model_server/demos/classification_using_paddlepaddle_model/python

You can download the model and prepare the workspace by running:

python download_model.py
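The download step generally amounts to fetching a model archive and unpacking it into the workspace. Below is a hypothetical sketch of that process; the actual `download_model.py` shipped with the demo handles this for you, and the URL and destination shown here are illustrative assumptions, not the script's real source.

```python
# Hypothetical sketch of a model-download step (the demo's
# download_model.py does this for you; URL below is illustrative).
import tarfile
import urllib.request
from pathlib import Path

def download_and_extract(url: str, dest: str = "model") -> Path:
    """Download a model archive and unpack it into dest/."""
    dest_dir = Path(dest)
    dest_dir.mkdir(parents=True, exist_ok=True)
    archive = dest_dir / Path(url).name
    urllib.request.urlretrieve(url, archive)  # fetch the .tar archive
    with tarfile.open(archive) as tar:
        tar.extractall(dest_dir)              # unpack model files next to it
    return dest_dir
```

After this step the `model` directory should contain the PaddlePaddle inference files that the server will load.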

Server Deployment#

Deploying with Docker

Deploy OVMS with the classification model using the following command:

docker run -p 9000:9000 -d -v ${PWD}/model:/models openvino/model_server --port 9000 --model_path /models --model_name mobilenet --shape "(1,3,-1,-1)"

Deploying on Bare Metal

Assuming you have unpacked the model server package, make sure to:

  • On Windows: run setupvars script

  • On Linux: set LD_LIBRARY_PATH and PATH environment variables

as mentioned in the deployment guide, in every new shell that will start OpenVINO Model Server.

cd demos\classification_using_paddlepaddle_model\python
ovms --port 9000 --model_path model --model_name mobilenet --shape "(1,3,-1,-1)"

Requesting the Service#

Install Python dependencies:

pip3 install -r requirements.txt

Now you can run the client:

python classification_using_paddlepaddle_model.py --grpc_port 9000 --image_input_path coco.jpg
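Before an image can be sent to the server, the client has to convert it into the tensor layout the model expects. The sketch below shows one common preprocessing path, assuming NCHW float32 input normalized with ImageNet statistics; the demo client performs its own preprocessing, so the exact mean/std values and resize logic here are illustrative assumptions.

```python
# Minimal preprocessing sketch: HxWx3 uint8 image -> 1x3xHxW float32 batch.
# Mean/std are the standard ImageNet values (an assumption; the demo
# client may normalize differently).
import numpy as np

IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_hwc_uint8: np.ndarray) -> np.ndarray:
    """Normalize and reorder an image for an NCHW classification model."""
    img = image_hwc_uint8.astype(np.float32) / 255.0  # scale to [0, 1]
    img = (img - IMAGENET_MEAN) / IMAGENET_STD        # per-channel normalize
    img = img.transpose(2, 0, 1)                      # HWC -> CHW
    return img[np.newaxis, ...]                       # add batch dimension
```

Because the server was started with `--shape "(1,3,-1,-1)"`, the spatial dimensions are dynamic, so the input image does not have to be resized to a fixed resolution before sending.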

Example output of running the demo:

probability: 0.74 => Labrador_retriever
probability: 0.05 => Staffordshire_bullterrier
probability: 0.05 => flat-coated_retriever
probability: 0.03 => kelpie
probability: 0.01 => schipperke
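The readout above is produced by sorting the model's probability vector and printing the highest-scoring classes. A minimal sketch of that step, assuming a `labels` list mapping class indices to names (a hypothetical input here, not the demo's actual label file):

```python
# Sketch of the top-k readout shown above; `labels` is a hypothetical
# list of class names indexed by class id.
import numpy as np

def top_k(probs: np.ndarray, labels: list, k: int = 5):
    """Return (probability, label) pairs for the k highest scores."""
    idx = np.argsort(probs)[::-1][:k]  # indices of the k largest values
    return [(float(probs[i]), labels[i]) for i in idx]

for prob, label in top_k(np.array([0.1, 0.74, 0.05, 0.11]),
                         ["kelpie", "Labrador_retriever", "schipperke", "kuvasz"]):
    print(f"probability: {prob:.2f} => {label}")
```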

Input image: coco.jpg