Quickstart Guide

OpenVINO Model Server can perform inference using pre-trained models in OpenVINO IR, ONNX, PaddlePaddle, or TensorFlow format.

This guide uses a Faster R-CNN object detection model in TensorFlow format.

To quickly start using OpenVINO™ Model Server follow these steps:

  1. Prepare Docker

  2. Download or build the OpenVINO™ Model Server

  3. Provide a model

  4. Start the Model Server Container

  5. Prepare the Example Client Components

  6. Download data for inference

  7. Run inference

  8. Review the results

Step 1: Prepare Docker

Install Docker Engine, including its post-installation steps, on your development system. To verify the installation, run the following command. If it pulls the hello-world test image and prints a confirmation message, Docker is ready.

$ docker run hello-world

Step 2: Download the Model Server

Download the Docker image that contains OpenVINO Model Server:

docker pull openvino/model_server:latest
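
Optionally, you can confirm that the pulled image runs on your machine by printing the server's help message; the container exits after printing it and should list the supported start-up parameters:

docker run --rm openvino/model_server:latest --help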

Step 3: Provide a Model

Store the components of the model in the model/1 directory. Here is an example that downloads a Faster R-CNN object detection model in TensorFlow format using wget:

mkdir -p model/1
wget https://storage.googleapis.com/tfhub-modules/tensorflow/faster_rcnn/resnet50_v1_640x640/1.tar.gz
tar xzf 1.tar.gz -C model/1

OpenVINO Model Server expects a particular folder structure for models. In this case, the model directory has the following content:

model
└── 1
    ├── saved_model.pb
    └── variables
        ├── variables.data-00000-of-00001
        └── variables.index

Sub-folder 1 indicates the version of the model. To upgrade the model, add new versions in separate subfolders (2, 3, ...). For more information about the directory structure and how to deploy multiple models at a time, see the documentation on the model repository structure and serving multiple models.
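
For example, a newer revision of the same model could be placed next to version 1 as shown below (the source path is only a placeholder for wherever your updated model files live); by default, the Model Server serves the highest version number it finds:

mkdir -p model/2
cp -r /path/to/updated_model/* model/2/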

Step 4: Start the Model Server Container

Start the container:

docker run -d -u $(id -u) --rm -v ${PWD}/model:/model -p 9000:9000 openvino/model_server:latest --model_name faster_rcnn --model_path /model --port 9000

During this step, the model folder is mounted to the Docker container. This folder will be used as the model storage.
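
If you also publish a REST port when starting the container (for example, by adding -p 8000:8000 and --rest_port 8000 to the command above; these options are not part of the quickstart command), you can verify that the model has loaded by querying the model status endpoint:

curl http://localhost:8000/v1/models/faster_rcnn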

Step 5: Prepare the Example Client Components

Client scripts are available for quick access to the Model Server. Run the following commands to download all required components:

wget https://raw.githubusercontent.com/openvinotoolkit/model_server/main/demos/object_detection/python/object_detection.py
wget https://raw.githubusercontent.com/openvinotoolkit/model_server/main/demos/object_detection/python/requirements.txt
wget https://raw.githubusercontent.com/openvinotoolkit/open_model_zoo/master/data/dataset_classes/coco_91cl.txt

Step 6: Download Data for Inference

Provide the data for inference by downloading an example image. This guide uses the file coco_bike.jpg. Run the following command to download it:

wget https://storage.openvinotoolkit.org/repositories/openvino_notebooks/data/data/image/coco_bike.jpg

Step 7: Run Inference

Go to the folder with the client script, install the dependencies, and run the client script:

pip install --upgrade pip
pip install -r requirements.txt

python3 object_detection.py --image coco_bike.jpg --output output.jpg --service_url localhost:9000
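
The client script handles image pre- and post-processing for you. As a rough sketch of what it does under the hood, the request can also be sent with the ovmsclient package (pip install ovmsclient pillow numpy); the tensor names used below (input_tensor, detection_boxes, detection_scores) are assumptions based on the TF Hub Faster R-CNN model, so check the printed metadata for the actual names:

import numpy as np
from PIL import Image
from ovmsclient import make_grpc_client

# Connect to the gRPC endpoint started in Step 4
client = make_grpc_client("localhost:9000")
print(client.get_model_metadata(model_name="faster_rcnn"))  # lists real input/output names

# Load the image as a [1, H, W, 3] uint8 batch (assumed input layout for this model)
img = np.expand_dims(np.array(Image.open("coco_bike.jpg")), 0).astype(np.uint8)

outputs = client.predict(inputs={"input_tensor": img}, model_name="faster_rcnn")
print(outputs["detection_boxes"][0][:5], outputs["detection_scores"][0][:5])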

Step 8: Review the Results

In the current folder, you can find the inference results. In this case, it is the file output.jpg: a copy of the input image with bounding boxes drawn around the detected objects.

Inference results

Note: Similar steps can be performed with an ONNX or TensorFlow model. Check the inference use case example with a public ResNet model in ONNX format or the TensorFlow model demo.

Congratulations, you have completed the Quickstart guide. Try Model Server demos or explore more features to create your application.