Model Server demo with a direct import of a TensorFlow model

This guide demonstrates how to run inference requests against a TensorFlow model served by OpenVINO Model Server. As an example, we will use InceptionResNetV2 to classify an image.

Prerequisites

  • Docker installed

  • Python 3.6 or newer installed

Preparing to Run

Clone the repository and enter the image_classification_using_tf_model directory:

git clone
cd model_server/demos/image_classification_using_tf_model/python

Download the InceptionResNetV2 model:

mkdir -p model/1
wget -P model/1
tar xzf model/1/inception_resnet_v2_2018_04_27.tgz -C model/1
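The files are extracted into model/1 because OpenVINO Model Server discovers model versions as numbered subdirectories under the model path. The following is a minimal illustrative sketch (not part of the demo) of that version-directory convention; the function name is hypothetical.

```python
import os
import tempfile

def list_model_versions(model_path):
    """Return the numeric version subdirectories the model server would
    recognize, sorted newest-first. Non-numeric entries are ignored."""
    versions = [d for d in os.listdir(model_path)
                if d.isdigit() and os.path.isdir(os.path.join(model_path, d))]
    return sorted(versions, key=int, reverse=True)

# Demonstrate on a throwaway directory mimicking the layout built above:
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "1"))
    os.makedirs(os.path.join(root, "notes"))  # ignored: not a version number
    print(list_model_versions(root))  # ['1']
```

Adding a model/2 directory later would let the server pick up a new version without changing the container command.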

Run OpenVINO Model Server

docker run -d -v $PWD/model:/models -p 9000:9000 openvino/model_server:latest --model_path /models --model_name resnet --port 9000

Run the client

Install Python dependencies:

pip3 install -r requirements.txt

Now you can run the client:

python3 --help
usage: [-h] [--grpc_address GRPC_ADDRESS] [--grpc_port GRPC_PORT] --image_input_path IMAGE_INPUT_PATH

Client for OCR pipeline

optional arguments:
  -h, --help            show this help message and exit
  --grpc_address GRPC_ADDRESS
                        Specify url to grpc service. default:localhost
  --grpc_port GRPC_PORT
                        Specify port to grpc service. default: 9000
  --image_input_path IMAGE_INPUT_PATH
                        Image input path
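Before sending the gRPC request, the client decodes the image and prepares it as an input tensor. A minimal sketch of the preprocessing typically used for InceptionResNetV2 (299x299 RGB input, pixels scaled to [-1, 1]); the exact input shape and scaling used by the demo client may differ, so treat this as an assumption.

```python
import numpy as np

def preprocess(pixels):
    """Scale uint8 RGB pixels to the [-1, 1] range and add a batch
    dimension, producing an NHWC tensor of shape (1, H, W, 3)."""
    scaled = pixels.astype(np.float32) / 127.5 - 1.0
    return scaled[np.newaxis, ...]

# Stand-in for a decoded 299x299 RGB image
image = np.random.randint(0, 256, size=(299, 299, 3), dtype=np.uint8)
batch = preprocess(image)
print(batch.shape)  # (1, 299, 299, 3)
```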

Example output of running the demo:

python3 --grpc_port 9000 --image_input_path ../../common/static/images/zebra.jpeg
Image classified as zebra
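To produce that line, the client takes the highest-scoring class from the model's output and looks up its human-readable label. A minimal sketch of that final step, using a hypothetical three-entry label list (the real demo uses the full ImageNet label set):

```python
import numpy as np

# Hypothetical, truncated label list for illustration only.
LABELS = ["goldfish", "tabby cat", "zebra"]

def classify(scores, labels):
    """Return the label of the highest-scoring class."""
    return labels[int(np.argmax(scores))]

scores = np.array([0.05, 0.10, 0.85])  # stand-in for the model's output
print(f"Image classified as {classify(scores, LABELS)}")
```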