OpenVINO™ Model Server Benchmark Results

OpenVINO™ Model Server is an open-source, production-grade inference platform that exposes a set of models via a convenient inference API over gRPC or HTTP/REST. It employs the OpenVINO™ Runtime libraries from the Intel® Distribution of OpenVINO™ toolkit to execute workloads on Intel® hardware, including CPU, GPU, and other devices.

OpenVINO™ Model Server

Measurement Methodology

OpenVINO™ Model Server is measured in a multiple-client, single-server configuration using two hardware platforms connected by an Ethernet network. The required network bandwidth depends on the platforms and the models under investigation, and it is provisioned so that it is not a bottleneck at the measured workload intensity. This connection is dedicated solely to the performance measurements. The benchmark setup consists of four main parts:

OVMS Benchmark Setup Diagram
  • OpenVINO™ Model Server is launched as a Docker container on the server platform, where it listens for and answers requests from clients. OpenVINO™ Model Server runs on the same machine as the OpenVINO™ toolkit benchmark application in the corresponding benchmarks. Models served by OpenVINO™ Model Server are located on a local file system mounted into the Docker container. The OpenVINO™ Model Server instance communicates with the other components via ports over a dedicated Docker network.

  • Clients run on a separate physical machine, referred to as the client platform. They are implemented in Python 3 on top of the TensorFlow* API and run as parallel processes. Each client waits for a response from OpenVINO™ Model Server before sending the next request. The clients also verify the correctness of the responses.

  • The load balancer runs on the client platform in a Docker container. HAProxy is used for this purpose. Its main roles are counting the requests forwarded from clients to OpenVINO™ Model Server, measuring their latency, and exposing this information through a Prometheus service. The load balancer is located on the client side to simulate a real-life scenario that includes the impact of the physical network on the reported metrics.

  • The Execution Controller is launched on the client platform. It is responsible for synchronizing the whole measurement process, downloading metrics from the load balancer, and presenting the final report of the execution.
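The load-balancer role can be illustrated with a hypothetical HAProxy configuration fragment. The listener and backend addresses, ports, and names are assumptions; the fragment uses HAProxy's built-in Prometheus exporter (available in modern HAProxy builds) to publish request counts and latency metrics, and `proto h2` to pass gRPC traffic through to the server platform.

```
# Hypothetical HAProxy fragment: forward client gRPC traffic to the server
# platform and expose Prometheus metrics for the Execution Controller.
frontend fe_ovms
    mode http
    bind *:11888 proto h2
    # Serve scrapeable metrics (request counts, latency histograms) on /metrics.
    http-request use-service prometheus-exporter if { path /metrics }
    default_backend be_ovms

backend be_ovms
    mode http
    # server-platform address and gRPC port are assumptions.
    server ovms1 192.168.0.2:9000 proto h2
```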
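The server-side launch described above can be sketched as follows. This is a minimal illustration, not the exact benchmark invocation: the model name, host paths, and port numbers are assumptions, while the image name and the `--model_name`, `--model_path`, `--port`, and `--rest_port` options are standard OpenVINO™ Model Server parameters.

```shell
# Hypothetical example: serve a ResNet-50 model from a mounted local directory.
# /opt/models/resnet50 is assumed to contain the usual numbered version
# subdirectories (e.g. /opt/models/resnet50/1/).
docker run -d --rm --name ovms \
  -v /opt/models/resnet50:/models/resnet50:ro \
  -p 9000:9000 -p 8000:8000 \
  openvino/model_server:2022.2 \
  --model_name resnet50 --model_path /models/resnet50 \
  --port 9000 --rest_port 8000
```

In the benchmark setup the published ports would be attached to the dedicated Docker network rather than exposed directly on the host.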
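The synchronous request/response pattern of the clients can be sketched in Python. This is a simplified stand-in rather than the actual benchmark client: it uses the TensorFlow Serving-compatible REST API that OpenVINO™ Model Server also exposes, with only the standard library, whereas the real clients are built on the TensorFlow* gRPC API; host, port, and model name are assumptions.

```python
import json
import urllib.request


def build_predict_request(instances):
    """Encode inputs as a TensorFlow Serving style REST predict payload."""
    return json.dumps({"instances": instances}).encode("utf-8")


def predict(host, port, model_name, instances, timeout=10.0):
    """Send one prediction request and block until the response arrives."""
    url = f"http://{host}:{port}/v1/models/{model_name}:predict"
    request = urllib.request.Request(
        url,
        data=build_predict_request(instances),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=timeout) as response:
        return json.loads(response.read())


def client_loop(host, port, model_name, batches):
    """Mimic one benchmark client: strictly sequential requests, i.e. each
    request is sent only after the previous response has been received."""
    return [predict(host, port, model_name, batch) for batch in batches]
```

In the benchmark, many such clients run as parallel processes, and each response would additionally be checked for correctness.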

bert-small-uncased-whole-word-masking-squad-0002 (INT8)


bert-small-uncased-whole-word-masking-squad-0002 (FP32)


densenet-121 (INT8)


densenet-121 (FP32)


efficientdet-d0 (INT8)


efficientdet-d0 (FP32)


inception-v4 (INT8)


inception-v4 (FP32)


mobilenet-ssd (INT8)


mobilenet-ssd (FP32)


mobilenet-v2 (INT8)


mobilenet-v2 (FP32)


resnet-18 (INT8)


resnet-18 (FP32)


resnet-50 (INT8)


resnet-50 (FP32)


ssd-resnet34-1200 (INT8)


ssd-resnet34-1200 (FP32)


unet-camvid-onnx-001 (INT8)


unet-camvid-onnx-001 (FP32)


yolo-v3-tiny (INT8)


yolo-v3-tiny (FP32)


yolo-v4 (INT8)


yolo-v4 (FP32)


Platform Configurations

OpenVINO™ Model Server performance benchmark numbers are based on release 2022.2. Performance results are based on testing as of November 16, 2022, and may not reflect all publicly available updates.