OpenVINO™ Model Server Benchmark Results

OpenVINO™ Model Server is an open-source, production-grade inference platform that exposes a set of models through a convenient inference API over gRPC or HTTP/REST. It uses the OpenVINO™ Runtime libraries from the Intel® Distribution of OpenVINO™ toolkit to run workloads across Intel® hardware, including CPU, GPU, and others.

Measurement Methodology

OpenVINO™ Model Server is measured in a multiple-client, single-server configuration using two hardware platforms connected over an Ethernet network. The required network bandwidth depends on the platforms and the models under investigation, and it is provisioned so that the network is never a bottleneck for the workload intensity. This connection is dedicated exclusively to the performance measurements. The benchmark setup consists of four main parts:

OVMS Benchmark Setup Diagram
  • OpenVINO™ Model Server is launched as a Docker container on the server platform, where it listens for and answers requests from clients. It runs on the same machine that hosts the OpenVINO™ toolkit benchmark application in the corresponding benchmarks. Models served by OpenVINO™ Model Server reside in a local file system mounted into the Docker container. The OpenVINO™ Model Server instance communicates with the other components via ports over a dedicated Docker network (an illustrative launch command is shown after this list).

  • Clients run on a separate physical machine, referred to as the client platform. They are implemented in Python 3 on top of the TensorFlow* API and work as parallel processes. Each client waits for a response from OpenVINO™ Model Server before sending the next request. The clients also verify the responses (a minimal client sketch is shown after this list).

  • The load balancer runs on the client platform in a Docker container; HAProxy is used for this purpose. Its main role is counting the requests forwarded from clients to OpenVINO™ Model Server, estimating their latency, and exposing this information through a Prometheus service. The load balancer is placed on the client side to simulate a real-life scenario in which the physical network affects the reported metrics (an illustrative configuration is shown after this list).

  • The Execution Controller is launched on the client platform. It is responsible for synchronizing the whole measurement process, downloading metrics from the load balancer, and presenting the final report of the run.
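
For reference, the server container can be launched along the following lines. This is a hedged sketch: the image tag, model name, mounted paths, and port are illustrative placeholders, not the exact values used in the benchmark.

    docker run -d --rm --name ovms \
      -v /opt/models:/models:ro \
      -p 9000:9000 \
      openvino/model_server:latest \
      --model_name resnet --model_path /models/resnet-50 --port 9000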
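
A minimal synchronous client in the spirit of the description above, written against the TensorFlow Serving gRPC API. The endpoint address, model name, input tensor name, input shape, and request count are assumptions for illustration only.

    import grpc
    import numpy as np
    from tensorflow import make_tensor_proto
    from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

    # Connect through the load balancer on the client platform (address is illustrative).
    channel = grpc.insecure_channel("localhost:9000")
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

    for _ in range(1000):
        request = predict_pb2.PredictRequest()
        request.model_spec.name = "resnet"            # assumed model name
        request.inputs["0"].CopyFrom(                 # assumed input tensor name
            make_tensor_proto(np.zeros((1, 3, 224, 224), dtype=np.float32)))
        # Block until the response arrives before sending the next request.
        response = stub.Predict(request, 10.0)
        assert response.outputs                       # basic response verification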
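
An illustrative HAProxy configuration for this role, forwarding gRPC traffic in TCP mode and exposing HAProxy's built-in Prometheus exporter (available in HAProxy 2.0 and later). Addresses and ports are placeholders.

    defaults
        mode tcp
        timeout connect 5s
        timeout client  60s
        timeout server  60s

    # Forward client requests to the model server on the server platform.
    frontend ovms_front
        bind *:9000
        default_backend ovms_back

    backend ovms_back
        server ovms 192.168.0.2:9000

    # Expose request counts and latency metrics for Prometheus to scrape.
    frontend prometheus
        mode http
        bind *:8404
        http-request use-service prometheus-exporter if { path /metrics }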

bert-small-uncased-whole-word-masking-squad-0002 (INT8)

_images/bert-small-uncased-whole-word-masking-squad-002-int8.png

bert-small-uncased-whole-word-masking-squad-0002 (FP32)

_images/bert-small-uncased-whole-word-masking-squad-002-fp32.png

densenet-121 (INT8)

_images/densenet-121-int8.png

densenet-121 (FP32)

_images/densenet-121-fp32.png

efficientdet-d0 (INT8)

_images/efficientdet-d0-int8.png

efficientdet-d0 (FP32)

_images/efficientdet-d0-fp32.png

inception-v4 (INT8)

_images/inception-v4-int8.png

inception-v4 (FP32)

_images/inception-v4-fp32.png

mobilenet-ssd (INT8)

_images/mobilenet-ssd-int8.png

mobilenet-ssd (FP32)

_images/mobilenet-ssd-fp32.png

mobilenet-v2 (INT8)

_images/mobilenet-v2-int8.png

mobilenet-v2 (FP32)

_images/mobilenet-v2-fp32.png

resnet-18 (INT8)

_images/resnet-18-int8.png

resnet-18 (FP32)

_images/resnet-18-fp32.png

resnet-50 (INT8)

_images/resnet-50-int8.png

resnet-50 (FP32)

_images/resnet-50-fp32.png

ssd-resnet34-1200 (INT8)

_images/ssd-resnt34-1200-int8.png

ssd-resnet34-1200 (FP32)

_images/ssd-resnt34-1200-fp32.png

unet-camvid-onnx-001 (INT8)

_images/unet-camvid-onnx-001-int8.png

unet-camvid-onnx-001 (FP32)

_images/unet-camvid-onnx-001-fp32.png

yolo-v3-tiny (INT8)

_images/yolo-v3-tiny-int8.png

yolo-v3-tiny (FP32)

_images/yolo-v3-tiny-fp32.png

yolo-v4 (INT8)

_images/yolo-v4-int8.png

yolo-v4 (FP32)

_images/yolo-v4-fp32.png

Platform Configurations

OpenVINO™ Model Server performance benchmark numbers are based on release 2022.2. Performance results are based on testing as of November 16, 2022, and may not reflect all publicly available updates.