Libraries for Local Distribution

With local distribution, each C or C++ application/installer has its own copies of the OpenVINO Runtime binaries. However, OpenVINO has a scalable plugin-based architecture, which means that some components can be loaded at runtime only when they are actually needed. This guide helps you understand which minimal set of libraries is required to deploy the application.

Local distribution is also applicable to OpenVINO binaries built from source using the Build instructions, but this guide assumes that OpenVINO Runtime is built dynamically. For the static OpenVINO Runtime case, select the required OpenVINO capabilities at the CMake configuration stage using CMake Options for Custom Compilation, then build and link the OpenVINO components into the final application.

Note

The steps below are operating-system independent and refer to library file names without any prefixes (like lib on Unix systems) or suffixes (like .dll on Windows). Do not put .lib files into the distribution on Windows, because such files are needed only at the linking stage.

Library Requirements for C++ and C Languages

Regardless of the programming language of an application, the openvino library must always be included in its final distribution. This core library manages all inference and frontend plugins. The openvino library depends on the TBB libraries, which are used by OpenVINO Runtime to optimally saturate devices with computations.

If your application is in C language, you need to additionally include the openvino_c library.

The plugins.xml file, which contains information about inference devices, must also be shipped as a support file for openvino.
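For illustration, a minimal C++ application for this setup links only against the openvino library (plus openvino_c for C applications); the device plugin is discovered via plugins.xml and the frontend is loaded when the model is read. This is a sketch only, and the model path is a placeholder:

    #include <openvino/openvino.hpp>

    int main() {
        // The application links only against the openvino library; the CPU plugin
        // is discovered via plugins.xml, and the IR frontend is loaded dynamically
        // when the model is read.
        ov::Core core;
        auto model = core.read_model("model.xml");          // hypothetical model file
        auto compiled = core.compile_model(model, "CPU");   // loads openvino_intel_cpu_plugin
        auto request = compiled.create_infer_request();
        request.infer();
        return 0;
    }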

Libraries for Pluggable Components

The picture below presents dependencies between the OpenVINO Runtime core and pluggable libraries:

[Figure: deployment_full.svg, dependencies between the OpenVINO Runtime core and pluggable libraries]

Libraries for Compute Devices

For each inference device, OpenVINO Runtime has its own plugin library:

  • openvino_intel_cpu_plugin for Intel® CPU devices

  • openvino_intel_gpu_plugin for Intel® GPU devices

  • openvino_intel_gna_plugin for Intel® GNA devices

  • openvino_arm_cpu_plugin for Arm® CPU devices

Depending on which devices are used in the app, the corresponding libraries should be included in the distribution package.

As shown in the picture above, some plugin libraries may have OS-specific dependencies, which are either backend libraries or additional support files with firmware, etc. The CPU and Arm® CPU plugin libraries have no such dependencies, while the GPU and GNA plugin libraries do. Refer to the lists below for details.

Windows:

  • GPU: OpenCL.dll (C:\Windows\System32\opencl.dll) and cache.json (.\runtime\bin\intel64\Release\cache.json or .\runtime\bin\intel64\Debug\cache.json)

  • GNA: gna.dll (.\runtime\bin\intel64\Release\gna.dll or .\runtime\bin\intel64\Debug\gna.dll)

Linux:

  • GPU: libOpenCL.so (/usr/lib/x86_64-linux-gnu/libOpenCL.so.1) and cache.json (./runtime/lib/intel64/cache.json)

  • GNA: libgna.so (./runtime/lib/intel64/libgna.so.3)

Libraries for Execution Modes

The HETERO, MULTI, BATCH, and AUTO execution modes can also be used by the application, either explicitly or implicitly. Use the following recommendations to decide whether to add the corresponding libraries to the distribution package (a code sketch follows the list):

  • If AUTO is used explicitly in the application or ov::Core::compile_model is used without specifying a device, put openvino_auto_plugin in the distribution.

    Note

    Automatic Device Selection relies on the inference device plugins. If you are not sure which inference devices are available on the target system, put all inference plugin libraries in the distribution. If ov::device::priorities is used for AUTO to specify a limited device list, include only the corresponding device plugins.

  • If MULTI is used explicitly, put openvino_auto_plugin in the distribution.

  • If HETERO is either used explicitly or ov::hint::performance_mode is used with GPU, put openvino_hetero_plugin in the distribution.

  • If BATCH is either used explicitly or ov::hint::performance_mode is used with GPU, put openvino_auto_batch_plugin in the distribution.
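For illustration, the sketch below uses HETERO and Automatic Batching explicitly; the model path and device strings are assumptions for this example, not requirements of the API:

    #include <openvino/openvino.hpp>

    int main() {
        ov::Core core;
        auto model = core.read_model("model.xml");  // hypothetical model file

        // Explicit HETERO over GPU with CPU fallback: requires openvino_hetero_plugin
        // plus the GPU and CPU device plugins.
        auto hetero = core.compile_model(model, "HETERO:GPU,CPU");

        // Explicit Automatic Batching on GPU: requires openvino_auto_batch_plugin
        // plus the GPU device plugin.
        auto batched = core.compile_model(model, "BATCH:GPU");
        return 0;
    }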

Frontend Libraries for Reading Models

OpenVINO Runtime uses frontend libraries dynamically to read models in different formats:

  • openvino_ir_frontend is used to read OpenVINO IR.

  • openvino_tensorflow_frontend is used to read the TensorFlow file format.

  • openvino_tensorflow_lite_frontend is used to read the TensorFlow Lite file format.

  • openvino_onnx_frontend is used to read the ONNX file format.

  • openvino_paddle_frontend is used to read the Paddle file format.

Select the appropriate libraries depending on which model formats the application passes to ov::Core::read_model.
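For example, in the sketch below (the file names are placeholders), the first call dynamically loads openvino_ir_frontend and the second one loads openvino_onnx_frontend:

    #include <openvino/openvino.hpp>

    int main() {
        ov::Core core;
        auto ir_model   = core.read_model("model.xml");   // needs openvino_ir_frontend
        auto onnx_model = core.read_model("model.onnx");  // needs openvino_onnx_frontend
        return 0;
    }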

Note

To optimize the size of the final distribution package, it is recommended to convert models to OpenVINO IR by using the model conversion API. This way, you do not have to keep the TensorFlow, TensorFlow Lite, ONNX, PaddlePaddle, and other frontend libraries in the distribution package.

(Legacy) Preprocessing via G-API

Note

G-API preprocessing is a legacy functionality; use the preprocessing capabilities of OpenVINO API 2.0 instead, which do not require any additional libraries.

If the application uses the InferenceEngine::PreProcessInfo::setColorFormat or InferenceEngine::PreProcessInfo::setResizeAlgorithm methods, OpenVINO Runtime dynamically loads the openvino_gapi_preproc plugin to perform preprocessing via G-API.
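For comparison, here is a minimal sketch of the OpenVINO 2.0 preprocessing API, which is part of the openvino library and requires no extra plugin; the model path, layouts, and element types are assumptions for illustration:

    #include <openvino/core/preprocess/pre_post_process.hpp>
    #include <openvino/openvino.hpp>

    int main() {
        ov::Core core;
        auto model = core.read_model("model.xml");  // hypothetical model file

        // Embed preprocessing into the model instead of relying on openvino_gapi_preproc.
        ov::preprocess::PrePostProcessor ppp(model);
        ppp.input().tensor().set_element_type(ov::element::u8).set_layout("NHWC");
        ppp.input().model().set_layout("NCHW");
        ppp.input().preprocess().convert_element_type(ov::element::f32);
        model = ppp.build();

        auto compiled = core.compile_model(model, "CPU");
        return 0;
    }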

Examples

CPU + OpenVINO IR in C application

In this example, the application is written in C, performs inference on CPU, and reads models stored in the OpenVINO IR format.

The following libraries are used: openvino_c, openvino, openvino_intel_cpu_plugin, and openvino_ir_frontend.

  • The openvino_c library is the main dependency of the application. The app links against this library.

  • The openvino library is a private dependency of openvino_c and is also needed in the deployment.

  • openvino_intel_cpu_plugin is used for inference.

  • openvino_ir_frontend is used to read source models.

MULTI execution on GPU and CPU in throughput mode

In this example, the application is written in C++, performs inference simultaneously on GPU and CPU devices with the ov::hint::PerformanceMode::THROUGHPUT property set, and reads models stored in the ONNX format. A code sketch follows the list below.

The following libraries are used: openvino, openvino_intel_gpu_plugin, openvino_intel_cpu_plugin, openvino_auto_plugin, openvino_auto_batch_plugin, and openvino_onnx_frontend.

  • The openvino library is a main dependency of the application. The app links against this library.

  • openvino_intel_gpu_plugin and openvino_intel_cpu_plugin are used for inference.

  • openvino_auto_plugin is used for Multi-Device Execution.

  • openvino_auto_batch_plugin can also be put in the distribution to improve the saturation of the Intel® GPU device. If this plugin is absent, Automatic Batching is turned off.

  • openvino_onnx_frontend is used to read source models.
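A minimal sketch of such an application, with model.onnx as a placeholder file name:

    #include <openvino/openvino.hpp>

    int main() {
        ov::Core core;
        // openvino_onnx_frontend is loaded to read the ONNX file.
        auto model = core.read_model("model.onnx");

        // MULTI is served by openvino_auto_plugin; the GPU and CPU plugins execute the
        // requests, and openvino_auto_batch_plugin (if present) batches the GPU ones.
        auto compiled = core.compile_model(model, "MULTI:GPU,CPU",
            ov::hint::performance_mode(ov::hint::PerformanceMode::THROUGHPUT));
        auto request = compiled.create_infer_request();
        request.infer();
        return 0;
    }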

Auto-Device Selection between GPU and CPU

In this example, the application is written in C++, performs inference with the Automatic Device Selection mode, limiting the device list to GPU and CPU, and reads models created using C++ code. A code sketch follows the list below.

The following libraries are used: openvino, openvino_auto_plugin, openvino_intel_gpu_plugin, and openvino_intel_cpu_plugin.

  • The openvino library is a main dependency of the application. The app links against this library.

  • openvino_auto_plugin is used to enable Automatic Device Selection.

  • openvino_intel_gpu_plugin and openvino_intel_cpu_plugin are used for inference. AUTO selects between the CPU and GPU devices according to whether they are physically present on the deployed machine.

  • No frontend library is needed because ov::Model is created in code.
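A minimal sketch, where the tiny ov::Model built in code is only an assumption for illustration:

    #include <memory>

    #include <openvino/openvino.hpp>
    #include <openvino/opsets/opset8.hpp>

    int main() {
        // Build a trivial model in code, so no frontend library is required.
        auto input  = std::make_shared<ov::opset8::Parameter>(ov::element::f32, ov::Shape{1, 3});
        auto relu   = std::make_shared<ov::opset8::Relu>(input);
        auto result = std::make_shared<ov::opset8::Result>(relu);
        auto model  = std::make_shared<ov::Model>(ov::ResultVector{result},
                                                  ov::ParameterVector{input});

        ov::Core core;
        // AUTO (openvino_auto_plugin) chooses between the GPU and CPU plugins,
        // depending on which devices are present on the target machine.
        auto compiled = core.compile_model(model, "AUTO",
                                           ov::device::priorities("GPU", "CPU"));
        return 0;
    }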