Installation & Deployment

“Easy to use” is one of the main concepts behind OpenVINO™ API 2.0. It covers not only simplifying migration from other frameworks to OpenVINO, but also how OpenVINO is organized, how its development tools are used, and how OpenVINO-based applications are developed and deployed.

To accomplish that, we made some changes to the installation and deployment processes of OpenVINO in the 2022.1 release. This guide will walk you through them.

The Installer Package Contains OpenVINO™ Runtime Only

Starting with OpenVINO 2022.1, development tools are distributed via PyPI only and are no longer included in the OpenVINO installer package. For a list of these components, refer to the installation overview. This approach has several benefits:

  • It simplifies the user experience: in previous versions, installation and usage of OpenVINO Development Tools differed from one distribution type to another (the OpenVINO installer vs. PyPI).

  • It ensures that dependencies are handled properly via the pip package manager and supports virtual environments for the development tools.

The OpenVINO 2022.1 installer package is organized as follows (a sketch of the layout follows the list):

  • The runtime folder includes headers, libraries and CMake interfaces.

  • The tools folder contains the compile tool, deployment manager, and a set of requirements.txt files with links to the corresponding versions of the openvino-dev package.

  • The python folder contains the Python bindings for OpenVINO Runtime.
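
Put together, the top level of the installed package looks roughly like this (a sketch showing only the folders discussed above):

<INSTALL_DIR>/
├── runtime/   # headers, libraries, CMake interfaces
├── tools/     # compile tool, deployment manager, requirements_*.txt files
└── python/    # Python bindings for OpenVINO Runtime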

Installing OpenVINO Development Tools via PyPI

Since OpenVINO Development Tools is no longer part of the installer package, the installation process has also changed. This section describes the new process by comparing it with that of previous versions.

For Versions Prior to 2022.1

In previous versions, OpenVINO Development Tools was part of the main package. After installing the package, converting models (for example, TensorFlow ones) required several extra steps: installing additional dependencies from the requirements files (such as requirements_tf.txt), installing the Post-Training Optimization Tool and the Accuracy Checker Tool via their setup.py scripts, and then running the setupvars scripts to make the tools available to commands such as:

$ mo.py -h
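
Put together, the old flow looked roughly like the following sketch (the exact paths and script names varied between releases):

# Install framework-specific dependencies for Model Optimizer
$ python3 -m pip install -r <INSTALL_DIR>/deployment_tools/model_optimizer/requirements_tf.txt
# Install the Post-Training Optimization Tool (Accuracy Checker was installed the same way)
$ python3 <INSTALL_DIR>/deployment_tools/tools/post_training_optimization_toolkit/setup.py install
# Make the tools available in the current shell
$ source <INSTALL_DIR>/bin/setupvars.sh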

For 2022.1 and After

In OpenVINO 2022.1 and later, the development tools can be installed from PyPI only, using the following command (taking TensorFlow as an example):

$ python3 -m pip install -r <INSTALL_DIR>/tools/requirements_tf.txt

This installs all the development tools and the additional components necessary to work with TensorFlow via the openvino-dev package (see Step 4. Install the Package on the PyPI page for the parameters covering other frameworks).
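
The requirements file essentially pins a matching version of the openvino-dev package with the corresponding extra, so a direct installation from PyPI would look roughly like this (the version pin is illustrative):

$ python3 -m pip install openvino-dev[tensorflow2]==2022.1.0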

The tools can then be used with commands such as:

$ mo -h
$ pot -h
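
For example, a typical Model Optimizer invocation that converts a TensorFlow frozen graph to OpenVINO IR looks like this (the model path and output directory are illustrative):

$ mo --input_model model.pb --output_dir ir_output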

You don’t have to install any other dependencies. For more details on the installation steps, see Install OpenVINO Development Tools.

Interface Changes for Building C/C++ Applications

The new OpenVINO Runtime and its API 2.0 also bring some changes to building C/C++ applications.

CMake Interface

The CMake interface has been changed as follows:

With the Inference Engine of previous versions:

find_package(InferenceEngine REQUIRED)
find_package(ngraph REQUIRED)
add_executable(ie_ngraph_app main.cpp)
target_link_libraries(ie_ngraph_app PRIVATE ${InferenceEngine_LIBRARIES} ${NGRAPH_LIBRARIES})

With OpenVINO Runtime 2022.1 (API 2.0):

find_package(OpenVINO REQUIRED)
add_executable(ov_app main.cpp)
target_link_libraries(ov_app PRIVATE openvino::runtime)

add_executable(ov_c_app main.c)
target_link_libraries(ov_c_app PRIVATE openvino::runtime::c)
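
With that CMakeLists.txt in place, a typical configure-and-build sequence looks like the following sketch, assuming the environment has been initialized (for example, via the setupvars script) so that find_package(OpenVINO) can locate the package:

$ source <INSTALL_DIR>/setupvars.sh
$ cmake -S . -B build
$ cmake --build build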

Native Interfaces

To build applications without the CMake interface, you can also use the MSVC IDE, UNIX makefiles, or any other build interface. The include and library paths have changed as shown here:

With the Inference Engine of previous versions:

// Header search paths
<INSTALL_DIR>/deployment_tools/inference_engine/include
<INSTALL_DIR>/deployment_tools/ngraph/include

// Library search paths
<INSTALL_DIR>/deployment_tools/inference_engine/lib/intel64/Release
<INSTALL_DIR>/deployment_tools/ngraph/lib/

// Libraries on UNIX systems
inference_engine.so
ngraph.so

// Libraries on Windows
inference_engine.dll
ngraph.dll
inference_engine.lib
ngraph.lib

With OpenVINO Runtime 2022.1 (API 2.0):

// Header search path
<INSTALL_DIR>/runtime/include

// Library search path
<INSTALL_DIR>/runtime/lib/intel64/Release

// Library on UNIX systems
openvino.so

// Libraries on Windows
openvino.dll
openvino.lib
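
For example, on a UNIX system, a minimal compiler invocation against the new layout might look like this (a sketch; at run time you may also need an rpath or LD_LIBRARY_PATH entry so that the shared library is found):

$ g++ main.cpp -o ov_app \
    -I<INSTALL_DIR>/runtime/include \
    -L<INSTALL_DIR>/runtime/lib/intel64/Release \
    -lopenvino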

Clearer Library Structure for Deployment

OpenVINO 2022.1 reorganized its libraries to make deployment easier. In previous versions, performing the deployment steps required several libraries. Now you can simply use openvino or openvino_c, depending on your programming language, together with the plugins needed for your task. For example, the openvino_intel_cpu_plugin and openvino_ir_frontend plugins enable you to load OpenVINO IRs and perform inference on the CPU device, as the sketch below illustrates (see Local distribution with OpenVINO for more details).
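
As a minimal sketch, the following API 2.0 snippet needs only the openvino, openvino_ir_frontend, and openvino_intel_cpu_plugin libraries at run time (the model file name is illustrative):

#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    // Reading an IR file loads the openvino_ir_frontend library at run time
    auto model = core.read_model("model.xml");
    // Compiling the model for "CPU" loads the openvino_intel_cpu_plugin library
    ov::CompiledModel compiled_model = core.compile_model(model, "CPU");
    ov::InferRequest infer_request = compiled_model.create_infer_request();
    // Input tensors would be filled via infer_request.get_input_tensor() before this call
    infer_request.infer();
    return 0;
}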

Here is a detailed comparison of the library structure between OpenVINO 2022.1 and the previous versions; a sketch of a minimal deployment file set follows the list:

  • A single core library with all the functionality (openvino for the C++ runtime, openvino_c for the C interface of the Inference Engine API) is used in 2022.1, instead of the previous set of core libraries: inference_engine, ngraph, inference_engine_transformations, and inference_engine_lp_transformations.

  • The optional preprocessing library inference_engine_preproc (required only when InferenceEngine::PreProcessInfo::setColorFormat or InferenceEngine::PreProcessInfo::setResizeAlgorithm is used) has been renamed to openvino_gapi_preproc and deprecated in 2022.1. For more details, see Preprocessing capabilities of OpenVINO API 2.0.

  • The libraries of plugins have been renamed as follows:

    • openvino_intel_cpu_plugin is used for the CPU device instead of MKLDNNPlugin.

    • openvino_intel_gpu_plugin is used for the GPU device instead of clDNNPlugin.

    • openvino_auto_plugin is used for the Auto-Device Plugin.

  • The plugins for reading and converting models have been changed as follows:

    • openvino_ir_frontend is used to read IRs instead of inference_engine_ir_reader.

    • openvino_onnx_frontend is used to read ONNX models instead of inference_engine_onnx_reader (with its dependencies).

    • openvino_paddle_frontend was added in 2022.1 to read PaddlePaddle models.
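
Putting these renamings together: for the CPU-plus-IR scenario above, the set of runtime libraries to ship on Linux would look roughly like this (a sketch; shared libraries carry the usual lib prefix on Linux, and the Local distribution with OpenVINO guide remains the authoritative reference):

libopenvino.so
libopenvino_intel_cpu_plugin.so
libopenvino_ir_frontend.so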