Introduction to Inference Engine

After you have used the Model Optimizer to create an Intermediate Representation (IR), use the Inference Engine to infer input data.

The Inference Engine is a C++ library with a set of C++ classes to infer input data (images) and get a result. The C++ library provides an API to read the Intermediate Representation, set the input and output formats, and execute the model on devices.

To learn about how to use the Inference Engine API for your application, see the Integrating Inference Engine in Your Application documentation.

The complete API Reference is included in the full offline package documentation. To open it:

  1. Go to <INSTALL_DIR>/deployment_tools/documentation/, where <INSTALL_DIR> is the OpenVINO toolkit installation directory.
  2. Open index.html in an Internet browser.
  3. Select API References from the menu at the top of the screen.
  4. From the API References page, select Inference Engine API References.

The Inference Engine uses a plugin architecture. An Inference Engine plugin is a software component that contains a complete implementation for inference on a particular Intel® hardware device: CPU, GPU, VPU, FPGA, and so on. Each plugin implements the unified API and provides additional hardware-specific APIs.
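
For example, each device plugin registered with the runtime is exposed through a device name that you later pass when loading a network. The following sketch, which assumes a version of the Inference Engine that provides InferenceEngine::Core::GetAvailableDevices(), simply lists the devices visible on the current machine:

```cpp
#include <inference_engine.hpp>

#include <iostream>
#include <string>

int main() {
    InferenceEngine::Core ie;

    // Each available plugin/device is reported by its name (for example,
    // "CPU" or "GPU"); the same name is later passed to LoadNetwork().
    for (const std::string &device : ie.GetAvailableDevices()) {
        std::cout << device << std::endl;
    }
    return 0;
}
```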

Modules in the Inference Engine component

Core Inference Engine Libraries

Your application must link to the core Inference Engine library: libinference_engine.so on Linux or inference_engine.dll on Windows.

The required C++ header files are located in the include directory.

This library contains the classes to read the network from an Intermediate Representation (InferenceEngine::CNNNetReader), manipulate the network information (InferenceEngine::CNNNetwork), and execute the network on a device and access its results (InferenceEngine::ExecutableNetwork, InferenceEngine::InferRequest).

Device-specific Plugin Libraries

For each supported target device, the Inference Engine provides a plugin, which is a DLL/shared library that contains a complete implementation for inference on that particular device. The following plugins are available:

| Plugin | Device Type |
| --- | --- |
| CPU | Intel® Xeon® with Intel® AVX2 and AVX512, Intel® Core™ Processors with Intel® AVX2, Intel® Atom® Processors with Intel® SSE |
| GPU | Intel® Processor Graphics, including Intel® HD Graphics and Intel® Iris® Graphics |
| FPGA | Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA, Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA (Speed Grade 1), Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA (Speed Grade 2) |
| MYRIAD | Intel® Movidius™ Neural Compute Stick powered by the Intel® Movidius™ Myriad™ 2, Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X |
| GNA | Intel® Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel® Pentium® Silver processor J5005, Intel® Celeron® processor J4005, Intel® Core™ i3-8121U processor |
| HETERO | Automatic splitting of a network inference between several devices (for example, if a device doesn't support certain layers) |
| MULTI | Simultaneous inference of the same network on several devices in parallel (see the sketch after the table) |
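
Both HETERO and MULTI are selected through the device name passed to InferenceEngine::Core::LoadNetwork(), using a prefix followed by a priority list of devices. A minimal sketch, assuming a CNNNetwork object named network has already been read (see the workflow below) and that the CPU,GPU and FPGA,CPU priority lists are only illustrative choices:

```cpp
#include <inference_engine.hpp>

// MULTI: infer the same network on several devices in parallel;
// the devices are listed after the "MULTI:" prefix.
InferenceEngine::ExecutableNetwork loadOnMultipleDevices(
        InferenceEngine::Core &ie, InferenceEngine::CNNNetwork &network) {
    return ie.LoadNetwork(network, "MULTI:CPU,GPU");
}

// HETERO: split one network between devices; layers the first device
// does not support fall back to the next device in the priority list.
InferenceEngine::ExecutableNetwork splitAcrossDevices(
        InferenceEngine::Core &ie, InferenceEngine::CNNNetwork &network) {
    return ie.LoadNetwork(network, "HETERO:FPGA,CPU");
}
```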

The table below shows the plugin libraries and dependencies for Linux and Windows platforms.

| Plugin | Library name for Linux | Dependency libraries for Linux | Library name for Windows | Dependency libraries for Windows |
| --- | --- | --- | --- | --- |
| CPU | libMKLDNNPlugin.so | libmklml_tiny.so, libiomp5md.so | MKLDNNPlugin.dll | mklml_tiny.dll, libiomp5md.dll |
| GPU | libclDNNPlugin.so | libclDNN64.so | clDNNPlugin.dll | clDNN64.dll |
| FPGA | libdliaPlugin.so | libdla_compiler_core.so, libdla_runtime_core.so | dliaPlugin.dll | dla_compiler_core.dll, dla_runtime_core.dll |
| MYRIAD | libmyriadPlugin.so | No dependencies | myriadPlugin.dll | No dependencies |
| HDDL | libHDDLPlugin.so | libbsl.so, libhddlapi.so, libmvnc-hddl.so | HDDLPlugin.dll | bsl.dll, hddlapi.dll, json-c.dll, libcrypto-1_1-x64.dll, libssl-1_1-x64.dll, mvnc-hddl.dll |
| GNA | libGNAPlugin.so | libgna_api.so | GNAPlugin.dll | gna.dll |
| HETERO | libHeteroPlugin.so | Same as for selected plugins | HeteroPlugin.dll | Same as for selected plugins |
| MULTI | libMultiDevicePlugin.so | Same as for selected plugins | MultiDevicePlugin.dll | Same as for selected plugins |

Make sure those libraries are in your computer's path or in the location you provided to the plugin loader. Also make sure each plugin's related dependencies are in the library search path: LD_LIBRARY_PATH on Linux or PATH on Windows.

On Linux, use the script bin/setupvars.sh to set the environment variables.

On Windows, run the bin\setupvars.bat batch file to set the environment variables.

To learn more about supported devices and corresponding plugins, see the Supported Devices chapter.

Common Workflow for Using the Inference Engine API

The common workflow contains the following steps (a minimal code sketch follows the list):

  1. Read the Intermediate Representation - Using the InferenceEngine::CNNNetReader class, read an Intermediate Representation file into an object of the InferenceEngine::CNNNetwork class. This class represents the network in the host memory.
  2. Prepare inputs and outputs format - After loading the network, specify the input and output precision and layout on the network. For these specifications, use the InferenceEngine::CNNNetwork::getInputsInfo() and InferenceEngine::CNNNetwork::getOutputsInfo() methods.
  3. Create Inference Engine Core object - Create an InferenceEngine::Core object to work with different devices; all device plugins are managed internally by the Core object. Pass per-device loading configurations specific to a device (InferenceEngine::Core::SetConfig) and register extensions for a device (InferenceEngine::Core::AddExtension).
  4. Compile and Load Network to device - Use the InferenceEngine::Core::LoadNetwork() method with a specific device name (for example, CPU or GPU) to compile and load the network on the device. Pass the per-target load configuration for this compilation and load operation.
  5. Set input data - With the network loaded, you have an InferenceEngine::ExecutableNetwork object. Use this object to create an InferenceEngine::InferRequest, in which you specify the memory buffers to use for input and output. Either specify device-allocated memory and copy your data into it directly, or tell the device to use your application memory to avoid a copy.
  6. Execute - With the input and output memory now defined, choose your execution mode: run synchronously with the InferenceEngine::InferRequest::Infer() method, which blocks until inference is complete, or asynchronously with the InferenceEngine::InferRequest::StartAsync() method, checking for completion with InferenceEngine::InferRequest::Wait() or a completion callback.
  7. Get the output - After inference is completed, get the output memory or read the memory you provided earlier. Do this with the InferenceEngine::IInferRequest::GetBlob() method.
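
The following minimal sketch shows how the steps above map to API calls. It assumes an IR pair named model.xml/model.bin with a single input and a single output, uses the CPU plugin, and omits error handling and the code that fills the input buffer; treat it as an illustration of the workflow rather than a complete application.

```cpp
#include <inference_engine.hpp>

#include <iostream>
#include <string>

int main() {
    using namespace InferenceEngine;

    // 1. Read the Intermediate Representation into a CNNNetwork object.
    CNNNetReader networkReader;
    networkReader.ReadNetwork("model.xml");   // hypothetical file names
    networkReader.ReadWeights("model.bin");
    CNNNetwork network = networkReader.getNetwork();

    // 2. Prepare the input and output formats (precision and layout).
    InputsDataMap inputsInfo = network.getInputsInfo();
    for (auto &item : inputsInfo) {
        item.second->setPrecision(Precision::U8);
        item.second->setLayout(Layout::NCHW);
    }
    OutputsDataMap outputsInfo = network.getOutputsInfo();
    for (auto &item : outputsInfo) {
        item.second->setPrecision(Precision::FP32);
    }

    // 3. Create the Inference Engine Core object that manages the device plugins.
    Core ie;

    // 4. Compile and load the network to a device (the CPU plugin here).
    ExecutableNetwork executableNetwork = ie.LoadNetwork(network, "CPU");

    // 5. Create an inference request and set the input data.
    InferRequest inferRequest = executableNetwork.CreateInferRequest();
    std::string inputName = inputsInfo.begin()->first;
    Blob::Ptr inputBlob = inferRequest.GetBlob(inputName);
    unsigned char *inputData = inputBlob->buffer().as<unsigned char *>();
    // ... fill inputData with image bytes in NCHW layout ...

    // 6. Execute synchronously (use StartAsync()/Wait() for asynchronous execution).
    inferRequest.Infer();

    // 7. Get the output.
    std::string outputName = outputsInfo.begin()->first;
    Blob::Ptr outputBlob = inferRequest.GetBlob(outputName);
    float *output = outputBlob->buffer().as<float *>();
    std::cout << "First output value: " << output[0] << std::endl;

    return 0;
}
```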

Further Reading

For more details on the Inference Engine API, refer to the Integrating Inference Engine in Your Application documentation.