This section provides a high-level description of the process of integrating the Inference Engine into your application. Refer to the Using Inference Engine Samples section for examples of using the Inference Engine in applications.
The core libinference_engine.so library implements loading and parsing a model IR and triggers inference using a specified plugin. The core library has the following API:
- InferenceEngine::IInferencePlugin - The main plugin interface. Every Inference Engine plugin implements this interface. You can use it through an InferenceEngine::InferenceEnginePluginPtr instance.
- InferenceEngine::PluginDispatcher - This class finds a suitable plugin for a specified device in the given directories.
- InferenceEngine::CNNNetReader - This class reads a model IR (the topology .xml file and the weights .bin file) and produces an InferenceEngine::CNNNetwork.
- InferenceEngine::CNNNetwork - This class represents the read network and provides access to information about its inputs and outputs.
- InferenceEngine::Blob, InferenceEngine::TBlob - These classes hold input and output data; TBlob is the typed implementation of the abstract Blob.
- InferenceEngine::BlobMap - A map of blob names to InferenceEngine::Blob::Ptr instances.
- InferenceEngine::InputInfo, InferenceEngine::InputsDataMap - Information about a network input and a map of input names to that information.
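To make the relationship between the data-handling types concrete, here is a small illustrative sketch. It assumes the legacy plugin-based C++ API described in this section; listInputs is a hypothetical helper, not part of the API.

```cpp
#include <inference_engine.hpp>
#include <iostream>

using namespace InferenceEngine;

// InputsDataMap (std::map<std::string, InputInfo::Ptr>) describes the network
// inputs; a BlobMap (std::map<std::string, Blob::Ptr>) with the same keys
// carries the actual data, typically as typed TBlob<T> instances.
// listInputs is a hypothetical helper used only for illustration.
void listInputs(const InputsDataMap &inputsInfo, const BlobMap &inputBlobs) {
    for (const auto &item : inputsInfo) {
        std::cout << "input '" << item.first << "'";
        if (inputBlobs.count(item.first)) {
            std::cout << ": blob with " << inputBlobs.at(item.first)->size() << " elements";
        }
        std::cout << std::endl;
    }
}

int main() {
    InputsDataMap inputsInfo;   // in a real application: network.getInputsInfo()
    BlobMap inputBlobs;         // filled with allocated blobs, keyed by input name
    listInputs(inputsInfo, inputBlobs);
    return 0;
}
```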
The integration process consists of the following steps (a condensed code sketch covering all of them follows the list):

1. Load a plugin by creating an instance of InferenceEngine::InferenceEnginePluginPtr. You can specify the plugin or let the Inference Engine choose it using InferenceEngine::PluginDispatcher. See the selectPlugin() function in the samples.
2. Create an IR reader by creating an instance of InferenceEngine::CNNNetReader and read a model IR.
3. Request information about the inputs using the InferenceEngine::CNNNetReader::getNetwork() and InferenceEngine::CNNNetwork::getInputsInfo() methods. Set the input number format (precision) using InferenceEngine::InputInfo::setInputPrecision to match the input data format (precision). Allocate input blobs of the appropriate types and feed an image and the input data to the blobs.
4. Request information about the outputs using the InferenceEngine::CNNNetReader::getNetwork() and InferenceEngine::CNNNetwork::getOutputsInfo() methods. Allocate output blobs of the appropriate types.
5. Load the network to the plugin using InferenceEngine::IInferencePlugin::LoadNetwork().
6. Do inference by calling the InferenceEngine::IInferencePlugin::Infer method.
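The following condensed sketch strings the six steps together. Treat it as a starting point rather than drop-in code: it assumes the legacy plugin-based C++ API that this section documents, the model file names and FP32 precision are placeholders, and details such as the exact make_shared_blob overload and the CNNNetwork-to-ICNNNetwork conversion have varied between Inference Engine releases, so verify them against the headers in your installation.

```cpp
#include <inference_engine.hpp>
#include <stdexcept>

using namespace InferenceEngine;

int main() {
    // 1. Find a plugin for the target device ("" = default plugin search path)
    //    and wrap it in an InferenceEnginePluginPtr.
    PluginDispatcher dispatcher({""});
    InferenceEnginePluginPtr enginePtr(dispatcher.getSuitablePlugin(TargetDevice::eCPU));

    // 2. Read the model IR: topology (.xml) and weights (.bin). File names are placeholders.
    CNNNetReader netReader;
    netReader.ReadNetwork("model.xml");
    netReader.ReadWeights("model.bin");
    CNNNetwork network = netReader.getNetwork();

    // 3. Query the inputs, set their precision, and allocate matching input blobs.
    InputsDataMap inputsInfo = network.getInputsInfo();
    BlobMap inputBlobs;
    for (auto &item : inputsInfo) {
        item.second->setInputPrecision(Precision::FP32);
        TBlob<float>::Ptr input =
            make_shared_blob<float, SizeVector>(Precision::FP32, item.second->getDims());
        input->allocate();
        // Copy the image / other input data into input->data() here.
        inputBlobs[item.first] = input;
    }

    // 4. Query the outputs and allocate matching output blobs.
    OutputsDataMap outputsInfo = network.getOutputsInfo();
    BlobMap outputBlobs;
    for (auto &item : outputsInfo) {
        TBlob<float>::Ptr output =
            make_shared_blob<float, SizeVector>(Precision::FP32, item.second->getDims());
        output->allocate();
        outputBlobs[item.first] = output;
    }

    // 5. Load the network to the plugin.
    ResponseDesc resp;
    if (enginePtr->LoadNetwork(network, &resp) != OK)
        throw std::logic_error(resp.msg);

    // 6. Run inference; the results end up in outputBlobs.
    if (enginePtr->Infer(inputBlobs, outputBlobs, &resp) != OK)
        throw std::logic_error(resp.msg);

    return 0;
}
```

Once Infer returns successfully, iterate over outputBlobs and interpret each blob according to the output layout of your model, as the samples do.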
For details about building your application, refer to the CMake files for the sample applications. All samples reside in the samples directory in the Inference Engine installation directory.
Before running the compiled binaries, make sure your application can find the Inference Engine libraries. On Linux* operating systems, including Ubuntu* and CentOS*, the LD_LIBRARY_PATH environment variable is typically used to specify the directories that are searched for libraries. Update LD_LIBRARY_PATH with the paths to the directories in the Inference Engine installation directory where the libraries reside:
- Add the path to the directory containing the core and plugin libraries.
- Add the paths to the directories containing the required third-party libraries.
Alternatively, you can use the scripts below, which reside in the Inference Engine directories of the OpenVINO™ toolkit and the Intel® Deep Learning Deployment Toolkit installation folders, respectively:
- /opt/intel/computer_vision_sdk_<version>/bin/setupvars.sh
- /opt/intel/deep_learning_sdk_<version>/deployment_tools/inference_engine/bin/setvars.sh
To run compiled applications on Microsoft* Windows* OS, make sure that the Microsoft* Visual C++ 2015 Redistributable and Intel® C++ Compiler 2017 Redistributable packages are installed and that the <INSTALL_DIR>/bin/intel64/Release/*.dll files are either placed in the application folder or accessible through the %PATH% environment variable.