This section provides a high-level description of the process of integrating the Inference Engine into your application. Refer to the Using Inference Engine Samples section for examples of using the Inference Engine in applications.
The core libinference_engine.so library implements loading and parsing of a model IR and triggers inference using a specified plugin. The core library has the following API:
- InferenceEngine::IInferencePlugin - The main plugin interface. Every Inference Engine plugin implements this interface. You can use it through an InferenceEngine::InferenceEnginePluginPtr instance.
- InferenceEngine::PluginDispatcher - This class finds a suitable plugin for a specified device in the given directories.
The integration process consists of the following steps:
1. Load a plugin by creating an instance of InferenceEngine::InferenceEnginePluginPtr. You can specify the plugin explicitly or let the Inference Engine choose a suitable one using InferenceEngine::PluginDispatcher. See the selectPlugin() function in the samples.
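A minimal sketch of this step with the legacy API is shown below; pluginDirs is a hypothetical list of directories to search, and the target device is hard-coded to CPU for illustration:

```cpp
#include <inference_engine.hpp>

// pluginDirs is a placeholder; an empty string means the default search path.
std::vector<std::string> pluginDirs = {""};
InferenceEngine::PluginDispatcher dispatcher(pluginDirs);
InferenceEngine::InferenceEnginePluginPtr enginePtr(
    dispatcher.getSuitablePlugin(InferenceEngine::TargetDevice::eCPU));
```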
2. Create an instance of InferenceEngine::CNNNetReader and read a model IR:
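For example, assuming the IR files are named Model.xml and Model.bin (hypothetical file names):

```cpp
InferenceEngine::CNNNetReader networkReader;
// Read the topology description first, then the binary weights of the IR
networkReader.ReadNetwork("Model.xml");
networkReader.ReadWeights("Model.bin");
InferenceEngine::CNNNetwork network = networkReader.getNetwork();
```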
3. Request information about inputs (an image and any other input data required) using the InferenceEngine::CNNNetReader::getNetwork() and InferenceEngine::CNNNetwork::getInputsInfo() methods. Set the input number format (precision) using InferenceEngine::InputInfo::setInputPrecision so that it matches the format of the input data. Allocate input blobs of the appropriate types and feed an image and the input data to the blobs:
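A possible sketch, continuing from the previous step. FP32 precision and the legacy make_shared_blob overload taking a precision and dimensions are assumptions; match the precision to your actual input data:

```cpp
InferenceEngine::InputsDataMap inputsInfo = network.getInputsInfo();
InferenceEngine::BlobMap inputBlobs;
for (auto &item : inputsInfo) {
    // Expect FP32 data for this input (an assumption for illustration)
    item.second->setInputPrecision(InferenceEngine::Precision::FP32);
    // Allocate a blob matching the input dimensions
    auto input = InferenceEngine::make_shared_blob<float>(
        InferenceEngine::Precision::FP32, item.second->getDims());
    input->allocate();
    // ... copy the image / input data into input->data() here ...
    inputBlobs[item.first] = input;
}
```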
4. Request information about outputs using the InferenceEngine::CNNNetReader::getNetwork() and InferenceEngine::CNNNetwork::getOutputsInfo() methods. Allocate output blobs of the appropriate types:
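A sketch under the same assumptions (FP32 output blobs):

```cpp
InferenceEngine::OutputsDataMap outputsInfo = network.getOutputsInfo();
InferenceEngine::BlobMap outputBlobs;
for (auto &item : outputsInfo) {
    auto output = InferenceEngine::make_shared_blob<float>(
        InferenceEngine::Precision::FP32, item.second->getDims());
    output->allocate();
    outputBlobs[item.first] = output;
}
```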
For details about building your application, refer to the CMake files for the sample applications. All samples reside in the samples directory in the Inference Engine installation directory.
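For orientation only, a minimal CMakeLists.txt in the spirit of the samples; IE_ROOT_DIR and the directory layout below are assumptions, and the samples' CMake files remain the authoritative reference:

```cmake
cmake_minimum_required(VERSION 2.8)
project(sample_app)

# IE_ROOT_DIR is a hypothetical variable pointing to the Inference Engine
# installation; adjust the include and library paths to your layout.
include_directories(${IE_ROOT_DIR}/include)
link_directories(${IE_ROOT_DIR}/lib/intel64)

add_executable(sample_app main.cpp)
# Link against the core library (libinference_engine.so)
target_link_libraries(sample_app inference_engine)
```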
Before running compiled binary files, make sure your application can find the Inference Engine libraries. On Linux* operating systems, including Ubuntu* and CentOS*, the LD_LIBRARY_PATH environment variable is usually used to specify the directories that are searched for libraries. Update LD_LIBRARY_PATH with the paths to the directories in the Inference Engine installation directory where the libraries reside.
Add a path to the directory containing the core and plugin libraries:
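For example (<INSTALL_DIR> and the lib subdirectory are placeholders; the exact layout depends on your OS and package version):

```sh
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<INSTALL_DIR>/lib/intel64
```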
Add paths to the directories containing the required third-party libraries:
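For example (placeholder paths; substitute the actual third-party library directories from your installation):

```sh
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<INSTALL_DIR>/external/<third_party_lib>/lib
```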
Alternatively, you can use the following scripts that reside in the Inference Engine directory of the OpenVINO™ toolkit and Intel® Deep Learning Deployment Toolkit installation folders respectively:
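For example (the default installation paths shown here are assumptions for typical installations; adjust them to yours):

```sh
source /opt/intel/computer_vision_sdk/bin/setupvars.sh               # OpenVINO toolkit
source /opt/intel/deeplearning_deploymenttoolkit/bin/setupvars.sh    # Deep Learning Deployment Toolkit
```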
To run compiled applications on Microsoft* Windows* OS, make sure that the Microsoft* Visual C++ 2015 Redistributable and Intel® C++ Compiler 2017 Redistributable packages are installed and that the <INSTALL_DIR>/bin/intel64/Release/*.dll files are placed in the application folder or are accessible via the %PATH% environment variable.
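For example, from a command prompt (a sketch; <INSTALL_DIR> is a placeholder):

```bat
rem Make the Inference Engine DLLs visible to the application
set PATH=<INSTALL_DIR>\bin\intel64\Release;%PATH%
```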