Inference Engine Extensibility Mechanism¶
If your model contains operations that OpenVINO does not support out of the box, the Inference Engine Extensibility API lets you add support for those custom operations through a library containing custom nGraph operation sets, corresponding Model Optimizer extensions, and a device plugin extension. See the overview in the Custom Operations Guide to learn how these components work together.
To load the Extensibility library to the InferenceEngine::Core object, use the InferenceEngine::Core::AddExtension method.
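For illustration, a minimal sketch of loading an extension library might look like the following; the library file name libtemplate_extension.so and the model path are placeholders, not fixed names:

    #include <inference_engine.hpp>

    #include <memory>

    int main() {
        InferenceEngine::Core core;

        // Load the extension library; the file name is a placeholder for
        // your own build artifact.
        const auto extension =
            std::make_shared<InferenceEngine::Extension>("libtemplate_extension.so");
        core.AddExtension(extension);

        // The Core can now read an IR that uses the custom operations.
        auto network = core.ReadNetwork("model_with_custom_ops.xml");
        return 0;
    }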
Inference Engine Extension Library¶
An Inference Engine Extension dynamic library contains the following components:
- Extension class (an implementation of InferenceEngine::IExtension):
  - Contains custom operation sets
  - Provides CPU implementations for custom operations
- Custom nGraph operation sets (a declaration sketch follows this list):
  - Enables the use of InferenceEngine::Core::ReadNetwork to read Intermediate Representation (IR) with unsupported operations
  - Enables the creation of ngraph::Function with unsupported operations
  - Provides a shape inference mechanism for custom operations
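As an illustration of the operation-set component, the sketch below declares a hypothetical custom operation and registers it in an operation set. The names CustomReLU and custom_opset are placeholders; the Template extension shows the complete pattern, including how getOpSets() exposes such a set from the extension class.

    #include <ngraph/ngraph.hpp>
    #include <ngraph/opsets/opset.hpp>

    #include <map>
    #include <memory>
    #include <string>

    // Hypothetical custom operation; the name "CustomReLU" is a placeholder.
    class CustomReLU : public ngraph::op::Op {
    public:
        static constexpr ngraph::NodeTypeInfo type_info{"CustomReLU", 0};
        const ngraph::NodeTypeInfo& get_type_info() const override { return type_info; }

        CustomReLU() = default;
        explicit CustomReLU(const ngraph::Output<ngraph::Node>& arg) : Op({arg}) {
            constructor_validate_and_infer_types();
        }

        // Shape inference: the output type and shape mirror the input.
        void validate_and_infer_types() override {
            set_output_type(0, get_input_element_type(0), get_input_partial_shape(0));
        }

        std::shared_ptr<ngraph::Node>
        clone_with_new_inputs(const ngraph::OutputVector& new_args) const override {
            return std::make_shared<CustomReLU>(new_args.at(0));
        }

        bool visit_attributes(ngraph::AttributeVisitor&) override { return true; }
    };

    // Out-of-line definition (needed before C++17).
    constexpr ngraph::NodeTypeInfo CustomReLU::type_info;

    // An extension's getOpSets() would register the operation like this;
    // "custom_opset" is a placeholder name.
    std::map<std::string, ngraph::OpSet> getCustomOpSets() {
        ngraph::OpSet opset;
        opset.insert<CustomReLU>();
        return {{"custom_opset", opset}};
    }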
Note
This documentation is based on the Template extension, which demonstrates the details of extension development. Its complete code is fully compilable and up to date, so you can review it to see how everything works.
Execution Kernels¶
The Inference Engine workflow involves the creation of custom kernels and either custom or existing operations.
An operation is a network building block implemented in the training framework, for example, Convolution in Caffe*. A kernel is the corresponding implementation in the Inference Engine.
Refer to the Model Optimizer Extensibility guide for details on how a mapping between framework operations and Inference Engine kernels is registered.
In short, you can plug your own kernel implementations into the Inference Engine and map them to the operations in the original framework.
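To give a feel for the shape of such a kernel, the declaration-only sketch below follows the InferenceEngine::ILayerExecImpl interface; the class name is hypothetical, and the Template extension contains a full, working implementation of each method.

    #include <ie_iextension.h>
    #include <ngraph/node.hpp>

    #include <memory>
    #include <vector>

    // Hypothetical CPU kernel for a custom operation.
    class CustomReLUImpl : public InferenceEngine::ILayerExecImpl {
    public:
        explicit CustomReLUImpl(const std::shared_ptr<ngraph::Node>& node)
            : node_(node) {}

        // Reports the tensor configurations (layouts, precisions) the kernel supports.
        InferenceEngine::StatusCode getSupportedConfigurations(
            std::vector<InferenceEngine::LayerConfig>& conf,
            InferenceEngine::ResponseDesc* resp) noexcept override;

        // Accepts one of the reported configurations before execution.
        InferenceEngine::StatusCode init(
            InferenceEngine::LayerConfig& config,
            InferenceEngine::ResponseDesc* resp) noexcept override;

        // Runs the actual computation over the input and output blobs.
        InferenceEngine::StatusCode execute(
            std::vector<InferenceEngine::Blob::Ptr>& inputs,
            std::vector<InferenceEngine::Blob::Ptr>& outputs,
            InferenceEngine::ResponseDesc* resp) noexcept override;

    private:
        std::shared_ptr<ngraph::Node> node_;
    };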
The following pages describe how to integrate custom kernels into the Inference Engine: