Inference Engine Extensibility Mechanism

The Inference Engine Extensibility API allows you to add support for custom operations to the Inference Engine. An extension should contain operation sets with the custom operations and execution kernels for those operations. Physically, an extension library is a dynamic library that exports a single CreateExtension function, which creates a new extension instance.

An extension library can be loaded into the InferenceEngine::Core object using the InferenceEngine::Core::AddExtension method.
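For example, a compiled extension library could be registered with the Core as in the sketch below; the library file name is a placeholder for your own build artifact.

```cpp
#include <inference_engine.hpp>
#include <ie_extension.h>

#include <memory>

int main() {
    InferenceEngine::Core core;

    // Load the extension shared library (placeholder name) and register it,
    // making its operation sets and kernels available to the Core object.
    core.AddExtension(std::make_shared<InferenceEngine::Extension>("libtemplate_extension.so"));

    // ... read and load a network that uses the custom operations ...
    return 0;
}
```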

Inference Engine Extension Library

An Inference Engine Extension dynamic library contains several components, described below.

NOTE: This documentation is written based on the Template extension, which demonstrates extension development details. Find the complete code of the Template extension, which is fully compilable and up-to-date, at <dldt source tree>/docs/template_extension.
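Based on the Template extension, a condensed sketch of such a library is shown below: an Extension class implementing InferenceEngine::IExtension plus the exported CreateExtension entry point mentioned above. Method definitions are elided here, and the exact signatures and error handling for your Inference Engine version are in the Template extension sources.

```cpp
#include <ie_iextension.h>
#include <ngraph/ngraph.hpp>

#include <map>
#include <memory>
#include <string>
#include <vector>

// The extension class exposes the custom operation sets and the execution
// kernels for them (definitions elided; see the Template extension).
class Extension : public InferenceEngine::IExtension {
public:
    void GetVersion(const InferenceEngine::Version*& versionInfo) const noexcept override;
    void Unload() noexcept override;
    void Release() noexcept override { delete this; }

    // Operation sets with the custom operations.
    std::map<std::string, ngraph::OpSet> getOpSets() override;

    // Available implementation types (for example, "CPU") for a node and the
    // execution kernel for the selected type.
    std::vector<std::string> getImplTypes(const std::shared_ptr<ngraph::Node>& node) override;
    InferenceEngine::ILayerImpl::Ptr getImplementation(const std::shared_ptr<ngraph::Node>& node,
                                                       const std::string& implType) override;
};

// Single exported entry point: the Inference Engine calls it to create an
// extension instance after the dynamic library is loaded.
INFERENCE_EXTENSION_API(InferenceEngine::StatusCode)
InferenceEngine::CreateExtension(InferenceEngine::IExtension*& ext,
                                 InferenceEngine::ResponseDesc* resp) noexcept {
    try {
        ext = new Extension();
        return InferenceEngine::OK;
    } catch (const std::exception&) {
        return InferenceEngine::GENERAL_ERROR;
    }
}
```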

Execution Kernels

The Inference Engine workflow involves the creation of custom kernels and either custom or existing operations.

An Operation is a network building block implemented in the training framework, for example, Convolution in Caffe*. A Kernel is defined as the corresponding implementation in the Inference Engine.

Refer to the Model Optimizer Extensibility documentation for details on how the mapping between framework operations and Inference Engine kernels is registered.

In short, you can plug your own kernel implementations into the Inference Engine and map them to the operations in the original framework.
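A sketch of that mapping, continuing the Extension class above, is shown below. The "custom_opset" operation set name and the TemplateExtension::Operation and TemplateExtension::OpImplementation classes are placeholders modeled on the Template extension; substitute your own operation and kernel classes.

```cpp
#include <ie_iextension.h>
#include <ngraph/opsets/opset.hpp>

#include <map>
#include <memory>
#include <string>
#include <vector>

// Register the custom operation in an operation set exposed by the extension.
std::map<std::string, ngraph::OpSet> Extension::getOpSets() {
    std::map<std::string, ngraph::OpSet> opsets;
    ngraph::OpSet opset;
    opset.insert<TemplateExtension::Operation>();  // placeholder custom operation class
    opsets["custom_opset"] = opset;                // placeholder operation set name
    return opsets;
}

// Report which implementation types exist for the custom operation ...
std::vector<std::string> Extension::getImplTypes(const std::shared_ptr<ngraph::Node>& node) {
    if (std::dynamic_pointer_cast<TemplateExtension::Operation>(node))
        return {"CPU"};
    return {};
}

// ... and create the execution kernel (placeholder class) for the selected type.
InferenceEngine::ILayerImpl::Ptr Extension::getImplementation(const std::shared_ptr<ngraph::Node>& node,
                                                              const std::string& implType) {
    if (std::dynamic_pointer_cast<TemplateExtension::Operation>(node) && implType == "CPU")
        return std::make_shared<TemplateExtension::OpImplementation>(node);
    return nullptr;
}
```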

The following pages describe how to integrate custom kernels into the Inference Engine:

Additional Resources

See Also