An Inference Engine plugin usually represents a wrapper around a backend. Backends can be:
- OpenCL-like backend (e.g. clDNN library) for GPU devices.
- MKLDNN backend for Intel CPU devices.
- NVIDIA cuDNN for NVIDIA GPUs.
The responsibilities of an Inference Engine plugin:
- Initializes a backend and throws an exception in the Engine constructor if the backend cannot be initialized.
- Provides information about devices enabled by a particular backend, e.g. how many devices there are, their properties, and so on.
- Loads or imports executable network objects.
In addition to the Inference Engine Public API, the Inference Engine provides the Plugin API, which is a set of functions and helper classes that simplify new plugin development:
- header files in the inference_engine/src/plugin_api directory
- implementations in the inference_engine/src/inference_engine directory
- symbols in the Inference Engine Core shared library
To build an Inference Engine plugin with the Plugin API, see the Inference Engine Plugin Building guide.
Plugin Class
The Inference Engine Plugin API provides the helper InferenceEngine::InferencePluginInternal class, which is recommended as a base class for a plugin. Based on that, a declaration of a plugin class can look as follows:
namespace TemplatePlugin {

class Plugin : public InferenceEngine::InferencePluginInternal {
public:
    using Ptr = std::shared_ptr<Plugin>;

    Plugin();
    ~Plugin() override;

    void SetConfig(const std::map<std::string, std::string>& config) override;
    InferenceEngine::QueryNetworkResult QueryNetwork(const InferenceEngine::ICNNNetwork& network,
                                                     const std::map<std::string, std::string>& config) const override;
    InferenceEngine::ExecutableNetworkInternal::Ptr LoadExeNetworkImpl(const InferenceEngine::ICNNNetwork& network,
                                                                       const std::map<std::string, std::string>& config) override;
    void AddExtension(InferenceEngine::IExtensionPtr extension) override;
    InferenceEngine::Parameter GetConfig(
        const std::string& name,
        const std::map<std::string, InferenceEngine::Parameter> & options) const override;
    InferenceEngine::Parameter GetMetric(
        const std::string& name,
        const std::map<std::string, InferenceEngine::Parameter> & options) const override;
    InferenceEngine::ExecutableNetwork ImportNetworkImpl(std::istream& model,
                                                         const std::map<std::string, std::string>& config) override;

private:
    friend class ExecutableNetwork;
    friend class TemplateInferRequest;

    std::shared_ptr<ngraph::runtime::Backend> _backend;
    Configuration _cfg;
    InferenceEngine::ITaskExecutor::Ptr _waitExecutor;
};

}  // namespace TemplatePlugin
Class Fields
The provided plugin class also has several fields:
- _backend - a backend engine that is used to perform actual computations for network inference. For the Template plugin, ngraph::runtime::Backend is used, which performs computations using ngraph reference implementations.
- _waitExecutor - a task executor that waits for a response from a device about device task completion.
- _cfg of type Configuration:
using ConfigMap = std::map<std::string, std::string>;

struct Configuration {
    Configuration();
    Configuration(const Configuration&) = default;
    Configuration(Configuration&&) = default;
    Configuration& operator=(const Configuration&) = default;
    Configuration& operator=(Configuration&&) = default;

    explicit Configuration(const ConfigMap& config, const Configuration & defaultCfg = {}, const bool throwOnUnsupported = true);

    InferenceEngine::Parameter Get(const std::string& name) const;

    // Plugin configuration parameters
    int deviceId = 0;
    bool perfCount = true;
    InferenceEngine::IStreamsExecutor::Config _streamsExecutorConfig;
};
As an example, a plugin configuration has three value parameters:
- deviceId - a particular device ID to work with. Applicable if a plugin supports more than one Template device. In this case, some plugin methods, like SetConfig, QueryNetwork, and LoadNetwork, must support the CONFIG_KEY(KEY_DEVICE_ID) parameter (see the usage sketch after this list).
- perfCount - a boolean value that identifies whether to collect performance counters during Inference Request execution.
- _streamsExecutorConfig - a configuration of InferenceEngine::IStreamsExecutor to handle settings of the multi-threaded context.
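The snippet below is a minimal, hypothetical sketch of how an application could pass such configuration values to the plugin through the Inference Engine Core API. The "TEMPLATE" device name and the model path are assumptions made for illustration only.

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;
    auto network = core.ReadNetwork("model.xml");  // assumed model path

    // Configuration keys are forwarded to Plugin::LoadExeNetworkImpl and merged
    // with the values previously set via Plugin::SetConfig
    auto executableNetwork = core.LoadNetwork(network, "TEMPLATE", {
        { CONFIG_KEY(DEVICE_ID), "0" },                 // handled via Configuration::deviceId
        { CONFIG_KEY(PERF_COUNT), CONFIG_VALUE(YES) }   // handled via Configuration::perfCount
    });
    return 0;
}
```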
Engine Constructor
A plugin constructor must contain code that checks the ability to work with a device of the Template type. For example, if some drivers are required, the code must check driver availability. If a driver is not available (for example, the OpenCL runtime is not installed in case of a GPU device, or an improper version of a driver is on a host machine), an exception must be thrown from the plugin constructor.
A plugin must define a device name enabled via the _pluginName
field of a base class:
Plugin::Plugin() {
    // TODO: fill with an actual device name and backend engine
    _pluginName = "TEMPLATE";

    // Create an ngraph backend which performs inference using ngraph reference implementations
    ngraph::runtime::Backend::set_backend_shared_library_search_directory("");
    _backend = ngraph::runtime::Backend::create("INTERPRETER");

    // Create a task executor that waits for device task completion
    // (the executor name below is illustrative)
    _waitExecutor = InferenceEngine::ExecutorManager::getInstance()->getIdleCPUStreamsExecutor({"TemplateWaitExecutor"});
}
LoadExeNetworkImpl()
Implementation details: The base InferenceEngine::InferencePluginInternal class provides a common implementation of the public InferenceEngine::InferencePluginInternal::LoadNetwork method, which calls plugin-specific LoadExeNetworkImpl defined in a derived class.

This is the most important function of the Plugin class. It creates an instance of a compiled ExecutableNetwork, which holds a backend-dependent compiled graph in an internal representation:
InferenceEngine::ExecutableNetworkInternal::Ptr Plugin::LoadExeNetworkImpl(const InferenceEngine::ICNNNetwork& network,
                                                                           const ConfigMap& config) {
    OV_ITT_SCOPED_TASK(itt::domains::TemplatePlugin, "Plugin::LoadExeNetworkImpl");

    auto cfg = Configuration{ config, _cfg };

    InferenceEngine::InputsDataMap networkInputs;
    InferenceEngine::OutputsDataMap networkOutputs;
    network.getInputsInfo(networkInputs);
    network.getOutputsInfo(networkOutputs);

    // Check precisions supported by the Template device
    for (auto networkOutput : networkOutputs) {
        auto output_precision = networkOutput.second->getPrecision();
        if (output_precision != InferenceEngine::Precision::FP32 &&
            output_precision != InferenceEngine::Precision::FP16 &&
            output_precision != InferenceEngine::Precision::U8) {
            THROW_IE_EXCEPTION << "Template device supports only U8, FP16 and FP32 output precision.";
        }
    }

    for (auto networkInput : networkInputs) {
        auto input_precision = networkInput.second->getTensorDesc().getPrecision();
        if (input_precision != InferenceEngine::Precision::FP32 &&
            input_precision != InferenceEngine::Precision::FP16 &&
            input_precision != InferenceEngine::Precision::I16 &&
            input_precision != InferenceEngine::Precision::U8) {
            THROW_IE_EXCEPTION << "Input image format " << input_precision << " is not supported yet.\n"
                               << "Supported formats are: FP32, FP16, I16 and U8.";
        }
    }

    auto function = network.getFunction();
    if (function == nullptr) {
        THROW_IE_EXCEPTION << "TEMPLATE plugin can compile only IR v10 networks";
    }

    return std::make_shared<ExecutableNetwork>(function, cfg, std::static_pointer_cast<Plugin>(shared_from_this()));
}
Before creating an ExecutableNetwork instance via its constructor, a plugin may check whether a provided InferenceEngine::ICNNNetwork object is supported by a device. In the example above, the plugin checks precision information.

Another important step before creating an ExecutableNetwork instance is to call the TransformNetwork method, which applies ngraph transformation passes.

Actual graph compilation is done in the ExecutableNetwork constructor. Refer to the ExecutableNetwork Implementation Guide for details.

NOTE: The actual configuration map used in ExecutableNetwork is constructed as a base plugin configuration set via Plugin::SetConfig, where some values are overwritten with the config passed to Plugin::LoadExeNetworkImpl. Therefore, the config of Plugin::LoadExeNetworkImpl has a higher priority.
TransformNetwork()
The function accepts a const shared pointer to an ngraph::Function object and performs the following steps:
- Deep copies a const object to a local object, which can later be modified.
- Applies common and plugin-specific transformations on the copied graph to make it more friendly to hardware operations. For details on how to write custom plugin-specific transformations, refer to the Writing ngraph transformations guide. See also the detailed topics about network representation.
std::shared_ptr<ngraph::Function> TransformNetwork(const std::shared_ptr<const ngraph::Function>& function) {
    // 1. Copy ngraph::Function first to apply some transformations which modify the original ngraph::Function
    auto transformedNetwork = ngraph::clone_function(*function);

    // 2. Perform common and plugin-specific transformations
    ngraph::pass::Manager passManager;
    // Common optimizations from the transformations library
    passManager.register_pass<ngraph::pass::CommonOptimizations>();
    // Plugin-specific transformations
    passManager.register_pass<ngraph::pass::DecomposeDivideMatcher>();
    passManager.register_pass<ngraph::pass::ReluReluFusionMatcher>();
    // Register any other transformations here
    passManager.run_passes(transformedNetwork);

    return transformedNetwork;
}
NOTE: After all these transformations, an ngraph::Function object contains operations that can be perfectly mapped to backend kernels. For example, if the backend has a kernel that computes the A + B operation in a single call, the TransformNetwork function should contain a pass that fuses operations A and B into a single custom operation A + B, which fits the backend kernel set.
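For illustration, below is a minimal sketch of such a plugin-specific fusion pass written with the ngraph MatcherPass API. The pass name ReluReluFusion and the fused pattern are assumptions chosen for the example; a real plugin would match the subgraphs its backend kernels actually support.

```cpp
#include <ngraph/pass/graph_rewrite.hpp>
#include <ngraph/pattern/op/wrap_type.hpp>
#include <ngraph/opsets/opset4.hpp>

// Hypothetical example: collapse Relu(Relu(x)) into a single Relu so that the
// resulting subgraph maps 1:1 onto a backend kernel.
class ReluReluFusion : public ngraph::pass::MatcherPass {
public:
    NGRAPH_RTTI_DECLARATION;
    ReluReluFusion() {
        auto inner = ngraph::pattern::wrap_type<ngraph::opset4::Relu>();
        auto outer = ngraph::pattern::wrap_type<ngraph::opset4::Relu>({inner});

        ngraph::matcher_pass_callback callback = [=](ngraph::pattern::Matcher& m) {
            auto& pattern_map = m.get_pattern_value_map();
            auto inner_relu = pattern_map.at(inner).get_node_shared_ptr();
            auto outer_relu = pattern_map.at(outer).get_node_shared_ptr();

            // Relu is idempotent, so the outer node can be replaced with the inner one
            ngraph::replace_node(outer_relu, inner_relu);
            return true;
        };

        register_matcher(std::make_shared<ngraph::pattern::Matcher>(outer, "ReluReluFusion"), callback);
    }
};

NGRAPH_RTTI_DEFINITION(ReluReluFusion, "ReluReluFusion", 0);
```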
QueryNetwork()
Use the method with the HETERO mode, which allows distributing network execution between different devices based on the ngraph::Node::get_rt_info() map, which can contain the "affinity" key. The QueryNetwork method analyzes operations of a provided network and returns a list of supported operations via the InferenceEngine::QueryNetworkResult structure. QueryNetwork first applies the TransformNetwork passes to the input ngraph::Function argument. After that, in the ideal case, the transformed network contains only operations that are 1:1 mapped to kernels in the computational backend. In this case, it is easy to analyze which operations are supported (_backend has a kernel for such an operation or an extension for the operation is provided) and which are not (the kernel is missing in _backend):
- Store original names of all operations in the input ngraph::Function.
- Apply the TransformNetwork passes. Note that the names of operations in a transformed network can be different, so the mapping has to be restored in the steps below.
- Construct supported and unsupported maps which contain names of original operations. Note that since inference is performed using the ngraph reference backend, the decision whether an operation is supported or not depends on whether the latest OpenVINO opset contains such an operation.
- QueryNetworkResult.supportedLayersMap contains only operations which are fully supported by _backend.
OV_ITT_SCOPED_TASK(itt::domains::TemplatePlugin, "Plugin::QueryNetwork");
Configuration cfg{config, _cfg, false};
if (function == nullptr) {
}
std::unordered_set<std::string> originalOps;
std::map<std::string, ngraph::NodeTypeInfo> friendlyNameToType;
for (auto&& node : function->get_ops()) {
originalOps.emplace(node->get_friendly_name());
friendlyNameToType[node->get_friendly_name()] = node->get_type_info();
}
auto transformedFunction = TransformNetwork(function);
std::unordered_set<std::string> supported;
std::unordered_set<std::string> unsupported;
auto opset = ngraph::get_opset4();
for (auto&& node : transformedFunction->get_ops()) {
if (opset.contains_type(friendlyNameToType[fusedLayerName])) {
supported.emplace(fusedLayerName);
} else {
unsupported.emplace(fusedLayerName);
}
}
}
}
for (auto&& unsupportedNode : unsupported) {
supported.erase(unsupportedNode);
}
for (auto&& node : function->get_ops()) {
for (auto&& inputNodeOutput : node->input_values()) {
if (ngraph::op::is_constant(inputNodeOutput.get_node()) || ngraph::op::is_parameter(inputNodeOutput.get_node())) {
supported.emplace(inputNodeOutput.get_node()->get_friendly_name());
}
}
for (auto&& outputs : node->outputs()) {
for (auto&& outputNodeInput : outputs.get_target_inputs()) {
if (ngraph::op::is_output(outputNodeInput.get_node())) {
supported.emplace(outputNodeInput.get_node()->get_friendly_name());
}
}
}
}
if (ngraph::op::is_constant(node) || ngraph::op::is_parameter(node)) {
supported.erase(node->get_friendly_name());
}
} else if (ngraph::op::is_output(node)) {
supported.erase(node->get_friendly_name());
}
}
}
for (auto&& layerName : supported) {
}
return res;
}
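As a usage illustration, the sketch below shows how an application could call QueryNetwork through the Inference Engine Core API; the "TEMPLATE" device name and the model path are assumptions.

```cpp
#include <inference_engine.hpp>
#include <iostream>

int main() {
    InferenceEngine::Core core;
    auto network = core.ReadNetwork("model.xml");  // assumed model path

    // The call is dispatched to Plugin::QueryNetwork of the selected device
    auto res = core.QueryNetwork(network, "TEMPLATE", {});
    for (auto&& layer : res.supportedLayersMap) {
        std::cout << layer.first << " is supported by " << layer.second << std::endl;
    }
    return 0;
}
```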
AddExtension()
Adds an extension of the InferenceEngine::IExtensionPtr type to a plugin. If a plugin does not support extensions, the method must throw an exception:
void Plugin::AddExtension(InferenceEngine::IExtensionPtr /*extension*/) {
    // TODO: add extensions if the plugin supports them
    THROW_IE_EXCEPTION_WITH_STATUS(NOT_IMPLEMENTED);
}
SetConfig()
Sets new values for plugin configuration keys:
void Plugin::SetConfig(const ConfigMap &config) {
_cfg = Configuration{config, _cfg};
}
In the snippet above, the Configuration class overrides previous configuration values with the new ones. All these values are used during backend-specific graph compilation and execution of inference requests.
NOTE: The function must throw an exception if it receives an unsupported configuration key.
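A possible sketch of the Configuration constructor that implements this override-and-validate behavior is shown below; the key handling is an assumption made for illustration, and only the DEVICE_ID and PERF_COUNT keys are shown.

```cpp
// Hypothetical sketch: start from the default (previous) configuration and
// override only the keys present in the new config map.
Configuration::Configuration(const ConfigMap& config, const Configuration& defaultCfg, const bool throwOnUnsupported) {
    *this = defaultCfg;

    for (auto&& kvp : config) {
        const auto& key = kvp.first;
        const auto& value = kvp.second;

        if (CONFIG_KEY(DEVICE_ID) == key) {
            deviceId = std::stoi(value);
        } else if (CONFIG_KEY(PERF_COUNT) == key) {
            perfCount = (CONFIG_VALUE(YES) == value);
        } else if (throwOnUnsupported) {
            // SetConfig passes throwOnUnsupported = true, QueryNetwork passes false
            THROW_IE_EXCEPTION << "Unsupported config key: " << key;
        }
    }
}
```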
GetConfig()
Returns a current value for a specified configuration key:
InferenceEngine::Parameter Plugin::GetConfig(
const std::string& name,
const std::map<std::string, InferenceEngine::Parameter> & )
const {
return _cfg.Get(name);
}
The function is implemented with the Configuration::Get method, which wraps an actual configuration key value into an InferenceEngine::Parameter and returns it.
NOTE: The function must throw an exception if it receives an unsupported configuration key.
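A matching sketch of Configuration::Get could look as follows; again, the set of handled keys is an assumption made for illustration.

```cpp
// Hypothetical sketch: convert the stored configuration value into an
// InferenceEngine::Parameter for the requested key.
InferenceEngine::Parameter Configuration::Get(const std::string& name) const {
    if (CONFIG_KEY(DEVICE_ID) == name) {
        return InferenceEngine::Parameter{std::to_string(deviceId)};
    } else if (CONFIG_KEY(PERF_COUNT) == name) {
        return InferenceEngine::Parameter{std::string(perfCount ? CONFIG_VALUE(YES) : CONFIG_VALUE(NO))};
    } else {
        THROW_IE_EXCEPTION << "Unsupported config key: " << name;
    }
}
```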
GetMetric()
Returns a metric value for a metric with the name name. A device metric is a static type of information from a plugin about its devices or device capabilities.
Examples of metrics:
- METRIC_KEY(AVAILABLE_DEVICES) - list of available devices, which is required to be implemented. In this case, you can use all devices of the same Template type with automatic logic of the MULTI device plugin.
- METRIC_KEY(FULL_DEVICE_NAME) - full device name. In this case, a particular device ID is specified in the options parameter as { CONFIG_KEY(KEY_DEVICE_ID), "deviceID" }.
- METRIC_KEY(SUPPORTED_METRICS) - list of metrics supported by a plugin.
- METRIC_KEY(SUPPORTED_CONFIG_KEYS) - list of configuration keys supported by a plugin that affect its behavior during backend-specific graph compilation or inference request execution.
- METRIC_KEY(OPTIMIZATION_CAPABILITIES) - list of optimization capabilities of a device, for example, supported data types and special optimizations for them.
- Any other device-specific metrics. In this case, place metric declarations and possible values in a plugin-specific public header file, for example, template/template_config.hpp. The example below demonstrates the definition of a new optimization capability value specific for a device:
DECLARE_TEMPLATE_METRIC_VALUE(HARDWARE_CONVOLUTION);
The snippet below provides an example of the implementation for GetMetric:
InferenceEngine::Parameter Plugin::GetMetric(const std::string& name,
                                             const std::map<std::string, InferenceEngine::Parameter>& options) const {
    if (METRIC_KEY(SUPPORTED_METRICS) == name) {
        std::vector<std::string> supportedMetrics = {
            METRIC_KEY(AVAILABLE_DEVICES),
            METRIC_KEY(SUPPORTED_METRICS),
            METRIC_KEY(SUPPORTED_CONFIG_KEYS),
            METRIC_KEY(FULL_DEVICE_NAME),
            METRIC_KEY(OPTIMIZATION_CAPABILITIES),
            METRIC_KEY(RANGE_FOR_ASYNC_INFER_REQUESTS) };
        IE_SET_METRIC_RETURN(SUPPORTED_METRICS, supportedMetrics);
    } else if (METRIC_KEY(SUPPORTED_CONFIG_KEYS) == name) {
        std::vector<std::string> configKeys = {
            CONFIG_KEY(DEVICE_ID),
            CONFIG_KEY(PERF_COUNT),
            TEMPLATE_CONFIG_KEY(THROUGHPUT_STREAMS)};
        auto streamExecutorConfigKeys = InferenceEngine::IStreamsExecutor::Config{}.SupportedKeys();
        for (auto&& configKey : streamExecutorConfigKeys) {
            if (configKey != InferenceEngine::PluginConfigParams::KEY_CPU_THROUGHPUT_STREAMS) {
                configKeys.emplace_back(configKey);
            }
        }
        IE_SET_METRIC_RETURN(SUPPORTED_CONFIG_KEYS, configKeys);
    } else if (METRIC_KEY(AVAILABLE_DEVICES) == name) {
        // TODO: fill the list of available devices
        std::vector<std::string> availableDevices = { "" };
        IE_SET_METRIC_RETURN(AVAILABLE_DEVICES, availableDevices);
    } else if (METRIC_KEY(FULL_DEVICE_NAME) == name) {
        std::string name = "Template Device Full Name";
        IE_SET_METRIC_RETURN(FULL_DEVICE_NAME, name);
    } else if (METRIC_KEY(OPTIMIZATION_CAPABILITIES) == name) {
        // TODO: fill the actual list of supported capabilities, e.g. Template device supports only FP32
        std::vector<std::string> capabilities = { METRIC_VALUE(FP32) };
        IE_SET_METRIC_RETURN(OPTIMIZATION_CAPABILITIES, capabilities);
    } else if (METRIC_KEY(RANGE_FOR_ASYNC_INFER_REQUESTS) == name) {
        // TODO: fill with actual values
        using uint = unsigned int;
        IE_SET_METRIC_RETURN(RANGE_FOR_ASYNC_INFER_REQUESTS, std::make_tuple(uint{1}, uint{1}, uint{1}));
    } else {
        THROW_IE_EXCEPTION << "Unsupported device metric: " << name;
    }
}
NOTE: If an unsupported metric key is passed to the function, it must throw an exception.
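For illustration, an application could query these metrics through the Core API roughly as in the sketch below; the "TEMPLATE" device name is an assumption.

```cpp
#include <inference_engine.hpp>
#include <string>
#include <vector>

int main() {
    InferenceEngine::Core core;

    // Both calls are dispatched to Plugin::GetMetric of the selected device
    auto fullName = core.GetMetric("TEMPLATE", METRIC_KEY(FULL_DEVICE_NAME)).as<std::string>();
    auto capabilities = core.GetMetric("TEMPLATE", METRIC_KEY(OPTIMIZATION_CAPABILITIES)).as<std::vector<std::string>>();
    return 0;
}
```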
ImportNetworkImpl()
The network import mechanism allows importing a previously exported backend-specific graph and wrapping it into an ExecutableNetwork object. This functionality is useful if backend-specific graph compilation takes significant time and/or cannot be done on a target host device for other reasons.

Implementation details: The base plugin class InferenceEngine::InferencePluginInternal implements InferenceEngine::InferencePluginInternal::ImportNetwork as follows: it exports a device type (InferenceEngine::InferencePluginInternal::_pluginName) and then calls ImportNetworkImpl, which is implemented in a derived class. If a plugin cannot use the base implementation of InferenceEngine::InferencePluginInternal::ImportNetwork, it can override it and define an output blob structure up to its needs. This can be useful if a plugin exports a blob in a special format for integration with other frameworks where the common Inference Engine header written by the base class implementation is not appropriate.
During the export of a backend-specific graph using ExecutableNetwork::Export, a plugin may export any type of information it needs to import the compiled graph properly and check its correctness. For example, the export information may include:
- Compilation options (state of the Plugin::_cfg structure).
- Information about a plugin and a device type, to check this information later during the import and throw an exception if the model stream contains wrong data. For example, if devices have different capabilities and a graph compiled for a particular device cannot be used for another, such information must be stored and checked during the import.
- The compiled backend-specific graph itself.
- Information about precisions and shapes set by the user.
OV_ITT_SCOPED_TASK(itt::domains::TemplatePlugin, "Plugin::ImportNetworkImpl");
Configuration cfg(config);
auto exec_network_impl = std::make_shared<ExecutableNetwork>(model, cfg,
std::static_pointer_cast<Plugin>(shared_from_this()));
}
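The sketch below shows a possible application-side export/import round trip that ends up in this method; the "TEMPLATE" device name, the model path, and the blob file name are assumptions.

```cpp
#include <inference_engine.hpp>
#include <fstream>

int main() {
    InferenceEngine::Core core;
    auto network = core.ReadNetwork("model.xml");  // assumed model path
    auto executableNetwork = core.LoadNetwork(network, "TEMPLATE");

    // Export a compiled graph in the plugin-specific format
    std::ofstream outFile("model.blob", std::ios::binary);
    executableNetwork.Export(outFile);
    outFile.close();

    // Later, import it back; Core dispatches this call to Plugin::ImportNetworkImpl
    std::ifstream inFile("model.blob", std::ios::binary);
    auto importedNetwork = core.ImportNetwork(inFile, "TEMPLATE", {});
    return 0;
}
```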
Create Instance of Plugin Class
An Inference Engine plugin library must export only one function, which creates a plugin instance, using the IE_DEFINE_PLUGIN_CREATE_FUNCTION macro:
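A minimal sketch of this exported entry point is shown below; the version descriptor and the "templatePlugin" name are assumptions based on the Template plugin, and CI_BUILD_NUMBER is assumed to be defined by the build system.

```cpp
// Describes the plugin version and defines the exported CreatePluginEngine entry point
static const InferenceEngine::Version version = {{2, 1}, CI_BUILD_NUMBER, "templatePlugin"};
IE_DEFINE_PLUGIN_CREATE_FUNCTION(Plugin, version)
```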
The next step in the plugin library implementation is the ExecutableNetwork class.