Group Plugin base classes#

group ov_dev_api_plugin_api

A set of base and helper classes to implement a plugin class.

Defines

OV_CREATE_PLUGIN#

Defines the name of the function that creates a plugin instance.

OV_DEFINE_PLUGIN_CREATE_FUNCTION(PluginType, version, ...)#

Defines the exported OV_CREATE_PLUGIN function which is used to create a plugin instance.
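As a minimal sketch of the pattern these macros implement (the real macro bodies live in the OpenVINO plugin API headers and also embed version information; all names below are illustrative mocks, not the actual expansions):

```cpp
#include <memory>
#include <string>

// Minimal mock of the plugin create-function pattern. The real
// OV_CREATE_PLUGIN / OV_DEFINE_PLUGIN_CREATE_FUNCTION macros are defined by
// the OpenVINO plugin API; the names and signatures here are illustrative.
struct IPlugin {
    virtual ~IPlugin() = default;
    virtual std::string device_name() const = 0;
};

#define MOCK_CREATE_PLUGIN create_plugin_engine
#define MOCK_DEFINE_PLUGIN_CREATE_FUNCTION(PluginType)   \
    extern "C" IPlugin* MOCK_CREATE_PLUGIN() {           \
        return new PluginType();                         \
    }

struct MyPlugin : IPlugin {
    std::string device_name() const override { return "MY_DEVICE"; }
};

// Expands to the exported factory function the Core looks up by name
// when it loads the plugin's shared library.
MOCK_DEFINE_PLUGIN_CREATE_FUNCTION(MyPlugin)
```

Exporting a single C-named factory function keeps the plugin's ABI surface minimal: the Core only needs `dlsym`/`GetProcAddress` on one well-known symbol.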

Variables

static constexpr Property<std::vector<PropertyName>, PropertyMutability::RO> caching_properties = {"CACHING_PROPERTIES"}#

Read-only property to get a std::vector<PropertyName> of properties which should affect the hash calculation for model cache.

static constexpr Property<bool, PropertyMutability::RW> exclusive_async_requests = {"EXCLUSIVE_ASYNC_REQUESTS"}#

Allows exclusive asynchronous requests to be created with a single shared executor.

static constexpr Property<std::string, PropertyMutability::WO> config_device_id = {"CONFIG_DEVICE_ID"}#

Write-only property that sets the device for which subsequent config values are updated. Values: device IDs starting from “0” for the first device, “1” for the second device, and so on. Note: a plugin may use its own device naming convention.

static constexpr Property<int32_t, PropertyMutability::RW> threads_per_stream = {"THREADS_PER_STREAM"}#

Limit #threads that are used by IStreamsExecutor to execute parallel_for calls.

static constexpr Property<std::string, PropertyMutability::RO> compiled_model_runtime_properties{"COMPILED_MODEL_RUNTIME_PROPERTIES"}#

It contains compiled_model_runtime_properties information that lets the plugin runtime check whether it is compatible with a cached compiled model; the result is returned by a get_property() call.

The information details are defined by the plugin itself, and each plugin may require different runtime contents. For example, the CPU plugin may store the OpenVINO version, while the GPU plugin may store both the OpenVINO and GPU driver versions. Core does not interpret the content; it only reads it from the plugin and writes it into the blob header.

static constexpr Property<bool, PropertyMutability::RO> compiled_model_runtime_properties_supported{"COMPILED_MODEL_RUNTIME_PROPERTIES_SUPPORTED"}#

Check whether the attached compiled_model_runtime_properties is supported by this device runtime.

static constexpr Property<float, PropertyMutability::RW> query_model_ratio = {"QUERY_MODEL_RATIO"}#

Read-write property to set the percentage of the estimated model size which is used to determine the query model results for further processing.
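The `Property<T, M>` entries above are compile-time-typed keys over plain string names. A self-contained sketch of the pattern (a simplified stand-in, not the real `ov::Property` implementation):

```cpp
#include <string>

// Simplified stand-in for ov::Property: the template parameters carry the
// value type and mutability at compile time, while at runtime the key is
// just its string name.
enum class PropertyMutability { RO, RW, WO };

template <typename T, PropertyMutability M>
struct Property {
    const char* name;
    constexpr Property(const char* n) : name(n) {}
};

// Mirrors the declarations above: the template arguments document that the
// value is a float and the key is read-write.
static constexpr Property<float, PropertyMutability::RW>
    query_model_ratio{"QUERY_MODEL_RATIO"};
```

Because the type and mutability live in the template arguments, callers get type checking at compile time even though the underlying store is keyed by strings.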

interface ICore#
#include <icore.hpp>

Minimal ICore interface that allows a plugin to get information from the OpenVINO Core class.

Public Functions

virtual std::shared_ptr<ov::Model> read_model(const std::string &model, const ov::Tensor &weights, bool frontend_mode = false) const = 0#

Reads a model from an IR string and a weights tensor.

Parameters:
  • model – string with IR

  • weights – Tensor with constant weights

  • frontend_mode – read network without post-processing or other transformations

Returns:

shared pointer to ov::Model

virtual std::shared_ptr<ov::Model> read_model(const std::string &model_path, const std::string &bin_path) const = 0#

Reads IR xml and bin files.

Parameters:
  • model_path – path to IR file

  • bin_path – Path to the bin file. If the path is empty, the reader tries a bin file with the same name as the xml file; if no such file is found, the IR is loaded without weights.

Returns:

shared pointer to ov::Model
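The bin_path fallback described above can be sketched as follows; `resolve_bin_path` is a hypothetical helper for illustration, not an OpenVINO function:

```cpp
#include <filesystem>
#include <string>

// Hypothetical helper illustrating the bin_path fallback rule: an explicit
// path wins; otherwise look for a .bin file next to the .xml; if none
// exists, return an empty path so the IR is loaded without weights.
std::string resolve_bin_path(const std::string& model_path,
                             const std::string& bin_path) {
    if (!bin_path.empty())
        return bin_path;
    std::filesystem::path candidate{model_path};
    candidate.replace_extension(".bin");
    if (std::filesystem::exists(candidate))
        return candidate.string();
    return {};  // no matching weights file: load the model without weights
}
```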

virtual ov::SoPtr<ov::ICompiledModel> compile_model(const std::shared_ptr<const ov::Model> &model, const std::string &device_name, const ov::AnyMap &config = {}) const = 0#

Creates a compiled model from a model object.

Users can create as many models as they need and use them simultaneously (up to the limitation of the hardware resources)

Parameters:
  • model – OpenVINO Model

  • device_name – Name of device to load model to

  • config – Optional map of pairs: (config parameter name, config parameter value) relevant only for this load operation

Returns:

A pointer to compiled model

virtual ov::SoPtr<ov::ICompiledModel> compile_model(const std::shared_ptr<const ov::Model> &model, const ov::SoPtr<ov::IRemoteContext> &context, const ov::AnyMap &config = {}) const = 0#

Creates a compiled model from a model object.

Users can create as many models as they need and use them simultaneously (up to the limitation of the hardware resources)

Parameters:
  • model – OpenVINO Model

  • context – “Remote” (non-CPU) accelerator device-specific execution context to use

  • config – Optional map of pairs: (config parameter name, config parameter value) relevant only for this load operation

Returns:

A pointer to compiled model

virtual ov::SoPtr<ov::ICompiledModel> compile_model(const std::string &model_path, const std::string &device_name, const ov::AnyMap &config) const = 0#

Creates a compiled model from a model file.

Users can create as many models as they need and use them simultaneously (up to the limitation of the hardware resources)

Parameters:
  • model_path – Path to model

  • device_name – Name of device to load model to

  • config – Optional map of pairs: (config parameter name, config parameter value) relevant only for this load operation

Returns:

A pointer to compiled model

virtual ov::SoPtr<ov::ICompiledModel> compile_model(const std::string &model_str, const ov::Tensor &weights, const std::string &device_name, const ov::AnyMap &config) const = 0#

Creates a compiled model from a model memory.

Users can create as many models as they need and use them simultaneously (up to the limitation of the hardware resources)

Parameters:
  • model_str – String data of model

  • weights – Model’s weights

  • device_name – Name of device to load model to

  • config – Optional map of pairs: (config parameter name, config parameter value) relevant only for this load operation

Returns:

A pointer to compiled model

virtual ov::SoPtr<ov::ICompiledModel> import_model(std::istream &model, const std::string &device_name, const ov::AnyMap &config = {}) const = 0#

Creates a compiled model from a previously exported model.

Parameters:
  • model – model stream

  • device_name – Name of the device to load the compiled model on

  • config – Optional map of pairs: (config parameter name, config parameter value) relevant only for this load operation

Returns:

A pointer to compiled model

virtual ov::SoPtr<ov::ICompiledModel> import_model(std::istream &modelStream, const ov::SoPtr<ov::IRemoteContext> &context, const ov::AnyMap &config = {}) const = 0#

Creates a compiled model from a previously exported model.

Parameters:
  • modelStream – Model stream

  • context – Remote context

  • config – Optional map of pairs: (config parameter name, config parameter value) relevant only for this load operation

Returns:

A pointer to compiled model

virtual ov::SupportedOpsMap query_model(const std::shared_ptr<const ov::Model> &model, const std::string &device_name, const ov::AnyMap &config) const = 0#

Queries a device whether it supports the specified model with the specified configuration.

Parameters:
  • model – OpenVINO Model

  • device_name – A name of a device to query

  • config – Optional map of pairs: (config parameter name, config parameter value)

Returns:

An object containing a map of pairs: a layer name -> the name of a device supporting this layer.

virtual std::vector<std::string> get_available_devices() const = 0#

Returns devices available for neural network inference.

Returns:

A vector of devices. The devices are returned as { CPU, GPU.0, GPU.1, MYRIAD }. If there is more than one device of a specific type, they are enumerated with a .# suffix.
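The `.#` enumeration convention can be illustrated with a hypothetical helper (not an OpenVINO function):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical helper showing the enumeration convention used by
// get_available_devices(): a single device of a type keeps its bare name,
// while multiple devices of the same type are suffixed ".0", ".1", and so on.
std::vector<std::string> enumerate_devices(const std::string& type,
                                           std::size_t count) {
    std::vector<std::string> names;
    if (count == 1) {
        names.push_back(type);
    } else {
        for (std::size_t i = 0; i < count; ++i)
            names.push_back(type + "." + std::to_string(i));
    }
    return names;
}
```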

virtual ov::SoPtr<ov::IRemoteContext> create_context(const std::string &device_name, const AnyMap &args) const = 0#

Create a new shared context object on specified accelerator device using specified plugin-specific low level device API parameters (device handle, pointer, etc.)

Parameters:
  • device_name – Name of a device to create new shared context on.

  • args – Map of device-specific shared context parameters.

Returns:

A shared pointer to a created remote context.

virtual ov::SoPtr<ov::IRemoteContext> get_default_context(const std::string &device_name) const = 0#

Get a pointer to default shared context object for the specified device.

Parameters:

device_name – Name of the device to get the default shared context from.

Returns:

A shared pointer to a default remote context.

virtual Any get_property(const std::string &device_name, const std::string &name, const AnyMap &arguments) const = 0#

Gets properties related to device behaviour.

Parameters:
  • device_name – Name of a device to get a property value.

  • name – Property name.

  • arguments – Additional arguments to get a property.

Returns:

Value of a property corresponding to the property name.

template<typename T, PropertyMutability M>
inline T get_property(const std::string &device_name, const Property<T, M> &property) const#

Gets properties related to device behaviour.

Template Parameters:
  • T – Type of a returned value.

  • M – Property mutability.

Parameters:
  • device_name – Name of a device to get a property value.

  • property – Property object.

Returns:

Property value.

template<typename T, PropertyMutability M>
inline T get_property(const std::string &device_name, const Property<T, M> &property, const AnyMap &arguments) const#

Gets properties related to device behaviour.

Template Parameters:
  • T – Type of a returned value.

  • M – Property mutability.

Parameters:
  • device_name – Name of a device to get a property value.

  • property – Property object.

  • arguments – Additional arguments to get a property.

Returns:

Property value.
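The templated overloads are thin typed wrappers over the string-based get_property. A self-contained sketch of that forwarding, using std::any in place of ov::Any and a simplified Property type:

```cpp
#include <any>
#include <map>
#include <string>

// Simplified stand-ins (not the real OpenVINO types) used to illustrate
// how a typed overload forwards to an untyped, string-keyed lookup.
enum class PropertyMutability { RO, RW, WO };

template <typename T, PropertyMutability M>
struct Property {
    const char* name;
    constexpr Property(const char* n) : name(n) {}
};

using AnyMap = std::map<std::string, std::any>;

// The typed overload passes the property's string name to the untyped store
// and casts the result back to T, so callers never repeat the key or type.
template <typename T, PropertyMutability M>
T get_property(const AnyMap& store, const Property<T, M>& property) {
    return std::any_cast<T>(store.at(property.name));
}
```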

virtual AnyMap get_supported_property(const std::string &full_device_name, const AnyMap &properties, const bool keep_core_property = true) const = 0#

Get only properties that are supported by specified device.

Parameters:
  • full_device_name – Name of a device (can be either virtual or hardware)

  • properties – Properties, possibly containing configs that are not supported by the device

  • keep_core_property – Whether to return core-level properties

Returns:

map of properties that are supported by device
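The filtering behavior can be sketched with a hypothetical helper (string values in place of ov::Any; the real implementation also handles virtual devices):

```cpp
#include <map>
#include <set>
#include <string>

using PropertyMap = std::map<std::string, std::string>;  // simplified AnyMap

// Hypothetical sketch of get_supported_property: keep only the entries the
// device reports as supported, optionally retaining core-level keys such as
// CACHE_DIR that Core itself consumes rather than the device.
PropertyMap filter_supported(const PropertyMap& properties,
                             const std::set<std::string>& device_supported,
                             bool keep_core_property) {
    static const std::set<std::string> core_keys{"CACHE_DIR"};
    PropertyMap result;
    for (const auto& [key, value] : properties) {
        if (device_supported.count(key) != 0 ||
            (keep_core_property && core_keys.count(key) != 0)) {
            result.emplace(key, value);
        }
    }
    return result;
}
```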

virtual ~ICore()#

Default virtual destructor.