Group Plugin base classes¶
- group ov_dev_api_plugin_api
A set of base and helper classes to implement a plugin class.
Defines
-
OV_CREATE_PLUGIN¶
Defines a name of a function creating plugin instance.
-
OV_DEFINE_PLUGIN_CREATE_FUNCTION(PluginType, version, ...)¶
Defines the exported
OV_CREATE_PLUGIN
function which is used to create a plugin instance.
Variables
-
static constexpr Property<std::vector<PropertyName>, PropertyMutability::RO> caching_properties = {"CACHING_PROPERTIES"}¶
Read-only property to get a std::vector<PropertyName> of properties which should affect the hash calculation for model cache.
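To illustrate how cache-affecting properties could feed a hash, here is a minimal self-contained sketch that combines property name/value pairs into one hash value. This is a hypothetical illustration using `std::hash`, not OpenVINO's actual model-cache hashing code.

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Hypothetical sketch: mix the values of cache-affecting properties into a
// single hash, as a plugin-side model-cache key might do.
std::size_t hash_caching_properties(
        const std::vector<std::pair<std::string, std::string>>& props) {
    std::size_t seed = 0;
    std::hash<std::string> h;
    for (const auto& p : props) {
        // Classic hash-combine: fold each "name=value" pair into the seed.
        seed ^= h(p.first + "=" + p.second) + 0x9e3779b9 + (seed << 6) + (seed >> 2);
    }
    return seed;
}
```

Two models compiled with the same values for these properties would map to the same cache key, while a change in any listed property invalidates the cached blob.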
-
static constexpr Property<bool, PropertyMutability::RW> exclusive_async_requests = {"EXCLUSIVE_ASYNC_REQUESTS"}¶
Allows creating exclusive asynchronous requests that share a single executor.
-
static constexpr Property<std::string, PropertyMutability::WO> config_device_id = {"CONFIG_DEVICE_ID"}¶
Write-only property for setting the device whose configuration should be updated. Values are device IDs starting from “0” (first device), “1” (second device), and so on. Note: plugins may use different device naming conventions.
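As a small sketch of the convention above, the following mock builds a config map that targets the second device via this property. The map type is a plain `std::map` stand-in for `ov::AnyMap`, used purely for illustration.

```cpp
#include <map>
#include <string>

// Illustrative mock (not the real plugin API): build a config map that
// directs the plugin to update the configuration of a specific device.
using AnyMapLike = std::map<std::string, std::string>;

AnyMapLike make_device_config(const std::string& device_id) {
    AnyMapLike config;
    config["CONFIG_DEVICE_ID"] = device_id;  // "0" = first device, "1" = second, ...
    return config;
}
```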
-
static constexpr Property<ov::threading::IStreamsExecutor::ThreadBindingType, PropertyMutability::RW> cpu_bind_thread{"CPU_BIND_THREAD"}¶
The name for setting CPU affinity per thread option.
It is passed to Core::get_property().
The following options are implemented only when TBB is used as the threading option:
ov::threading::IStreamsExecutor::ThreadBindingType::NUMA pins threads to NUMA nodes (best for real-life, contended cases); on Windows and macOS this option behaves as YES.
ov::threading::IStreamsExecutor::ThreadBindingType::HYBRID_AWARE lets the runtime pin threads to core types, e.g. preferring the “big” cores for latency-critical tasks; on hybrid CPUs this option is the default.
These settings are also ignored if OpenVINO is compiled with OpenMP and any affinity-related OpenMP environment variable is set (as affinity is then configured explicitly).
-
static constexpr Property<size_t, PropertyMutability::RW> threads_per_stream = {"THREADS_PER_STREAM"}¶
Limits the number of threads used by IStreamsExecutor to execute parallel_for calls.
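To show what such a limit controls, here is a self-contained mock of a capped parallel loop built on plain `std::thread`. It is an assumption-laden illustration of the concept, not the IStreamsExecutor implementation.

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Illustrative sketch of what THREADS_PER_STREAM controls: capping the
// number of worker threads spawned for a parallel loop over n items.
void parallel_for_capped(std::size_t n, std::size_t threads_per_stream,
                         const std::function<void(std::size_t)>& body) {
    std::size_t workers = std::max<std::size_t>(1, std::min(threads_per_stream, n));
    std::vector<std::thread> pool;
    for (std::size_t t = 0; t < workers; ++t) {
        pool.emplace_back([=, &body] {
            for (std::size_t i = t; i < n; i += workers)  // strided partition
                body(i);
        });
    }
    for (auto& th : pool) th.join();
}
```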
-
static constexpr Property<std::string, PropertyMutability::RO> compiled_model_runtime_properties{"COMPILED_MODEL_RUNTIME_PROPERTIES"}¶
Contains compiled_model_runtime_properties information that lets the plugin runtime check whether it is compatible with a cached compiled model; the result is returned by a get_property() call.
The information details are defined by each plugin, and different plugins may require different runtime contents. For example, the CPU plugin may include the OpenVINO version, while the GPU plugin may include the OpenVINO and GPU driver versions. The Core does not interpret the content; it only reads it from the plugin and writes it into the blob header.
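The compatibility check described above can be sketched as follows. The property string content and the exact-match policy are assumptions for illustration; each real plugin defines its own format and comparison.

```cpp
#include <string>

// Hypothetical sketch: a plugin encodes its runtime environment as an opaque
// string, stores it in the cache blob header, and on import compares it
// against the current environment to decide whether the cached blob is valid.
std::string current_runtime_properties() {
    return "OV_VERSION=2024.0;DRIVER=531.41";  // assumed example content
}

bool cached_blob_compatible(const std::string& stored_properties) {
    // Simplest possible policy: require an exact match.
    return stored_properties == current_runtime_properties();
}
```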
-
static constexpr Property<bool, PropertyMutability::RO> compiled_model_runtime_properties_supported{"COMPILED_MODEL_RUNTIME_PROPERTIES_SUPPORTED"}¶
Check whether the attached compiled_model_runtime_properties is supported by this device runtime.
-
interface ICore
- #include <icore.hpp>
Minimal ICore interface that allows a plugin to get information from the OpenVINO Core class.
Subclassed by InferenceEngine::ICore
Public Functions
-
virtual std::shared_ptr<ov::Model> read_model(const std::string &model, const ov::Tensor &weights, bool frontend_mode = false) const = 0
Reads IR xml and bin (with the same name) files.
- Parameters
model – string with IR
weights – constant tensor with weights data
frontend_mode – read network without post-processing or other transformations
- Returns
shared pointer to ov::Model
-
virtual std::shared_ptr<ov::Model> read_model(const std::string &model_path, const std::string &bin_path) const = 0
Reads IR xml and bin files.
- Parameters
model_path – path to IR file
bin_path – path to the bin file; if the path is empty, an attempt is made to read a bin file with the same name as the xml file, and if no such bin file is found, the IR is loaded without weights.
- Returns
shared pointer to ov::Model
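The bin-path fallback described for this overload can be sketched with `std::filesystem`. This is an illustrative reconstruction of the documented behavior, not OpenVINO's actual code.

```cpp
#include <filesystem>
#include <string>

// Sketch of the fallback: when bin_path is empty, look for a .bin file next
// to the .xml file; if none exists, the IR is loaded without weights
// (represented here by returning an empty path).
std::string resolve_bin_path(const std::string& model_path, const std::string& bin_path) {
    if (!bin_path.empty())
        return bin_path;
    std::filesystem::path candidate =
        std::filesystem::path(model_path).replace_extension(".bin");
    return std::filesystem::exists(candidate) ? candidate.string() : std::string{};
}
```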
Creates a compiled model from a model object.
Users can create as many models as they need and use them simultaneously (up to the limitation of the hardware resources)
- Parameters
model – OpenVINO Model
device_name – Name of device to load model to
config – Optional map of pairs: (config parameter name, config parameter value) relevant only for this load operation
- Returns
A pointer to compiled model
Creates a compiled model from a model object.
Users can create as many models as they need and use them simultaneously (up to the limitation of the hardware resources)
- Parameters
model – OpenVINO Model
context – “Remote” (non-CPU) accelerator device-specific execution context to use
config – Optional map of pairs: (config parameter name, config parameter value) relevant only for this load operation
- Returns
A pointer to compiled model
-
virtual ov::SoPtr<ov::ICompiledModel> compile_model(const std::string &model_path, const std::string &device_name, const ov::AnyMap &config) const = 0
Creates a compiled model from a model file.
Users can create as many models as they need and use them simultaneously (up to the limitation of the hardware resources)
- Parameters
model_path – Path to model
device_name – Name of device to load model to
config – Optional map of pairs: (config parameter name, config parameter value) relevant only for this load operation
- Returns
A pointer to compiled model
-
virtual ov::SoPtr<ov::ICompiledModel> compile_model(const std::string &model_str, const ov::Tensor &weights, const std::string &device_name, const ov::AnyMap &config) const = 0
Creates a compiled model from a model memory.
Users can create as many models as they need and use them simultaneously (up to the limitation of the hardware resources)
- Parameters
model_str – String data of model
weights – Model’s weights
device_name – Name of device to load model to
config – Optional map of pairs: (config parameter name, config parameter value) relevant only for this load operation
- Returns
A pointer to compiled model
-
virtual ov::SoPtr<ov::ICompiledModel> import_model(std::istream &model, const std::string &device_name, const ov::AnyMap &config = {}) const = 0
Creates a compiled model from a previously exported model.
- Parameters
model – model stream
device_name – Name of a device to load the model on
config – Optional map of pairs: (config parameter name, config parameter value) relevant only for this load operation
- Returns
A pointer to compiled model
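To make the stream-based contract concrete, here is a self-contained export/import round trip over a `std::iostream`. The blob layout (a header line followed by a payload) is invented for illustration; the real exported-blob format is plugin-specific.

```cpp
#include <istream>
#include <iterator>
#include <ostream>
#include <sstream>
#include <string>
#include <utility>

// Illustrative export/import round trip mimicking the export_model /
// import_model contract (not the real blob format).
void export_blob(std::ostream& out, const std::string& runtime_props,
                 const std::string& payload) {
    out << runtime_props << '\n' << payload;
}

std::pair<std::string, std::string> import_blob(std::istream& in) {
    std::string props;
    std::getline(in, props);  // header line first
    std::string payload((std::istreambuf_iterator<char>(in)),
                        std::istreambuf_iterator<char>());
    return {props, payload};
}
```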
-
virtual ov::SoPtr<ov::ICompiledModel> import_model(std::istream &modelStream, const ov::SoPtr<ov::IRemoteContext> &context, const ov::AnyMap &config = {}) const = 0
Creates a compiled model from a previously exported model.
- Parameters
modelStream – model stream
context – Remote context
config – Optional map of pairs: (config parameter name, config parameter value) relevant only for this load operation
- Returns
A pointer to compiled model
Queries a device whether it supports the specified model with the specified configuration.
- Parameters
model – OpenVINO Model
device_name – A name of a device to query
config – Optional map of pairs: (config parameter name, config parameter value)
- Returns
An object containing a map of pairs a layer name -> a device name supporting this layer.
-
virtual std::vector<std::string> get_available_devices() const = 0
Returns devices available for neural network inference.
- Returns
A vector of devices. The devices are returned as { CPU, GPU.0, GPU.1, MYRIAD }. If there is more than one device of a specific type, they are enumerated with a .# suffix.
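The `.#` enumeration convention can be parsed as follows. This helper is a hypothetical illustration, not part of the OpenVINO API.

```cpp
#include <string>
#include <utility>

// Sketch: split an enumerated device name like "GPU.1" into its type and
// index, following the ".#" suffix convention described above.
std::pair<std::string, int> parse_device_name(const std::string& name) {
    auto dot = name.rfind('.');
    if (dot == std::string::npos)
        return {name, 0};  // single device of its type, e.g. "CPU"
    return {name.substr(0, dot), std::stoi(name.substr(dot + 1))};
}
```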
-
virtual ov::SoPtr<ov::IRemoteContext> create_context(const std::string &device_name, const AnyMap &args) const = 0
Creates a new shared context object on the specified accelerator device using specified plugin-specific low-level device API parameters (device handle, pointer, etc.).
- Parameters
device_name – Name of a device to create new shared context on.
args – Map of device-specific shared context parameters.
- Returns
A shared pointer to a created remote context.
-
virtual ov::SoPtr<ov::IRemoteContext> get_default_context(const std::string &device_name) const = 0
Get a pointer to default shared context object for the specified device.
- Parameters
device_name – Name of a device to get the default shared context from.
- Returns
A shared pointer to a default remote context.
-
virtual Any get_property(const std::string &device_name, const std::string &name, const AnyMap &arguments) const = 0
Gets properties related to device behaviour.
- Parameters
device_name – Name of a device to get a property value.
name – Property name.
arguments – Additional arguments to get a property.
- Returns
Value of a property corresponding to the property name.
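The relation between this type-erased virtual call and the typed template wrappers below can be sketched with `std::any`. This mock is an assumption for illustration; the real API returns `ov::Any` and is implemented by the Core.

```cpp
#include <any>
#include <map>
#include <string>

// Minimal mock of the typed get_property convenience pattern: the virtual
// call returns a type-erased value, and a template wrapper casts it back to
// the property's static type. Not OpenVINO's actual implementation.
struct MockCore {
    std::map<std::string, std::any> values;

    std::any get_property(const std::string& name) const {
        return values.at(name);
    }

    template <typename T>
    T get_property_as(const std::string& name) const {
        return std::any_cast<T>(get_property(name));  // throws on type mismatch
    }
};
```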
-
template<typename T, PropertyMutability M>
inline T get_property(const std::string &device_name, const Property<T, M> &property) const
Gets properties related to device behaviour.
-
template<typename T, PropertyMutability M>
inline T get_property(const std::string &device_name, const Property<T, M> &property, const AnyMap &arguments) const
Gets properties related to device behaviour.
-
virtual AnyMap get_supported_property(const std::string &full_device_name, const AnyMap &properties) const = 0
Gets only the properties that are supported by the specified device.
- Parameters
full_device_name – Name of a device (can be either virtual or hardware)
properties – Properties that may contain configs not supported by the device
- Returns
map of properties that are supported by device
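The filtering contract can be sketched as below. The set of supported keys is an assumption standing in for what the real call obtains from the plugin.

```cpp
#include <map>
#include <set>
#include <string>

// Sketch of the filtering contract: keep only the entries whose keys the
// device reports as supported. (Illustrative; the real call asks the plugin.)
std::map<std::string, std::string> filter_supported(
        const std::map<std::string, std::string>& properties,
        const std::set<std::string>& supported_keys) {
    std::map<std::string, std::string> result;
    for (const auto& kv : properties)
        if (supported_keys.count(kv.first))
            result.insert(kv);
    return result;
}
```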
-
virtual ~ICore()
Default virtual destructor.