Query Device Properties - Configuration¶
The OpenVINO™ toolkit supports inference with several types of devices (processors or accelerators). This section provides a high-level description of querying different device properties and configuration values at runtime.
OpenVINO runtime has two types of properties:
- Read-only properties, which provide information about the devices (such as device name, thermal state, execution capabilities, etc.) and about configuration values used to compile the model (ov::CompiledModel).
- Mutable properties, which are primarily used to configure the ov::Core::compile_model process and affect final inference on a specific set of devices. Such properties can be set globally per device via ov::Core::set_property or locally for a particular model in the ov::Core::compile_model and ov::Core::query_model calls.
An OpenVINO property is represented as a named constexpr variable with a given string name and a type. The following example represents a read-only property with a C++ name of ov::available_devices, a string name of AVAILABLE_DEVICES, and a type of std::vector&lt;std::string&gt;:
static constexpr Property<std::vector<std::string>, PropertyMutability::RO> available_devices{"AVAILABLE_DEVICES"};
Refer to the Hello Query Device C++ Sample sources and the Multi-Device execution documentation for examples of setting and getting properties in user applications.
Get a Set of Available Devices¶
Based on the ov::available_devices read-only property, OpenVINO Core collects information about the devices currently available through OpenVINO plugins and returns it via the ov::Core::get_available_devices method:
ov::Core core;
std::vector<std::string> available_devices = core.get_available_devices();
core = Core()
available_devices = core.available_devices
The function returns a list of available devices, for example:
MYRIAD.1.2-ma2480
MYRIAD.1.4-ma2480
CPU
GPU.0
GPU.1
If there are multiple instances of a specific device, they are enumerated with a suffix: a full stop followed by a unique string identifier (for example, GPU.0 and GPU.1). Each device name can then be passed to:
- ov::Core::compile_model to load the model to a specific device with specific configuration properties.
- ov::Core::get_property to get common or device-specific properties.
- All other methods of the ov::Core class that accept deviceName.
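Since each enumerated entry follows the `<device>.<suffix>` pattern, splitting a name into its device type and instance identifier is straightforward. Below is a minimal, OpenVINO-independent sketch; the `parse_device_name` helper is hypothetical, not part of the OpenVINO API:

```python
def parse_device_name(name: str):
    """Split an enumerated device name such as 'GPU.0' or
    'MYRIAD.1.2-ma2480' into (device type, unique suffix).
    Names without a suffix, such as 'CPU', return an empty suffix."""
    device, _, suffix = name.partition(".")
    return device, suffix

# The enumerated names from the listing above:
for name in ["MYRIAD.1.2-ma2480", "CPU", "GPU.0", "GPU.1"]:
    print(parse_device_name(name))
```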
Working with Properties in Your Code¶
The ov::Core class provides the following methods to query device information and to set or get device configuration properties:
- ov::Core::get_property - gets the current value of a specific property.
- ov::Core::set_property - sets a new value for the property globally for the specified device_name.
The ov::CompiledModel class is also extended to support the properties, via the ov::CompiledModel::get_property and ov::CompiledModel::set_property methods.
For documentation about OpenVINO common device-independent properties, refer to openvino/runtime/properties.hpp. Device-specific configuration keys can be found in the corresponding device folders (for example, openvino/runtime/intel_gpu/properties.hpp).
Working with Properties via Core¶
Getting Device Properties¶
The code below demonstrates how to query the HETERO device for the priority of devices that will be used to infer the model:
auto device_priorities = core.get_property("HETERO", ov::device::priorities);
device_priorities = core.get_property("HETERO", "MULTI_DEVICE_PRIORITIES")
Note
All properties have a type, which is specified during property declaration. Based on it, the actual type under auto is automatically deduced by the C++ compiler.
To extract device properties such as available devices (ov::available_devices
), device name (ov::device::full_name
), supported properties (ov::supported_properties
), and others, use the ov::Core::get_property
method:
auto cpu_device_name = core.get_property("CPU", ov::device::full_name);
cpu_device_name = core.get_property("CPU", "FULL_DEVICE_NAME")
A returned value appears as follows: Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz
.
Note
To get the list of supported properties at the ov::Core or ov::CompiledModel level, use ov::supported_properties, which contains a vector of supported property names. Properties that can be changed have ov::PropertyName::is_mutable returning true. Most of the properties that are mutable at the ov::Core level can no longer be changed once the model is compiled, so they become immutable, read-only properties.
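The mutable/read-only split described in the note can be illustrated without an OpenVINO installation. The sketch below uses a stand-in class (not the real ov::PropertyName) to show the intended pattern; with a real installation you would iterate over the names returned for the SUPPORTED_PROPERTIES key instead:

```python
from dataclasses import dataclass

@dataclass
class PropertyName:
    """Stand-in for ov::PropertyName: a string name plus a mutability flag."""
    name: str
    mutable: bool

    def is_mutable(self) -> bool:
        return self.mutable

# Hypothetical set of supported properties reported by a device.
supported = [
    PropertyName("FULL_DEVICE_NAME", mutable=False),
    PropertyName("AVAILABLE_DEVICES", mutable=False),
    PropertyName("PERFORMANCE_HINT", mutable=True),
    PropertyName("INFERENCE_NUM_THREADS", mutable=True),
]

mutable_props = [p.name for p in supported if p.is_mutable()]
read_only_props = [p.name for p in supported if not p.is_mutable()]
print(mutable_props)    # configurable before the model is compiled
print(read_only_props)  # informational only
```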
Configure a Work with a Model¶
The ov::Core methods, such as ov::Core::compile_model and ov::Core::query_model, accept a selection of properties as their last arguments. Each of the properties should be passed as a function call carrying a property value of the specified type:
auto compiled_model = core.compile_model(model, "CPU",
ov::hint::performance_mode(ov::hint::PerformanceMode::THROUGHPUT),
ov::hint::inference_precision(ov::element::f32));
config = {"PERFORMANCE_HINT": "THROUGHPUT",
"INFERENCE_PRECISION_HINT": "f32"}
compiled_model = core.compile_model(model, "CPU", config)
The example above hints that the model should be compiled to run multiple inference requests in parallel for the best throughput, while inference should be performed without accuracy loss, using FP32 precision.
Setting Properties Globally¶
ov::Core::set_property
with a given device name should be used to set global configuration properties, which are the same across multiple ov::Core::compile_model
, ov::Core::query_model
, and other calls. However, setting properties on a specific ov::Core::compile_model
call applies properties only for the current call:
// latency hint is the default for CPU
core.set_property("CPU", ov::hint::performance_mode(ov::hint::PerformanceMode::LATENCY));
// compiled with latency configuration hint
auto compiled_model_latency = core.compile_model(model, "CPU");
// compiled with overridden ov::hint::performance_mode value
auto compiled_model_thrp = core.compile_model(model, "CPU",
ov::hint::performance_mode(ov::hint::PerformanceMode::THROUGHPUT));
# latency hint is a default for CPU
core.set_property("CPU", {"PERFORMANCE_HINT": "LATENCY"})
# compiled with latency configuration hint
compiled_model_latency = core.compile_model(model, "CPU")
# compiled with overridden performance hint value
config = {"PERFORMANCE_HINT": "THROUGHPUT"}
compiled_model_thrp = core.compile_model(model, "CPU", config)
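The precedence shown above, where a per-call config overrides the global one for that call only, can be modeled with plain dictionaries. This is a sketch of the merge semantics only, not actual OpenVINO code:

```python
# Global configuration, as if set once via core.set_property(...)
global_config = {"PERFORMANCE_HINT": "LATENCY"}

# Per-call properties, as if passed to a single core.compile_model(...) call
call_config = {"PERFORMANCE_HINT": "THROUGHPUT"}

# For that one compile_model call, per-call values win:
effective = {**global_config, **call_config}
print(effective["PERFORMANCE_HINT"])  # THROUGHPUT

# A later call without per-call properties falls back to the global value:
effective_default = {**global_config}
print(effective_default["PERFORMANCE_HINT"])  # LATENCY
```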
Properties on CompiledModel Level¶
Getting Property¶
The ov::CompiledModel::get_property
method is used to get the property values the compiled model has been created with, or a compiled-model-level property such as ov::optimal_number_of_infer_requests
:
auto compiled_model = core.compile_model(model, "CPU");
auto nireq = compiled_model.get_property(ov::optimal_number_of_infer_requests);
compiled_model = core.compile_model(model, "CPU")
nireq = compiled_model.get_property("OPTIMAL_NUMBER_OF_INFER_REQUESTS")
Or the current temperature of the MYRIAD
device:
auto compiled_model = core.compile_model(model, "MYRIAD");
float temperature = compiled_model.get_property(ov::device::thermal);
compiled_model = core.compile_model(model, "MYRIAD")
temperature = compiled_model.get_property("DEVICE_THERMAL")
Or the number of threads that would be used for inference on the CPU
device:
auto compiled_model = core.compile_model(model, "CPU");
auto nthreads = compiled_model.get_property(ov::inference_num_threads);
compiled_model = core.compile_model(model, "CPU")
nthreads = compiled_model.get_property("INFERENCE_NUM_THREADS")
Setting Properties for Compiled Model¶
The only mode that supports the ov::CompiledModel::set_property method is Multi-Device execution:
auto compiled_model = core.compile_model(model, "MULTI",
ov::device::priorities("CPU", "GPU"));
// change the order of priorities
compiled_model.set_property(ov::device::priorities("GPU", "CPU"));
config = {"MULTI_DEVICE_PRIORITIES": "CPU,GPU"}
compiled_model = core.compile_model(model, "MULTI", config)
# change the order of priorities
compiled_model.set_property({"MULTI_DEVICE_PRIORITIES": "GPU,CPU"})
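The MULTI_DEVICE_PRIORITIES value is a comma-separated string, so changing priorities amounts to rebuilding that string, as in the example above. A small OpenVINO-independent sketch; the `reorder_priorities` helper name is made up for illustration:

```python
def reorder_priorities(priorities: str, first: str) -> str:
    """Move `first` to the front of a comma-separated priority list,
    keeping the relative order of the remaining devices."""
    devices = priorities.split(",")
    if first in devices:
        devices.remove(first)
        devices.insert(0, first)
    return ",".join(devices)

print(reorder_priorities("CPU,GPU", "GPU"))  # GPU,CPU
```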