Query device properties, configuration

The OpenVINO™ toolkit supports inference with several types of devices (processors or accelerators). This section gives a high-level overview of how to query different device properties and configuration values at runtime.

OpenVINO runtime has two types of properties:

  • Read-only properties, which provide information about devices (such as the device name, thermal state, and execution capabilities) and about an ov::CompiledModel, to understand what configuration values were used to compile the model.

  • Mutable properties, which are primarily used to configure the ov::Core::compile_model process and affect final inference on a specific set of devices. Such properties can be set globally per device via ov::Core::set_property, or locally for a particular model in ov::Core::compile_model and ov::Core::query_model calls.
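The distinction between the two property kinds can be modeled with a small sketch. This is illustrative Python only, not OpenVINO code; the Property class below is hypothetical:

```python
# Illustrative sketch (not OpenVINO code): models the split between
# read-only informational properties and mutable configuration properties.
class Property:
    def __init__(self, name, value, mutable):
        self.name = name
        self._value = value
        self.mutable = mutable

    def get(self):
        return self._value

    def set(self, value):
        # Read-only properties reject writes, as the runtime would.
        if not self.mutable:
            raise RuntimeError(f"Property '{self.name}' is read-only")
        self._value = value

available = Property("AVAILABLE_DEVICES", ["CPU", "GPU"], mutable=False)
hint = Property("PERFORMANCE_HINT", "LATENCY", mutable=True)

hint.set("THROUGHPUT")   # allowed: mutable configuration property
# available.set([])      # would raise: read-only informational property
```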

An OpenVINO property is represented as a named constexpr variable with a given string name and type. Example:

static constexpr Property<std::vector<std::string>, PropertyMutability::RO> available_devices{"AVAILABLE_DEVICES"};

represents a read-only property with the C++ name ov::available_devices, the string name AVAILABLE_DEVICES, and the type std::vector<std::string>.

Refer to the Hello Query Device C++ Sample sources and the Multi-Device execution documentation for examples of setting and getting properties in user applications.

Get a set of available devices

Based on the read-only property ov::available_devices, the OpenVINO Core collects information about devices currently available through OpenVINO plugins and returns it via the ov::Core::get_available_devices method:

ov::Core core;
std::vector<std::string> available_devices = core.get_available_devices();
from openvino.runtime import Core

core = Core()
available_devices = core.available_devices

The function returns a list of available devices, for example:

CPU
GPU.0
GPU.1
If there is more than one instance of a specific device, the devices are enumerated with a .suffix, where suffix is a unique string identifier. Each such device name can then be passed to ov::Core::compile_model and other ov::Core methods that accept a device name.
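The enumeration scheme above can be illustrated with a small helper. Note that split_device_name is a hypothetical function for this sketch, not part of the OpenVINO API:

```python
# Hypothetical helper (not an OpenVINO function): splits an enumerated
# device name such as "GPU.1" into its device type and instance suffix.
def split_device_name(name: str):
    device, _, suffix = name.partition(".")
    return device, suffix or None

print(split_device_name("GPU.1"))  # -> ('GPU', '1')
print(split_device_name("CPU"))    # -> ('CPU', None)
```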

Working with properties in Your Code

The ov::Core class provides the ov::Core::get_property and ov::Core::set_property methods to query device information and to get or set device configuration properties.

The ov::CompiledModel class also supports properties via ov::CompiledModel::get_property and ov::CompiledModel::set_property.

For documentation about common device-independent OpenVINO properties, refer to openvino/runtime/properties.hpp. Device-specific configuration keys can be found in the corresponding device folders (for example, openvino/runtime/intel_gpu/properties.hpp).

Working with properties via Core

Getting device properties

The code below demonstrates how to query the priority of the devices the HETERO device will use to infer the model:

auto device_priorities = core.get_property("HETERO", ov::device::priorities);
device_priorities = core.get_property("HETERO", "MULTI_DEVICE_PRIORITIES")


All properties have a type, which is specified at property declaration. Based on this type, the actual type behind auto is automatically deduced by the C++ compiler.

To extract device properties such as available devices (ov::available_devices), device name (ov::device::full_name), supported properties (ov::supported_properties), and others, use the ov::Core::get_property method:

auto cpu_device_name = core.get_property("CPU", ov::device::full_name);
cpu_device_name = core.get_property("CPU", "FULL_DEVICE_NAME")

A returned value appears as follows: Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz.


To see the list of properties supported at the ov::Core or ov::CompiledModel level, use ov::supported_properties, which contains a vector of supported property names. For properties that can be changed, ov::PropertyName::is_mutable returns true. Most of the properties that are changeable at the ov::Core level cannot be changed once the model is compiled; they become immutable, read-only properties.
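An application can use the mutability flag to decide which entries are worth configuring. The sketch below uses hard-coded assumed data rather than a live ov::supported_properties query, purely to illustrate the filtering:

```python
# Sketch with assumed data (not a live query): separating mutable from
# read-only entries in a supported_properties-style listing.
supported = {
    "AVAILABLE_DEVICES": False,      # read-only, informational
    "FULL_DEVICE_NAME": False,       # read-only, informational
    "PERFORMANCE_HINT": True,        # mutable before compilation
    "INFERENCE_NUM_THREADS": True,   # mutable before compilation
}

# Keep only the names whose is_mutable-style flag is True.
mutable_props = sorted(name for name, is_mutable in supported.items() if is_mutable)
print(mutable_props)
```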

Configure work with a model

ov::Core methods such as ov::Core::compile_model and ov::Core::query_model accept a variadic list of properties as their last arguments. Each property in such a parameter list is passed as a function call that supplies the property value with the specified property type.

compiled_model = core.compile_model(model, "CPU", config)

The example below passes hints that the model should be compiled for inference with multiple inference requests in parallel, to achieve the best throughput, while inference should be performed without accuracy loss, in FP32 precision.
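Such a configuration can be sketched as a string-keyed dict in the Python API; treat the exact key and value spellings below as assumptions to verify against your OpenVINO version:

```python
# Sketch of the configuration described above (key/value spellings assumed):
config = {
    "PERFORMANCE_HINT": "THROUGHPUT",    # optimize for parallel inference requests
    "INFERENCE_PRECISION_HINT": "f32",   # keep full FP32 accuracy
}
# compiled_model = core.compile_model(model, "CPU", config)
```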

Setting properties globally

ov::Core::set_property with a given device name should be used to set global configuration properties that stay the same across multiple ov::Core::compile_model, ov::Core::query_model, and similar calls, while setting a property on a specific ov::Core::compile_model call applies it only to that call:

// latency hint is the default for CPU
core.set_property("CPU", ov::hint::performance_mode(ov::hint::PerformanceMode::LATENCY));
// compiled with the latency configuration hint
auto compiled_model_latency = core.compile_model(model, "CPU");
// compiled with an overridden ov::hint::performance_mode value
auto compiled_model_thrp = core.compile_model(model, "CPU",
    ov::hint::performance_mode(ov::hint::PerformanceMode::THROUGHPUT));
# latency hint is the default for CPU
core.set_property("CPU", {"PERFORMANCE_HINT": "LATENCY"})
# compiled with the latency configuration hint
compiled_model_latency = core.compile_model(model, "CPU")
# compiled with an overridden performance hint value
compiled_model_thrp = core.compile_model(model, "CPU", {"PERFORMANCE_HINT": "THROUGHPUT"})
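The precedence described above can be modeled as a dict merge. The CoreModel class below is an illustrative stand-in, not OpenVINO internals: global per-device properties act as defaults, and per-call properties override them for that call only.

```python
# Illustrative model (not OpenVINO internals) of property precedence:
# set_property stores per-device defaults, compile_model-time properties win.
class CoreModel:
    def __init__(self):
        self._global = {}

    def set_property(self, device, props):
        self._global.setdefault(device, {}).update(props)

    def effective_config(self, device, call_props=None):
        merged = dict(self._global.get(device, {}))
        merged.update(call_props or {})  # per-call values override globals
        return merged

core = CoreModel()
core.set_property("CPU", {"PERFORMANCE_HINT": "LATENCY"})
print(core.effective_config("CPU"))                                       # global default
print(core.effective_config("CPU", {"PERFORMANCE_HINT": "THROUGHPUT"}))   # per-call override
```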

Properties on CompiledModel level

Getting property

The ov::CompiledModel::get_property method is used to get the values of properties the compiled model was created with, or compiled-model-level properties such as ov::optimal_number_of_infer_requests:

auto compiled_model = core.compile_model(model, "CPU");
auto nireq = compiled_model.get_property(ov::optimal_number_of_infer_requests);
compiled_model = core.compile_model(model, "CPU")
nireq = compiled_model.get_property("OPTIMAL_NUMBER_OF_INFER_REQUESTS")

Or the current temperature of the MYRIAD device:

auto compiled_model = core.compile_model(model, "MYRIAD");
float temperature = compiled_model.get_property(ov::device::thermal);
compiled_model = core.compile_model(model, "MYRIAD")
temperature = compiled_model.get_property("DEVICE_THERMAL")

Or the number of threads that would be used for inference on the CPU device:

auto compiled_model = core.compile_model(model, "CPU");
auto nthreads = compiled_model.get_property(ov::inference_num_threads);
compiled_model = core.compile_model(model, "CPU")
nthreads = compiled_model.get_property("INFERENCE_NUM_THREADS")

Setting properties for compiled model

The only mode that supports ov::CompiledModel::set_property is Multi-Device execution:

auto compiled_model = core.compile_model(model, "MULTI",
    ov::device::priorities("CPU", "GPU"));
// change the order of priorities
compiled_model.set_property(ov::device::priorities("GPU", "CPU"));
config = {"MULTI_DEVICE_PRIORITIES": "CPU,GPU"}
compiled_model = core.compile_model(model, "MULTI", config)
# change the order of priorities
compiled_model.set_property({"MULTI_DEVICE_PRIORITIES": "GPU,CPU"})
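The priority string manipulated above is a simple comma-separated list, so reordering it is plain string work. The helper below is hypothetical, not an OpenVINO function:

```python
# Hypothetical helper (not an OpenVINO function): reverses a
# MULTI_DEVICE_PRIORITIES string such as "CPU,GPU" to change device order.
def reverse_priorities(priorities: str) -> str:
    return ",".join(reversed(priorities.split(",")))

print(reverse_priorities("CPU,GPU"))  # -> GPU,CPU
```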