Introduction to Inference Engine Device Query API¶
The Inference Engine Query API (C++)¶
The OpenVINO™ toolkit supports inferencing with several types of devices (processors or accelerators). This page provides a high-level description of the process of querying device properties and configuration values at runtime. Refer to the Hello Query Device Sample sources and Multi-Device Plugin guide for examples of using the Inference Engine Query API in user applications.
Using the Inference Engine Query API in Your Code¶
The InferenceEngine::Core class provides the following API to query device information and to set or get device configuration properties:
InferenceEngine::Core::GetAvailableDevices - Provides a list of available devices. If there is more than one instance of a specific device, the devices are enumerated with a .suffix, where suffix is a unique string identifier. The device name can be passed to all methods of the InferenceEngine::Core class that work with devices, for example, InferenceEngine::Core::SetConfig.
InferenceEngine::Core::SetConfig - Sets a new value for the configuration key.
InferenceEngine::Core::GetConfig - Gets the current value of a configuration key.
InferenceEngine::Core::GetMetric - Gets a general or device-specific metric.
The InferenceEngine::ExecutableNetwork class is also extended to support the Query API:
InferenceEngine::ExecutableNetwork::GetMetric
InferenceEngine::ExecutableNetwork::GetConfig
Query API in the Core Class¶
InferenceEngine::Core core;
std::vector<std::string> availableDevices = core.GetAvailableDevices();
The function returns a list of available devices, for example:
MYRIAD.1.2-ma2480
MYRIAD.1.4-ma2480
FPGA.0
FPGA.1
CPU
GPU.0
GPU.1
...
Each device name can then be passed to the InferenceEngine::Core methods that accept a device name, such as InferenceEngine::Core::LoadNetwork, InferenceEngine::Core::GetMetric, InferenceEngine::Core::GetConfig, and InferenceEngine::Core::SetConfig.
The code below demonstrates how to check whether the HETERO device dumps GraphViz .dot files with split graphs during the split stage:
InferenceEngine::Core core;
bool dumpDotFile = core.GetConfig("HETERO", HETERO_CONFIG_KEY(DUMP_GRAPH_DOT)).as<bool>();
For documentation about common configuration keys, refer to ie_plugin_config.hpp. Device-specific configuration keys can be found in the corresponding plugin folders.
To extract device properties such as available devices, device name, supported configuration keys, and others, use the InferenceEngine::Core::GetMetric method:
InferenceEngine::Core core;
std::string cpuDeviceName = core.GetMetric("CPU", METRIC_KEY(FULL_DEVICE_NAME)).as<std::string>();
A returned value appears as follows:
Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz.
All metrics have a type, which is specified during metric instantiation. The list of common device-agnostic metrics can be found in ie_plugin_config.hpp. Device-specific metrics (for example, for HDDL or MYRIAD devices) can be found in the corresponding plugin folders.
Query API in the ExecutableNetwork Class¶
The InferenceEngine::ExecutableNetwork::GetMetric method is used to get an executable network specific metric, such as METRIC_KEY(OPTIMAL_NUMBER_OF_INFER_REQUESTS):
InferenceEngine::Core core;
auto network = core.ReadNetwork("sample.xml");
auto exeNetwork = core.LoadNetwork(network, "CPU");
auto nireq = exeNetwork.GetMetric(METRIC_KEY(OPTIMAL_NUMBER_OF_INFER_REQUESTS)).as<unsigned int>();
Or the current temperature of the MYRIAD device:
InferenceEngine::Core core;
auto network = core.ReadNetwork("sample.xml");
auto exeNetwork = core.LoadNetwork(network, "MYRIAD");
float temperature = exeNetwork.GetMetric(METRIC_KEY(DEVICE_THERMAL)).as<float>();
The InferenceEngine::ExecutableNetwork::GetConfig method is used to get information about the configuration values the executable network has been created with:
InferenceEngine::Core core;
auto network = core.ReadNetwork("sample.xml");
auto exeNetwork = core.LoadNetwork(network, "CPU");
auto ncores = exeNetwork.GetConfig(PluginConfigParams::KEY_CPU_THREADS_NUM).as<std::string>();