Inference Device Support¶
OpenVINO™ Runtime can infer deep learning models using the following device types:
- CPU
- GPU
- NPU
- GNA
For a more detailed list of hardware, see Supported Devices.
Feature Support Matrix¶
The table below demonstrates support of key features by OpenVINO device plugins.
Capability | CPU | GPU | NPU | GNA
---|---|---|---|---
Heterogeneous execution | Yes | Yes | Partial | No
Multi-device execution | Yes | Yes | Yes | Partial
Automatic batching | No | Yes | No | No
Multi-stream execution | Yes (Intel® x86-64 only) | Yes | Yes | No
Models caching | Yes | Partial | Yes | Yes
Dynamic shapes | Yes | Partial | No | No
Import/Export | Yes | No | No* | Yes
Preprocessing acceleration | Yes | Yes | Partial | No
Stateful models | Yes | No | Yes | Yes
Extensibility | Yes | Yes | Partial | No
For more details on plugin-specific feature limitations, see the corresponding plugin pages.
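Feature support can also be checked at runtime: each device advertises its capabilities through the read-only ov::device::capabilities property. Below is a minimal sketch of such a query, using "CPU" purely as an example device name:

#include <openvino/openvino.hpp>
#include <iostream>

int main() {
    ov::Core core;
    // Ask the plugin which capability strings it advertises,
    // e.g. "FP32", "INT8", "EXPORT_IMPORT".
    std::vector<std::string> capabilities =
        core.get_property("CPU", ov::device::capabilities);
    for (const std::string& capability : capabilities) {
        std::cout << capability << std::endl;
    }
    return 0;
}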
Enumerating Available Devices¶
The OpenVINO Runtime API features dedicated methods for enumerating devices and their capabilities. See the Hello Query Device C++ Sample. This is example output from the sample (truncated to device names only):
./hello_query_device
Available devices:
Device: CPU
...
Device: GPU.0
...
Device: GPU.1
...
Device: GNA
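The same enumeration can be done directly from application code. The following is a minimal, self-contained sketch; ov::device::full_name is a read-only property that returns a human-readable name for each device:

#include <openvino/openvino.hpp>
#include <iostream>

int main() {
    ov::Core core;
    // List every device visible to this OpenVINO installation.
    for (const std::string& device : core.get_available_devices()) {
        std::cout << "Device: " << device << std::endl;
        std::cout << "  Full name: "
                  << core.get_property(device, ov::device::full_name) << std::endl;
    }
    return 0;
}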
A simple programmatic way to enumerate the devices and use them with multi-device execution is as follows:
#include <openvino/openvino.hpp>

ov::Core core;
std::shared_ptr<ov::Model> model = core.read_model("sample.xml");
std::vector<std::string> availableDevices = core.get_available_devices();
std::string all_devices;
// Join the device names into a comma-separated priority list, e.g. "CPU,GPU".
for (auto&& device : availableDevices) {
    all_devices += device;
    all_devices += ((device == availableDevices[availableDevices.size() - 1]) ? "" : ",");
}
ov::CompiledModel compileModel = core.compile_model(model, "MULTI",
                                                    ov::device::priorities(all_devices));
Beyond the typical “CPU”, “GPU”, and so on, when multiple instances of a device are available, the names are more qualified. For example, this is how two GPUs can be listed (iGPU is always GPU.0):
...
Device: GPU.0
...
Device: GPU.1
So, the explicit configuration to use both would be “MULTI:GPU.1,GPU.0”. Accordingly, the code that loops over all available devices of the “GPU” type only is as follows:
ov::Core core;
// Querying ov::available_devices on "GPU" returns the device IDs only ("0", "1", ...),
// so the "GPU." prefix is prepended when building the priority list.
std::vector<std::string> GPUDevices = core.get_property("GPU", ov::available_devices);
std::string all_devices;
for (size_t i = 0; i < GPUDevices.size(); ++i) {
    all_devices += std::string("GPU.")
                   + GPUDevices[i]
                   + std::string(i < (GPUDevices.size() - 1) ? "," : "");
}
ov::CompiledModel compileModel = core.compile_model("sample.xml", "MULTI",
                                                    ov::device::priorities(all_devices));
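When the set of devices is known in advance, the priority list can also be embedded directly in the device string, as the "MULTI:GPU.1,GPU.0" configuration above suggests. A one-line sketch, reusing the core object from the previous snippet and assuming both GPUs are present:

// Equivalent explicit form: priorities encoded in the device name itself.
ov::CompiledModel compiledModel = core.compile_model("sample.xml", "MULTI:GPU.1,GPU.0");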