Inference Device Support

OpenVINO™ Runtime can infer deep learning models using the following device types:

- CPU
- GPU
- GNA
- Arm® CPU

For a more detailed list of hardware, see Supported Devices.

Devices similar to the ones used for benchmarking can be accessed using Intel® DevCloud for the Edge, a remote development environment with access to Intel® hardware and the latest versions of the Intel® Distribution of the OpenVINO™ Toolkit. Learn more or register here.

Feature Support Matrix

The table below shows which key features each OpenVINO device plugin supports.

Capability                 | CPU | GPU     | GNA     | Arm® CPU
---------------------------|-----|---------|---------|---------
Heterogeneous execution    | Yes | Yes     | No      | Yes
Multi-device execution     | Yes | Yes     | Partial | Yes
Automatic batching         | No  | Yes     | No      | No
Multi-stream execution     | Yes | Yes     | No      | Yes
Models caching             | Yes | Partial | Yes     | No
Dynamic shapes             | Yes | Partial | No      | No
Import/Export              | Yes | No      | Yes     | No
Preprocessing acceleration | Yes | Yes     | No      | Partial
Stateful models            | Yes | No      | Yes     | No
Extensibility              | Yes | Yes     | No      | No

For more details on plugin-specific feature limitations, see the corresponding plugin pages.

Enumerating Available Devices

The OpenVINO Runtime API provides dedicated methods for enumerating devices and their capabilities. See the Hello Query Device C++ Sample. Below is example output from the sample (truncated to device names only):

./hello_query_device
Available devices:
    Device: CPU
...
    Device: GPU.0
...
    Device: GPU.1
...
    Device: GNA

A simple programmatic way to enumerate the devices and use them with multi-device execution is as follows:

ov::Core core;
std::shared_ptr<ov::Model> model = core.read_model("sample.xml");
// Query every device visible to this OpenVINO installation.
std::vector<std::string> availableDevices = core.get_available_devices();
// Build a comma-separated priority list, e.g. "CPU,GPU.0,GPU.1".
std::string all_devices;
for (auto&& device : availableDevices) {
    if (!all_devices.empty())
        all_devices += ",";
    all_devices += device;
}
// Compile the model once for all devices under the MULTI virtual device.
ov::CompiledModel compileModel = core.compile_model(model, "MULTI",
    ov::device::priorities(all_devices));

Beyond the typical “CPU”, “GPU”, and so on, when multiple instances of a device are available, the names are further qualified. For example, this is how two Intel® Movidius™ Myriad™ X sticks are listed by the hello_query_device sample:

...
    Device: MYRIAD.1.2-ma2480
...
    Device: MYRIAD.1.4-ma2480

So, the explicit configuration to use both would be “MULTI:MYRIAD.1.2-ma2480,MYRIAD.1.4-ma2480”. Accordingly, the code that loops over only the available devices of the “MYRIAD” type is as follows:

ov::Core core;
// List only devices of the MYRIAD type; the returned IDs lack the "MYRIAD." prefix.
std::vector<std::string> myriadDevices = core.get_property("MYRIAD", ov::available_devices);
// Re-qualify each ID and build the comma-separated priority list.
std::string all_devices;
for (size_t i = 0; i < myriadDevices.size(); ++i) {
    if (i > 0)
        all_devices += ",";
    all_devices += "MYRIAD." + myriadDevices[i];
}
ov::CompiledModel compileModel = core.compile_model("sample.xml", "MULTI",
    ov::device::priorities(all_devices));