Working with devices
The OpenVINO Runtime provides capabilities to infer deep learning models on the following device types with corresponding plugins: CPU, GPU, GNA, and Arm® CPU.
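As a minimal sketch of the device plugin API (assuming the OpenVINO Python bindings from a 2022.1-or-later release; the model path model.xml is a placeholder), available plugins can be enumerated and targeted by name:

```python
from openvino.runtime import Core

core = Core()

# List the device plugins discovered on this machine, e.g. ['CPU', 'GPU'].
print(core.available_devices)

# Read an IR model and compile it for a specific device plugin.
# "model.xml" is a placeholder path, not a file shipped with OpenVINO.
model = core.read_model("model.xml")
compiled_model = core.compile_model(model, "CPU")
```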
The OpenVINO Runtime also offers several execution capabilities that work on top of other devices:
| Capability | Description |
|---|---|
| Multi-Device execution | Enables simultaneous inference of the same model on several devices in parallel. |
| Auto-Device selection | Enables automatic selection of an Intel device for inference. |
| Heterogeneous execution | Enables automatic splitting of inference between several devices (for example, when a device does not support certain operations). |
| Automatic Batching | Enables batching (on top of the specified device) that is completely transparent to the application. |
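Each of these capabilities is selected through the device string passed when compiling a model. The following is a hedged sketch, assuming the Python bindings and that the listed physical devices (GPU, CPU) are actually present; model.xml is again a placeholder:

```python
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # placeholder IR model path

# Multi-Device: infer the same model on GPU and CPU in parallel.
multi = core.compile_model(model, "MULTI:GPU,CPU")

# Auto-Device selection: let OpenVINO pick a suitable device.
auto = core.compile_model(model, "AUTO")

# Heterogeneous execution: prefer GPU, fall back to CPU for
# operations the GPU plugin does not support.
hetero = core.compile_model(model, "HETERO:GPU,CPU")

# Automatic Batching: transparent batching on top of the GPU plugin.
batched = core.compile_model(model, "BATCH:GPU")
```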
Devices similar to the ones used for benchmarking can be accessed via Intel® DevCloud for the Edge, a remote development environment with access to Intel® hardware and the latest versions of the Intel® Distribution of the OpenVINO™ Toolkit.
Feature Support Matrix
The table below demonstrates support of key features by OpenVINO device plugins.
| Capability | CPU | GPU | GNA | Arm® CPU |
|---|---|---|---|---|
| Heterogeneous execution | Yes | Yes | No | Yes |
| Multi-device execution | Yes | Yes | Partial | Yes |
| Automatic batching | No | Yes | No | No |
| Multi-stream execution | Yes | Yes | No | Yes |
| Models caching | Yes | Partial | Yes | No |
| Dynamic shapes | Yes | Partial | No | No |
| Import/Export | Yes | No | Yes | No |
| Preprocessing acceleration | Yes | Yes | No | Partial |
| Stateful models | Yes | No | Yes | No |
| Extensibility | Yes | Yes | No | No |
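Feature support can also be probed at runtime by querying device properties. Below is a sketch under the assumption that the string property names "FULL_DEVICE_NAME" and "OPTIMIZATION_CAPABILITIES" are reported by the installed plugins; the exact set of properties varies per device:

```python
from openvino.runtime import Core

core = Core()
for device in core.available_devices:
    # Human-readable device name, e.g. "Intel(R) Core(TM) i7-...".
    print(device, core.get_property(device, "FULL_DEVICE_NAME"))
    # Coarse capability list reported by the plugin, e.g.
    # ['FP32', 'INT8', 'EXPORT_IMPORT']; entries differ per device.
    print(core.get_property(device, "OPTIMIZATION_CAPABILITIES"))
```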
For more details on plugin-specific feature limitations, see the corresponding plugin pages.