Supported Devices
The OpenVINO™ runtime enables you to use the following devices to run your deep learning models: CPU, GPU, and NPU.

Besides running inference on a specific device, OpenVINO offers automated inference with the following inference modes:

- Automatic Device Selection (AUTO)
- Heterogeneous Execution (HETERO)
- Automatic Batching (BATCH)
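The choice between an explicit device and an inference mode is expressed in the device string passed to `ov.Core().compile_model()`. The helper below is a hypothetical illustration (its name and logic are ours), but the string formats it produces (e.g. `"CPU"`, `"AUTO:GPU,CPU"`, `"HETERO:GPU,CPU"`) are the ones the OpenVINO runtime accepts:

```python
from typing import Optional

def device_string(mode: Optional[str], *devices: str) -> str:
    """Build the device argument for ov.Core().compile_model().

    Hypothetical helper: mode=None selects a single explicit device;
    otherwise mode is one of the AUTO, HETERO, or BATCH meta-devices,
    optionally followed by a device priority list.
    """
    if mode is None:
        if len(devices) != 1:
            raise ValueError("exactly one explicit device expected")
        return devices[0]
    if mode not in {"AUTO", "HETERO", "BATCH"}:
        raise ValueError(f"unknown inference mode: {mode}")
    # Meta-device alone ("AUTO") or with priorities ("AUTO:GPU,CPU")
    return f"{mode}:{','.join(devices)}" if devices else mode
```

For example, `device_string(None, "NPU")` returns `"NPU"`, and `device_string("AUTO", "GPU", "CPU")` returns `"AUTO:GPU,CPU"`, which AUTO interprets as a device priority list.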
Feature Support and API Coverage
| Supported Feature | CPU | GPU | NPU |
|---|---|---|---|
| Automatic Device Selection | Yes | Yes | Partial |
| Heterogeneous execution | Yes | Yes | No |
| Automatic batching | No | Yes | No |
| Multi-stream execution | Yes | Yes | No |
| Model caching | Yes | Partial | Yes |
| Dynamic shapes | Yes | Partial | No |
| Import/Export | Yes | Yes | Yes |
| Preprocessing acceleration | Yes | Yes | No |
| Stateful models | Yes | Yes | Yes |
| Extensibility | Yes | Yes | No |
| Multi-device execution | Yes | Yes | Partial |
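The three-valued entries above (Yes / Partial / No) suggest a simple fallback rule: prefer a device with full support for a required feature, and fall back to partial support only when nothing better is listed. The sketch below is illustrative only; the helper and the feature key in the example are our own names, not OpenVINO API:

```python
from enum import Enum
from typing import Mapping, Optional, Sequence, Tuple

class Support(Enum):
    """The three support levels used in the feature table."""
    YES = "Yes"
    PARTIAL = "Partial"
    NO = "No"

def first_supported(
    priority: Sequence[str],
    feature: str,
    matrix: Mapping[Tuple[str, str], Support],
) -> Optional[str]:
    """Return the first device in priority order with full support for
    `feature`; fall back to the first partially supporting device."""
    fallback = None
    for device in priority:
        level = matrix.get((device, feature), Support.NO)
        if level is Support.YES:
            return device
        if level is Support.PARTIAL and fallback is None:
            fallback = device
    return fallback

# Example matrix with a single illustrative feature key:
matrix = {
    ("CPU", "dynamic_shapes"): Support.YES,
    ("GPU", "dynamic_shapes"): Support.PARTIAL,
    ("NPU", "dynamic_shapes"): Support.NO,
}
```

Here `first_supported(["GPU", "CPU"], "dynamic_shapes", matrix)` returns `"CPU"`, since CPU fully supports the feature while GPU supports it only partially.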
| API Coverage: | plugin | infer_request | compiled_model |
|---|---|---|---|
| CPU | 98.31 % | 100.0 % | 90.7 % |
| CPU_ARM | 80.0 % | 100.0 % | 89.74 % |
| GPU | 91.53 % | 100.0 % | 100.0 % |
| dGPU | 89.83 % | 100.0 % | 100.0 % |
| NPU | 18.64 % | 0.0 % | 9.3 % |
| AUTO | 93.88 % | 100.0 % | 100.0 % |
| BATCH | 86.05 % | 100.0 % | 86.05 % |
| HETERO | 61.22 % | 99.24 % | 86.05 % |

Percentage of the API supported by the device, as of OpenVINO 2023.3, 08 Jan, 2024.
To set up a relevant configuration, refer to the Integrate with Customer Application topic (step 3, “Configure input and output”).
Note
With the OpenVINO 2024.0 release, support for GNA has been discontinued. To keep using it in your solutions, revert to the 2023.3 (LTS) version.
With the OpenVINO™ 2023.0 release, support has been discontinued for:

- Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X
- Intel® Vision Accelerator Design with Intel® Movidius™
To keep using the MYRIAD and HDDL plugins with your hardware, revert to the OpenVINO 2022.3 (LTS) version.