# Supported Devices
The OpenVINO™ runtime enables you to use the following devices to run your deep learning models: CPU, GPU, NPU.
Besides running inference on a specific device, OpenVINO offers the option of running automated inference with the following inference modes:

- Automatic Device Selection (AUTO)
- Heterogeneous Execution (HETERO)
- Automatic Batching (BATCH)
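A minimal sketch of selecting a device or an inference mode by name with the OpenVINO Python API; `model.xml` is a placeholder path standing in for your own model file:

```python
import openvino as ov

core = ov.Core()

# Devices detected on this machine, e.g. ['CPU', 'GPU', 'NPU'].
print(core.available_devices)

# Compile for one specific device...
compiled_cpu = core.compile_model("model.xml", "CPU")

# ...or hand the choice to an inference mode such as AUTO.
compiled_auto = core.compile_model("model.xml", "AUTO")
```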
## Feature Support and API Coverage
| Supported Feature | CPU | GPU | NPU |
|---|---|---|---|
| Automatic Device Selection | Yes | Yes | Partial |
| Heterogeneous execution | Yes | Yes | No |
| Automatic batching | No | Yes | No |
| Multi-stream execution | Yes | Yes | No |
| Model caching | Yes | Partial | Yes |
| Dynamic shapes | Yes | Partial | No |
| Import/Export | Yes | Yes | Yes |
| Preprocessing acceleration | Yes | Yes | No |
| Stateful models | Yes | Yes | Yes |
| Extensibility | Yes | Yes | No |
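Several of these capabilities can be probed at runtime through device properties. A short sketch, assuming only the standard read-only properties `FULL_DEVICE_NAME` and `SUPPORTED_PROPERTIES`:

```python
import openvino as ov

core = ov.Core()
for device in core.available_devices:
    # FULL_DEVICE_NAME is a standard read-only property on every plugin.
    print(device, "->", core.get_property(device, "FULL_DEVICE_NAME"))
    # SUPPORTED_PROPERTIES lists what else can be queried or configured.
    print("  ", core.get_property(device, "SUPPORTED_PROPERTIES"))
```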
| API Coverage | plugin | infer_request | compiled_model |
|---|---|---|---|
| CPU | 98.31 % | 100.0 % | 90.7 % |
| CPU_ARM | 80.0 % | 100.0 % | 89.74 % |
| GPU | 91.53 % | 100.0 % | 100.0 % |
| dGPU | 89.83 % | 100.0 % | 100.0 % |
| NPU | 18.64 % | 0.0 % | 9.3 % |
| AUTO | 93.88 % | 100.0 % | 100.0 % |
| BATCH | 86.05 % | 100.0 % | 86.05 % |
| HETERO | 61.22 % | 99.24 % | 86.05 % |

Percentage of API supported by the device, as of OpenVINO 2024.5, 20 Nov. 2024.
For setting up a relevant configuration, refer to the Integrate with Customer Application topic (step 3 “Configure input and output”).
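As a hedged illustration of that configuration step, the sketch below inspects a compiled model's inputs and outputs and runs one synchronous inference; `model.xml` is a placeholder path, the zero-filled tensor stands in for real input data, and static input shapes are assumed:

```python
import numpy as np
import openvino as ov

core = ov.Core()
compiled = core.compile_model("model.xml", "CPU")

# Inspect the compiled model's inputs and outputs before wiring up data.
for inp in compiled.inputs:
    print("input:", inp.any_name, inp.shape)
for out in compiled.outputs:
    print("output:", out.any_name, out.shape)

# Run one synchronous inference with dummy data matching the first input.
request = compiled.create_infer_request()
data = np.zeros(compiled.inputs[0].shape, dtype=np.float32)
results = request.infer({compiled.inputs[0]: data})
```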
**Device support across OpenVINO 2024.6 distributions**

| Device | Archives | PyPI | APT/YUM/ZYPPER | Conda | Homebrew | vcpkg | Conan | npm |
|---|---|---|---|---|---|---|---|---|
| CPU | V | V | V | V | V | V | V | V |
| GPU | V | V | V | V | V | V | V | V |
| NPU | V* | V* | V* | n/a | n/a | n/a | n/a | V* |
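Whichever distribution channel you install from, you can verify the runtime version and the devices it detects with the public Python API:

```python
import openvino as ov

# Confirm which OpenVINO build is installed, e.g. a 2024.6 version string.
print(ov.get_version())

# Confirm which of CPU / GPU / NPU are visible to the runtime.
print(ov.Core().available_devices)
```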
> **Note**
>
> With the OpenVINO 2024.0 release, support for GNA has been discontinued. To keep using it in your solutions, revert to the 2023.3 (LTS) version.
>
> With the OpenVINO™ 2023.0 release, support has been discontinued for:
>
> - Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X
> - Intel® Vision Accelerator Design with Intel® Movidius™
>
> To keep using the MYRIAD and HDDL plugins with your hardware, revert to the OpenVINO 2022.3 (LTS) version.