Supported Devices

The OpenVINO runtime can run inference on models with various input and output formats. Here you can find the configurations supported by the OpenVINO devices, which are CPU, GPU, and GNA (Gaussian & Neural Accelerator coprocessor). Currently, Intel® processors of the 11th generation and later (up to the 13th generation at the moment) provide a further performance boost, especially with INT8 models.
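
Which of these devices are actually present on a given machine can be queried at runtime. Below is a minimal sketch using the OpenVINO Python API; FULL_DEVICE_NAME is a standard read-only device property.

```python
# A minimal sketch, assuming OpenVINO 2023.x with the Python package installed
# (pip install openvino). Lists the devices the runtime can see on this machine.

from openvino.runtime import Core

core = Core()

# available_devices returns identifiers such as ['CPU', 'GPU', 'GNA'].
for device in core.available_devices:
    # FULL_DEVICE_NAME is a standard read-only property with the full product name.
    print(device, "->", core.get_property(device, "FULL_DEVICE_NAME"))
```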

Note

With the OpenVINO™ 2023.0 release, support has been discontinued for all VPU accelerators based on Intel® Movidius™.

OpenVINO Device: Supported Hardware

  • CPU (x86): Intel® Xeon® with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector Extensions 512 (Intel® AVX-512), and Intel® Advanced Matrix Extensions (Intel® AMX); Intel® Core™ Processors with Intel® AVX2; Intel® Atom® Processors with Intel® Streaming SIMD Extensions (Intel® SSE)

  • CPU (Arm®): Raspberry Pi™ 4 Model B, Apple® Mac mini with M1 chip, NVIDIA® Jetson Nano™, Android™ devices

  • GPU: Intel® Processor Graphics, including Intel® HD Graphics and Intel® Iris® Graphics; Intel® Arc™ A-Series Graphics; Intel® Data Center GPU Flex Series; Intel® Data Center GPU Max Series

  • GNA (available in the Intel® Distribution of OpenVINO™ toolkit): Intel® Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel® Pentium® Silver J5005 Processor, Intel® Pentium® Silver N5000 Processor, Intel® Celeron® J4005 Processor, Intel® Celeron® J4105 Processor, Intel® Celeron® Processor N4100, Intel® Celeron® Processor N4000, Intel® Core™ i3-8121U Processor, Intel® Core™ i7-1065G7 Processor, Intel® Core™ i7-1060G7 Processor, Intel® Core™ i5-1035G4 Processor, Intel® Core™ i5-1035G7 Processor, Intel® Core™ i5-1035G1 Processor, Intel® Core™ i5-1030G7 Processor, Intel® Core™ i5-1030G4 Processor, Intel® Core™ i3-1005G1 Processor, Intel® Core™ i3-1000G1 Processor, Intel® Core™ i3-1000G4 Processor
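
To target one of the devices from the list above explicitly, pass its identifier when compiling a model. A minimal sketch follows; "model.xml" is a hypothetical path to a model in OpenVINO IR format, not part of this page.

```python
# A minimal sketch of targeting a specific device from the list above.
# "model.xml" is a hypothetical path to a model in OpenVINO IR format.

from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")

# Any identifier from the list works here if the device is present,
# e.g. "CPU", "GPU", or "GNA".
compiled_model = core.compile_model(model, device_name="CPU")
```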

Besides inference using a specific device, OpenVINO offers three inference modes for automated inference management (a usage sketch follows this list). These are:

  • Automatic Device Selection - automatically selects the best device available for the given task. It offers many additional options and optimizations, including inference on multiple devices at the same time.

  • Multi-device Inference - executes inference on multiple devices. Currently, this mode is considered a legacy solution. Using Automatic Device Selection is advised.

  • Heterogeneous Inference - automatically splits inference among several devices, for example, when one device does not support certain operations.
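
As a rough sketch of how these modes are selected in practice, each one maps to a device string passed when compiling the model; "model.xml" below is again a hypothetical IR path.

```python
# A minimal sketch of selecting the automated modes via the device string
# passed to compile_model. "model.xml" is a hypothetical IR path.

from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")

# Automatic Device Selection: the runtime picks the best available device.
compiled_auto = core.compile_model(model, device_name="AUTO")

# Multi-device Inference (legacy): run on several devices in parallel.
compiled_multi = core.compile_model(model, device_name="MULTI:CPU,GPU")

# Heterogeneous Inference: split the graph, falling back from GPU to CPU
# for operations the GPU does not support.
compiled_hetero = core.compile_model(model, device_name="HETERO:GPU,CPU")
```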

Devices similar to the ones used for benchmarking can be accessed via Intel® DevCloud for the Edge, a remote development environment with access to Intel® hardware and the latest versions of the Intel® Distribution of OpenVINO™ Toolkit. Learn more or register here.

To learn more about each of the supported devices and modes, refer to:

  • Inference Device Support
  • Inference Modes

To set the relevant configuration, refer to the Integrate with Customer Application topic (step 3, "Configure input and output").