Supported Devices

OpenVINO enables you to add inference capabilities to your own software, running on a variety of hardware. It currently supports the following processing units (for more details, see the system requirements):
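The processing units OpenVINO can actually use on a given machine can be listed at runtime. The sketch below shows the query (it assumes the `openvino` Python package is installed; the reported devices depend on your hardware and drivers), along with a small illustrative helper — not part of the OpenVINO API — for stripping the index that systems with several devices of one type append, e.g. "GPU.0".

```python
# Sketch: enumerating the inference devices OpenVINO can see on a machine.
# With the openvino package installed, the real query is:
#
#   from openvino.runtime import Core
#   print(Core().available_devices)   # e.g. ['CPU', 'GPU.0', 'GPU.1']
#
# Output depends entirely on the installed hardware and drivers.

def device_type(name):
    """Return the base device type of an available_devices entry.

    Illustrative helper (an assumption, not an OpenVINO API): entries for
    multiple devices of one type carry an index suffix, e.g. "GPU.1".
    """
    return name.split(".", 1)[0]

print(device_type("GPU.1"))  # GPU
print(device_type("CPU"))    # CPU
```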

Note

GNA, currently available in the Intel® Distribution of OpenVINO™ toolkit, will be deprecated along with the hardware itself, which is being discontinued in future CPU generations.

With the OpenVINO™ 2023.0 release, support has been cancelled for:

  • Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X

  • Intel® Vision Accelerator Design with Intel® Movidius™

To keep using the MYRIAD and HDDL plugins with your hardware, revert to the OpenVINO 2022.3 LTS release.

Besides running inference on a specific device, OpenVINO offers automated inference management through the following inference modes:

  • Automatic Device Selection - automatically selects the best device available for the given task. It offers many additional options and optimizations, including inference on multiple devices at the same time.

  • Multi-device Inference - executes inference on multiple devices. Currently, this mode is considered a legacy solution. Using Automatic Device Selection is advised.

  • Heterogeneous Inference - enables splitting inference among several devices automatically, for example, if one device doesn’t support certain operations.
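Each of the modes above is selected through the device string passed to `Core.compile_model`. The sketch below builds those strings with a small illustrative helper (the helper and the `GPU`/`CPU` device names are assumptions for the example, not part of the OpenVINO API; the commented-out lines show typical use with a real model file).

```python
# Sketch: how the inference modes map to OpenVINO device strings.

def mode_device_string(mode, devices=None):
    """Build the device argument for core.compile_model().

    Illustrative helper: "AUTO" selects Automatic Device Selection,
    while MULTI/HETERO take a priority-ordered device list,
    e.g. "MULTI:GPU,CPU" or "HETERO:GPU,CPU".
    """
    if mode == "AUTO":
        return "AUTO"  # the runtime picks the best available device itself
    if mode in ("MULTI", "HETERO"):
        return "{}:{}".format(mode, ",".join(devices))
    return mode  # a plain device name such as "CPU"

# Typical use (requires the openvino package and a model file):
#   from openvino.runtime import Core
#   core = Core()
#   model = core.read_model("model.xml")              # illustrative path
#   compiled = core.compile_model(model, mode_device_string("AUTO"))

print(mode_device_string("HETERO", ["GPU", "CPU"]))  # HETERO:GPU,CPU
```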

Devices similar to the ones used for benchmarking can be accessed through Intel® DevCloud for the Edge, a remote development environment with access to Intel® hardware and the latest versions of the Intel® Distribution of OpenVINO™ Toolkit. Learn more or register on the DevCloud site.

To learn more about each of the supported devices and modes, refer to the following sections:

  • Inference Device Support

  • Inference Modes

To set up a relevant configuration, refer to the Integrate with Customer Application topic (step 3, “Configure input and output”).