Arm® CPU Device

Introducing the Arm® CPU Plugin

The Arm® CPU plugin enables inference of deep neural networks on Arm® CPUs, using the Arm® Compute Library as a backend.

Note

This is a community-level add-on to OpenVINO™. Intel® welcomes community participation in the OpenVINO™ ecosystem; technical questions and code contributions are handled on community forums. However, this component has not undergone full release validation or qualification from Intel®, so no official support is offered.

The set of supported layers and their limitations is defined on the Op-set specification page.

Supported Inference Data Types

The Arm® CPU plugin supports the following data types as inference precision of internal primitives:

  • Floating-point data types:

    • f32

    • f16

  • Quantized data types:

    • i8 (support is experimental)

The Hello Query Device C++ Sample can be used to print out the supported data types for all detected devices.
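
The same information can also be retrieved programmatically. Below is a minimal sketch using the OpenVINO™ 2.0 C++ API; it assumes the plugin is registered under the device name "CPU":

    #include <iostream>
    #include <openvino/openvino.hpp>

    int main() {
        ov::Core core;
        // ov::device::capabilities is a read-only property listing the
        // optimization capabilities (e.g. FP32, FP16) the device reports.
        auto capabilities = core.get_property("CPU", ov::device::capabilities);
        for (const auto& capability : capabilities)
            std::cout << capability << std::endl;
        return 0;
    }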

Supported Features

Preprocessing Acceleration

The Arm® CPU plugin supports the following accelerated preprocessing operations (a sketch using the preprocessing API follows the list):

  • Precision conversion:

    • u8 -> u16, s16, s32

    • u16 -> u8, u32

    • s16 -> u8, s32

    • f16 -> f32

  • Transposition of tensors with fewer than 5 dimensions

  • Interpolation of 4D tensors with no padding (pads_begin and pads_end equal 0).
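
As a minimal sketch of an accelerated precision conversion, the snippet below declares an f16 input tensor and converts it to f32 through the preprocessing API. The model path, a single-input model, and the device name "CPU" are assumptions:

    #include <openvino/core/preprocess/pre_post_process.hpp>
    #include <openvino/openvino.hpp>

    int main() {
        ov::Core core;
        auto model = core.read_model("model.xml");  // hypothetical model path

        // Declare that the (single) input tensor arrives as f16 and ask the
        // plugin to convert it to f32, an accelerated precision conversion.
        ov::preprocess::PrePostProcessor ppp(model);
        ppp.input().tensor().set_element_type(ov::element::f16);
        ppp.input().preprocess().convert_element_type(ov::element::f32);
        model = ppp.build();

        auto compiled_model = core.compile_model(model, "CPU");
        return 0;
    }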

The Arm® CPU plugin also supports the following preprocessing operations; however, they are not accelerated:

  • Precision conversion that is not mentioned above

  • Color conversion:

    • NV12 to RGB

    • NV12 to BGR

    • I420 to RGB

    • I420 to BGR

For more details, see the preprocessing API guide.
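
For instance, an NV12 to BGR conversion can be requested as follows. This is a sketch that reuses the model from the previous snippet and assumes a two-plane NV12 source:

    // Assumes `model` was read with ov::Core::read_model() as above.
    ov::preprocess::PrePostProcessor ppp(model);
    ppp.input().tensor().set_color_format(ov::preprocess::ColorFormat::NV12_TWO_PLANES);
    // Convert the NV12 input to the BGR layout the model expects
    // (supported, but not accelerated, on the Arm® CPU plugin).
    ppp.input().preprocess().convert_color(ov::preprocess::ColorFormat::BGR);
    model = ppp.build();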

Supported Properties

The plugin supports the properties listed below.

Read-write Properties

To take effect, all parameters must be set before calling ov::Core::compile_model() or passed as an additional argument to ov::Core::compile_model().
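
For example, a read-write property such as ov::enable_profiling can be applied either way. This is a minimal sketch, assuming the property is supported by the plugin:

    ov::Core core;
    // Option 1: set the property on the device before compilation.
    core.set_property("CPU", ov::enable_profiling(true));
    auto compiled_model = core.compile_model(model, "CPU");

    // Option 2: pass the property directly to compile_model().
    auto compiled_model2 = core.compile_model(model, "CPU", ov::enable_profiling(true));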

Read-only Properties
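
Read-only properties can be queried but not set. As a sketch, assuming ov::device::full_name is among the properties the plugin reports:

    ov::Core core;
    // Read-only properties are retrieved with ov::Core::get_property();
    // attempting to set them results in an error.
    auto device_name = core.get_property("CPU", ov::device::full_name);
    std::cout << device_name << std::endl;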