Low-Precision 8-bit Integer Inference

Disclaimer

Low-precision 8-bit integer inference in the Inference Engine is a preview feature and requires the following prerequisites to be satisfied:

The 8-bit inference feature was validated on the following topologies:

Introduction

A lot of investigation has been done in the field of deep learning into using low-precision computations during inference to accelerate deep learning pipelines and achieve higher performance. For example, one popular approach is to shrink the precision of activation and weight values from fp32 to a smaller format, for example, fp11 or int8. For more information about this approach, refer to the Brief History of Lower Precision in Deep Learning section in this whitepaper.

8-bit computations (referred to as int8) offer better performance compared to inference in higher precision (for example, fp32), because they allow loading more data into a single processor instruction. Usually, the cost of this significant performance boost is reduced accuracy. However, it has been proven that the accuracy drop can be negligible and depends on task requirements, so the application engineer can set the maximum accuracy drop that is acceptable.

The current Inference Engine solution for low-precision inference uses Intel MKL-DNN, which supports inference of the following layers in 8-bit integer computation mode:

This means that 8-bit inference can only be performed with the CPU plugin on the layers listed above. All other layers are executed in the format supported by the CPU plugin: 32-bit floating point format (fp32).

Low-Precision 8-bit Integer Inference Workflow

For 8-bit integer computations, the original model (or its Intermediate Representation) must be in the fp32 format. To perform calculations of a layer in the int8 format, the input data (input blob) and weights of the given layer (as well as biases and/or other blobs of the layer) must be quantized, that is, transitioned from the fp32 to the int8 format. The quantization process converts model inputs into a lower-precision format. The precision and accuracy are controlled by the scale factor and the rounding mode, respectively. Read more about the mathematical computations under the hood in the white paper.
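
As a rough illustration of what quantization with a scale factor means, the following minimal sketch converts an fp32 tensor to int8 using a single symmetric scale. The function names and the way the scale is derived from the maximum absolute value observed during calibration are assumptions made for this example, not the exact algorithm used by the plugin.

    import numpy as np

    def quantize_symmetric(x_fp32, max_abs):
        """Quantize an fp32 tensor to int8 using one symmetric scale factor.

        max_abs stands for the largest absolute activation value observed
        during calibration (an assumption made for this sketch)."""
        scale = 127.0 / max_abs                      # map [-max_abs, max_abs] to [-127, 127]
        q = np.round(x_fp32 * scale)                 # round to nearest (ties to even, NumPy default)
        q = np.clip(q, -127, 127).astype(np.int8)    # saturate to the int8 range
        return q, scale

    def dequantize(q_int8, scale):
        """Recover an approximate fp32 tensor from its int8 representation."""
        return q_int8.astype(np.float32) / scale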

The 8-bit inference pipeline includes two stages (also refer to the figure below):

  1. Offline stage, or model calibration. During this stage, scale factors and execution profiles are defined for each layer so that the accuracy drop for 8-bit integer inference stays within the specified threshold. The output of this stage is a calibrated model.
  2. Run-time stage. This stage is an internal procedure of the CPU Plugin. During this stage, the calibrated model is loaded to the plugin. For each layer that has the corresponding execution profile, the plugin normalizes the weights (and biases, if present). It also adds scale factors at particular places of the model, defined by an internal algorithm that aims for maximum performance and a minimum number of extra layout manipulations.
Figure: 8-bit integer inference flow in the CPU plugin (cpu_int8_flow.png)

Offline Stage: Model Calibration

One of the vital components of successful data quantization is a set of scale factors for each layer that supports 8-bit computations. These scales are obtained from the statistics of layer activations collected by the Calibration Tool on a calibration dataset. The calibration dataset contains images and can be a subset of the validation set: a small fraction of the validation images (1-5%) is enough to create a calibration dataset. For more information on the dataset preparation, refer to the Validation Application.
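
To give an idea of what the collected statistics look like, the sketch below accumulates per-layer minimum and maximum activation values over the calibration images. It is a simplified assumption of this bookkeeping, not the Calibration Tool implementation.

    import numpy as np

    class ActivationStats:
        """Accumulate per-layer minimum and maximum activation values."""

        def __init__(self):
            self.stats = {}  # layer name -> (min, max)

        def update(self, layer_name, activation):
            """Fold the activations produced for one calibration image into the statistics."""
            a_min, a_max = float(np.min(activation)), float(np.max(activation))
            cur_min, cur_max = self.stats.get(layer_name, (a_min, a_max))
            self.stats[layer_name] = (min(cur_min, a_min), max(cur_max, a_max))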

To calibrate a model, the Calibration Tool performs the following steps:

  1. Collecting layer statistics (minimum and maximum values of layer activations) and the baseline accuracy metric for fp32 inference. Note that the accuracy metric depends on the type of the calibrated model: for classification networks, the top-1 metric is used; for object detection models, the mAP metric is used.
  2. Collecting the accuracy metric for 8-bit inference. During this step, different filters are applied to the collected activation statistics to remove activation outliers (isolated values that are very different from the majority of known values). If the resulting accuracy satisfies the required level with respect to the accepted accuracy drop delta, the Calibration Tool stops the calibration process.
  3. Collecting accuracy drop information on the calibration dataset for each layer that supports 8-bit computations, using the Normalized Root-Mean-Square Deviation metric (illustrated in the sketch after this list). This metric allows putting all layers in decreasing order, so that it is clear which layers cause the biggest accuracy drop.
  4. Eliminating the layers with the largest accuracy drop from 8-bit computation by switching them back to fp32 mode. After eliminating a layer, the Calibration Tool computes the accuracy of the resulting configuration. The tool keeps switching layers back to fp32 computations, in the order defined in step 3, until the resulting accuracy satisfies the required level with respect to the accepted accuracy drop delta (1% by default). However, calibrating a model with all layers returned to fp32 computations is meaningless, so this condition serves as a hard stop for the whole calibration process.
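
The per-layer ranking from step 3 can be illustrated with a short sketch that computes the normalized root-mean-square deviation between the fp32 and int8 outputs of each layer and sorts the layers by it. Normalizing by the fp32 value range is an assumption made for this example; the Calibration Tool may normalize differently.

    import numpy as np

    def nrmsd(fp32_out, int8_out):
        """Normalized root-mean-square deviation between the fp32 output of a layer
        and the (dequantized, fp32-valued) output of the same layer executed in int8."""
        rmsd = np.sqrt(np.mean((fp32_out - int8_out) ** 2))
        value_range = float(np.max(fp32_out) - np.min(fp32_out))
        return rmsd / value_range if value_range > 0 else 0.0

    def rank_layers_by_drop(layer_outputs):
        """Return layer names sorted so that the largest deviation comes first.

        layer_outputs maps a layer name to its (fp32_output, int8_output) pair."""
        scores = {name: nrmsd(f, i) for name, (f, i) in layer_outputs.items()}
        return sorted(scores, key=scores.get, reverse=True)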

When the calibration completes, the tool writes the resulting statistics and the modified Intermediate Representation (IR) to the .xml file. The tool does not change the IR structure, so the layer hierarchy stays the same. However, the layers chosen to be executed in the 8-bit format are marked with the appropriate profile attribute, and their statistics are stored at the end of the .xml file.

When you pass the calibrated IR to the CPU plugin, the plugin automatically recognizes it as calibrated and performs 8-bit inference. Other plugins do not support 8-bit inference, so if you pass the calibrated model to them, the statistics and additional attributes are ignored and the model is inferred in the precision that the plugin supports.
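
As an example of how a calibrated IR is consumed, the sketch below loads it with the Inference Engine Python API and runs it on the CPU plugin. The file names are placeholders, and the IECore class is assumed to be available in the installed Inference Engine version.

    from openvino.inference_engine import IECore

    ie = IECore()
    # Placeholder file names; the calibrated IR keeps the usual .xml/.bin pair.
    net = ie.read_network(model="model_i8.xml", weights="model_i8.bin")
    # The CPU plugin recognizes the embedded statistics and runs the marked layers in int8.
    exec_net = ie.load_network(network=net, device_name="CPU")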

Run-Time Stage: Quantization

This is the second stage of 8-bit integer inference. After you load the calibrated model IR to the CPU plugin, the plugin performs quantization for 8-bit inference:

Performance Counters

Information about layer precision is stored in the performance counters that are available from the Inference Engine API. The layers have the following marks:

For example, the performance counters table for the Inception model can look as follows:

inception_5b/5x5_reduce EXECUTED layerType: Convolution realTime: 417 cpu: 417 execType: gemm_blas_I8
inception_5b/output EXECUTED layerType: Concat realTime: 34 cpu: 34 execType: ref_I8
inception_5b/output_U8_nhw... EXECUTED layerType: Reorder realTime: 33092 cpu: 33092 execType: reorder_I8
inception_5b/output_oScale... EXECUTED layerType: ScaleShift realTime: 1390 cpu: 1390 execType: jit_avx2_FP32
inception_5b/output_oScale... EXECUTED layerType: Reorder realTime: 143 cpu: 143 execType: reorder_FP32
inception_5b/pool EXECUTED layerType: Pooling realTime: 59301 cpu: 59301 execType: ref_any_I8

The execType column of the table includes inference primitives with specific suffixes.
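
To check programmatically which layers ran in int8, you can query the performance counters through the Python API and inspect the execType values, as in the sketch below. The input name and shape are placeholders, and enabling counters via the PERF_COUNT config key is an assumption about the installed version.

    import numpy as np
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model="model_i8.xml", weights="model_i8.bin")  # placeholder file names
    exec_net = ie.load_network(network=net, device_name="CPU",
                               config={"PERF_COUNT": "YES"})             # enable per-layer counters

    # "data" and the shape are placeholders for the real network input.
    exec_net.infer({"data": np.zeros((1, 3, 224, 224), dtype=np.float32)})

    # Layers executed in 8-bit integer mode carry an I8 suffix in exec_type.
    for layer, counters in exec_net.requests[0].get_perf_counts().items():
        print(layer, counters["status"], counters["exec_type"], counters["real_time"])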

See Also