Quantizing with Accuracy Control¶
This is the advanced quantization flow that allows you to apply 8-bit quantization to a model while controlling the accuracy metric. This is achieved by keeping the most impactful operations within the model in the original precision. The flow is based on the Basic 8-bit quantization and has the following differences:
Besides the calibration dataset, a validation dataset is required to compute the accuracy metric. Both datasets can refer to the same data in the simplest case.
A validation function that computes the accuracy metric is required. It can be a function that is already available in the source framework or a custom function.
Since accuracy validation is run several times during the quantization process, quantization with accuracy control can take more time than the Basic 8-bit quantization flow.
The resulting model can provide a smaller performance improvement than the Basic 8-bit quantization flow because some of the operations are kept in the original precision.
Currently, 8-bit quantization with accuracy control is available only for models in the OpenVINO representation.
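For example, such a model can be obtained by reading an OpenVINO IR file; the model.xml path below is a placeholder:

import openvino as ov

# read a model in the OpenVINO representation from IR files
core = ov.Core()
model = core.read_model("model.xml")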
The steps for the quantization with accuracy control are described below.
Prepare calibration and validation datasets¶
This step is similar to the Basic 8-bit quantization flow. The only difference is that two datasets, calibration and validation, are required.
import nncf
import torch

calibration_loader = torch.utils.data.DataLoader(...)

def transform_fn(data_item):
    images, _ = data_item
    return images

calibration_dataset = nncf.Dataset(calibration_loader, transform_fn)
validation_dataset = nncf.Dataset(calibration_loader, transform_fn)
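If you have dedicated validation data, you can wrap a separate loader instead of reusing the calibration one; the loader below is a hypothetical example:

# use a dedicated loader for the validation data
validation_loader = torch.utils.data.DataLoader(...)
validation_dataset = nncf.Dataset(validation_loader, transform_fn)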
Prepare validation function¶
The validation function receives an openvino.CompiledModel object and a validation dataset and returns the accuracy metric value. The following code snippet shows an example of a validation function for an OpenVINO model:
import numpy as np
import torch
import openvino
from sklearn.metrics import accuracy_score

def validate(model: openvino.CompiledModel,
             validation_loader: torch.utils.data.DataLoader) -> float:
    predictions = []
    references = []

    output = model.outputs[0]

    for images, target in validation_loader:
        pred = model(images)[output]
        predictions.append(np.argmax(pred, axis=1))
        references.append(target)

    predictions = np.concatenate(predictions, axis=0)
    references = np.concatenate(references, axis=0)
    return accuracy_score(predictions, references)
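Before running quantization, you can sanity-check the function by measuring the baseline accuracy of the original model. A sketch, assuming model is the openvino.Model to be quantized and reusing the calibration loader as validation data:

# measure the baseline FP32 accuracy with the same validation function
compiled_fp32 = ov.compile_model(model)
baseline_accuracy = validate(compiled_fp32, calibration_loader)
print(f"FP32 accuracy: {baseline_accuracy:.4f}")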
Run quantization with accuracy control¶
The nncf.quantize_with_accuracy_control() function is used to run quantization with accuracy control. The following code snippet shows an example of quantization with accuracy control for an OpenVINO model:
model = ...  # openvino.Model object
quantized_model = nncf.quantize_with_accuracy_control(
    model,
    calibration_dataset=calibration_dataset,
    validation_dataset=validation_dataset,
    validation_fn=validate,
    max_drop=0.01,
    drop_type=nncf.DropType.ABSOLUTE,
)
max_drop defines the accuracy drop threshold. The quantization process stops when the degradation of the accuracy metric on the validation dataset is less than max_drop. The default value is 0.01. NNCF will stop the quantization and report an error if the max_drop value cannot be reached.
drop_type defines how the accuracy drop will be calculated: ABSOLUTE (used by default) or RELATIVE.
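With a relative criterion, the allowed degradation scales with the baseline metric value. A sketch reusing the datasets and validation function defined above:

# allow at most 1% relative accuracy degradation instead of an absolute 0.01 drop
quantized_model = nncf.quantize_with_accuracy_control(
    model,
    calibration_dataset=calibration_dataset,
    validation_dataset=validation_dataset,
    validation_fn=validate,
    max_drop=0.01,
    drop_type=nncf.DropType.RELATIVE,
)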
After that, the model can be compiled and run with OpenVINO:
import openvino as ov

# compile the model to transform quantized operations to int8
model_int8 = ov.compile_model(quantized_model)

input_fp32 = ...  # FP32 model input
res = model_int8(input_fp32)
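The same validation function can be reused to confirm that the compiled INT8 model stays within the requested accuracy drop, as in this sketch:

# check the accuracy of the compiled INT8 model against the baseline
int8_accuracy = validate(model_int8, calibration_loader)
print(f"INT8 accuracy: {int8_accuracy:.4f}")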
To save the model in the OpenVINO Intermediate Representation (IR), use ov.save_model(). When the original model is in FP32 precision, it is advisable to preserve FP32 precision for the most impactful operations that were reverted from INT8 to FP32. To do this, pass compress_to_fp16=False when saving, because ov.save_model() compresses model weights to FP16 by default, and this conversion can affect accuracy.
# save the model with compress_to_fp16=False to avoid an accuracy drop from compression
# of unquantized weights to FP16. This is necessary because
# nncf.quantize_with_accuracy_control(...) keeps the most impactful operations within
# the model in the original precision to achieve the specified model accuracy
ov.save_model(quantized_model, "quantized_model.xml", compress_to_fp16=False)
The nncf.quantize_with_accuracy_control() API supports all the parameters of the Basic 8-bit quantization API, so a model can be quantized with accuracy control and a custom configuration.
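As an illustration, the sketch below combines accuracy control with several Basic 8-bit quantization parameters; the preset, subset size, and ignored node name are illustrative choices rather than required values:

quantized_model = nncf.quantize_with_accuracy_control(
    model,
    calibration_dataset=calibration_dataset,
    validation_dataset=validation_dataset,
    validation_fn=validate,
    max_drop=0.01,
    drop_type=nncf.DropType.ABSOLUTE,
    subset_size=300,  # number of calibration samples to use
    preset=nncf.QuantizationPreset.MIXED,  # symmetric weights, asymmetric activations
    ignored_scope=nncf.IgnoredScope(names=["last_fc_layer"]),  # hypothetical node name
)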
If the accuracy or performance of the quantized model is not satisfactory, you can try Training-time Optimization as the next step.