Post-Training Quantization of OpenAI Whisper model with NNCF

This tutorial is also available as a Jupyter notebook that can be cloned directly from GitHub. See the installation guide for instructions to run this tutorial locally on Windows, Linux or macOS.


The goal of this tutorial is to demonstrate how to speed up the model by applying 8-bit post-training quantization from NNCF (Neural Network Compression Framework) and to infer the quantized model via the OpenVINO™ Toolkit. The optimization process contains the following steps:

  1. Quantize the converted OpenVINO model from the 227-whisper-convert notebook with NNCF.

  2. Check the quantized model's output on the demo video.

  3. Compare the model size, performance, and accuracy of the FP32 and quantized INT8 models.

NOTE: You should run the 227-whisper-convert notebook first to generate the OpenVINO IR models used for quantization.

Install dependencies.

%pip install -q "openvino>=2023.1.0"
%pip install -q "nncf>=2.6.0"
%pip install -q datasets librosa soundfile
%pip install -q evaluate jiwer

Select the device for running inference with OpenVINO from the dropdown list.

import ipywidgets as widgets

from openvino import Core
core = Core()

device = widgets.Dropdown(
    options=core.available_devices + ["AUTO"],
    value="AUTO",
    description="Device:",
    disabled=False,
)
device

Dropdown(description='Device:', index=4, options=('CPU', 'GPU.0', 'GPU.1', 'GPU.2', 'AUTO'), value='AUTO')

Select the task for the model:

  • transcribe - generate audio transcription in the source language (automatically detected).

  • translate - generate audio transcription with translation into English.

task = widgets.Select(
    options=["transcribe", "translate"],
    value="translate",
    description="Select task:",
    disabled=False,
)
task
Select(description='Select task:', index=1, options=('transcribe', 'translate'), value='translate')

Create and initialize quantization

NNCF enables post-training quantization by adding quantization layers into the model graph and then using a subset of the training dataset to initialize the parameters of these additional quantization layers. The framework is designed so that modifications to your original training code are minor. Quantization is the simplest scenario and requires only a few modifications.
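The role of the calibration data can be sketched with a toy symmetric 8-bit quantizer. This is an illustration of the general idea only, not NNCF's actual algorithm: statistics gathered from calibration samples determine the scale that maps floating-point activations onto the int8 range.

```python
import numpy as np

# Toy illustration (not NNCF internals): derive an int8 scale from
# "calibration" samples, then quantize/dequantize activations with it.
rng = np.random.default_rng(0)
calibration = [rng.standard_normal(8).astype(np.float32) for _ in range(10)]
max_abs = max(float(np.abs(s).max()) for s in calibration)
scale = max_abs / 127.0  # map the observed activation range onto int8

def quantize(x):
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def dequantize(q):
    return q.astype(np.float32) * scale

x = calibration[0]
max_error = float(np.abs(dequantize(quantize(x)) - x).max())
assert max_error <= scale / 2 + 1e-6  # rounding error is at most half a quantization step
```

A larger, more representative calibration set yields better range statistics and therefore smaller quantization error, which is why the note at the end of this tutorial suggests increasing the calibration dataset size.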

The optimization process contains the following steps:

  1. Create a calibration dataset for quantization.

  2. Run nncf.quantize to obtain quantized models.

  3. Serialize the INT8 model using openvino.runtime.serialize function.

Set paths to the model converted in 227-whisper-convert notebook and the paths where quantized models will be saved.

from pathlib import Path

WHISPER_ENCODER_OV = Path("whisper_encoder.xml")
WHISPER_DECODER_OV = Path("whisper_decoder.xml")

WHISPER_ENCODER_OV_INT8 = Path("whisper_encoder_int8.xml")
WHISPER_DECODER_OV_INT8 = Path("whisper_decoder_int8.xml")

Load FP32 model IR.

import whisper
from utils import patch_whisper_for_ov_inference, OpenVINOAudioEncoder, OpenVINOTextDecoder

model_id = "base"
model_fp32 = whisper.load_model(model_id).to("cpu").eval()

model_fp32.encoder = OpenVINOAudioEncoder(core, WHISPER_ENCODER_OV, device=device.value)
model_fp32.decoder = OpenVINOTextDecoder(core, WHISPER_DECODER_OV, device=device.value)

Prepare calibration datasets

Whisper consists of an encoder and a decoder model. We need to collect calibration data for both of them.

Below we overwrite encoder/decoder forward methods in order to collect calibration samples.

from contextlib import contextmanager
from functools import partial
import openvino as ov
from typing import Optional
import torch

encoder_calibration_data = []
decoder_calibration_data = []

@contextmanager
def calibration_data_collection():
    # Temporarily replace the forward methods so that every inference call is recorded
    original_encoder_forward = model_fp32.encoder.forward
    original_decoder_forward = model_fp32.decoder.forward
    try:
        model_fp32.encoder.forward = partial(encoder_forward, model_fp32.encoder)
        model_fp32.decoder.forward = partial(decoder_forward, model_fp32.decoder)
        yield
    finally:
        model_fp32.encoder.forward = original_encoder_forward
        model_fp32.decoder.forward = original_decoder_forward

def encoder_forward(self, mel: torch.Tensor):
    encoder_calibration_data.append(mel)
    return torch.from_numpy(self.compiled_model(mel)[self.output_blob])

def decoder_forward(self, x: torch.Tensor, xa: torch.Tensor, kv_cache: Optional[dict] = None):
    feed_dict = {'x': ov.Tensor(x.numpy()), 'xa': ov.Tensor(xa.numpy())}
    feed_dict = self.preprocess_kv_cache_inputs(feed_dict, kv_cache)
    decoder_calibration_data.append(feed_dict)
    res = self.compiled_model(feed_dict)
    return self.postprocess_outputs(res)
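The patching technique used here can be shown in isolation: functools.partial binds a plain function to an existing object so it can shadow a bound method and record every call. The class and function names below are illustrative, not part of the tutorial's code.

```python
from functools import partial

class Greeter:
    def greet(self):
        return "hello"

# A replacement "method" that records each call before producing the result,
# analogous to the forward wrappers that append calibration samples above.
def recording_greet(self, log):
    log.append("called")
    return "hello"

g = Greeter()
calls = []
# The instance attribute shadows the class method, just as
# model_fp32.encoder.forward is shadowed during calibration.
g.greet = partial(recording_greet, g, calls)

assert g.greet() == "hello"
assert calls == ["called"]
```

Because the attribute is set on the instance rather than the class, deleting it (or restoring the saved original, as the context manager does) returns the object to its unpatched behavior.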

We use a portion of the validation split of the librispeech_asr dataset from Hugging Face as calibration data.

from datasets import load_dataset
from tqdm.notebook import tqdm


CALIBRATION_DATASET_SIZE = 30

calibration_dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True).take(CALIBRATION_DATASET_SIZE)

with calibration_data_collection():
    for data_item in tqdm(calibration_dataset, desc="Collecting calibration data", total=CALIBRATION_DATASET_SIZE):
        model_fp32.transcribe(data_item["audio"]["array"].astype("float32"), task=task.value)
Collecting calibration data:   0%|          | 0/30 [00:00<?, ?it/s]

Quantize Whisper encoder and decoder models

Quantize both the encoder and decoder models using the nncf.quantize() API and save the quantized IRs afterwards.

import nncf
from openvino.runtime import serialize

print("Quantizing encoder...")
quantized_encoder = nncf.quantize(
    model=core.read_model(WHISPER_ENCODER_OV),
    calibration_dataset=nncf.Dataset(encoder_calibration_data),
    model_type=nncf.ModelType.TRANSFORMER,
    advanced_parameters=nncf.AdvancedQuantizationParameters(
        smooth_quant_alpha=0.5      # Smooth Quant algorithm reduces activation quantization error; optimal alpha value was obtained through grid search
    )
)
serialize(quantized_encoder, WHISPER_ENCODER_OV_INT8)
print(f"Saved quantized encoder at ./{WHISPER_ENCODER_OV_INT8}")

print("Quantizing decoder...")
quantized_decoder = nncf.quantize(
    model=core.read_model(WHISPER_DECODER_OV),
    calibration_dataset=nncf.Dataset(decoder_calibration_data),
    model_type=nncf.ModelType.TRANSFORMER,
    advanced_parameters=nncf.AdvancedQuantizationParameters(
        smooth_quant_alpha=0.95     # Smooth Quant algorithm reduces activation quantization error; optimal alpha value was obtained through grid search
    )
)
serialize(quantized_decoder, WHISPER_DECODER_OV_INT8)
print(f"Saved quantized decoder at ./{WHISPER_DECODER_OV_INT8}")
INFO:nncf:NNCF initialized successfully. Supported frameworks detected: torch, tensorflow, onnx, openvino
Quantizing encoder...
2023-08-30 19:38:10.314501: I tensorflow/core/util/] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2023-08-30 19:38:10.347770: I tensorflow/core/platform/] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-08-30 19:38:10.917857: W tensorflow/compiler/tf2tensorrt/utils/] TF-TRT Warning: Could not find TensorRT
Statistics collection: 100%|████████████████████████████████████████████████████████████████████████████████████████████████| 60/60 [00:04<00:00, 12.26it/s]
Applying Smooth Quant: 100%|████████████████████████████████████████████████████████████████████████████████████████████████| 24/24 [00:00<00:00, 60.29it/s]
INFO:nncf:18 ignored nodes was found by name in the NNCFGraph
Statistics collection: 100%|████████████████████████████████████████████████████████████████████████████████████████████████| 60/60 [00:14<00:00,  4.14it/s]
Applying Fast Bias correction: 100%|████████████████████████████████████████████████████████████████████████████████████████| 32/32 [00:06<00:00,  5.22it/s]
Saved quantized encoder at ./whisper_encoder_int8.xml
Quantizing decoder...
Statistics collection: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 664/664 [00:12<00:00, 54.92it/s]
Applying Smooth Quant: 100%|████████████████████████████████████████████████████████████████████████████████████████████████| 38/38 [00:00<00:00, 39.37it/s]
INFO:nncf:36 ignored nodes was found by name in the NNCFGraph
Statistics collection: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 664/664 [00:34<00:00, 19.20it/s]
Applying Fast Bias correction: 100%|████████████████████████████████████████████████████████████████████████████████████████| 48/48 [00:07<00:00,  6.30it/s]
Saved quantized decoder at ./whisper_decoder_int8.xml

Transcribe video with quantized OpenVINO model

Load INT8 models saved above into a new instance of Whisper model.

model_int8 = whisper.load_model(model_id).to("cpu").eval()

model_int8.encoder = OpenVINOAudioEncoder(core, WHISPER_ENCODER_OV_INT8, device=device.value)
model_int8.decoder = OpenVINOTextDecoder(core, WHISPER_DECODER_OV_INT8, device=device.value)

Select a video for transcription as in 227-whisper-convert notebook.

link = widgets.Text(
    value="",
    placeholder="Type link for video",
    description="Video:",
    disabled=False,
)
link
Text(value='', description='Video:', placeholder='Type link for video')
from pytube import YouTube

print(f"Downloading video {link.value} started")

output_file = Path("downloaded_video.mp4")
yt = YouTube(link.value)
yt.streams.get_highest_resolution().download(filename=str(output_file))
print(f"Video saved to {output_file}")
Downloading video started
Video saved to downloaded_video.mp4
from utils import get_audio

audio = get_audio(output_file)

Run transcription with the quantized model.

transcription = model_int8.transcribe(audio, task=task.value)
from utils import prepare_srt

srt_lines = prepare_srt(transcription)
# save transcription
with output_file.with_suffix(".srt").open("w") as f:
    f.writelines(srt_lines)
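prepare_srt comes from the notebook's utils module. A hedged sketch of the kind of formatting such a helper performs on Whisper's segments (the function names here are stand-ins, not the actual implementation) looks like this:

```python
# Hypothetical stand-in for utils.prepare_srt: convert Whisper segments
# (dicts with "start", "end", "text" keys) into SRT-formatted entries.
def format_timestamp(seconds: float) -> str:
    ms = int(round(seconds * 1000))
    hours, ms = divmod(ms, 3_600_000)
    minutes, ms = divmod(ms, 60_000)
    secs, ms = divmod(ms, 1_000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{ms:03d}"

def segments_to_srt(segments):
    lines = []
    for i, seg in enumerate(segments, start=1):
        lines.append(
            f"{i}\n{format_timestamp(seg['start'])} --> "
            f"{format_timestamp(seg['end'])}\n{seg['text'].strip()}\n\n"
        )
    return lines

demo = [{"start": 0.0, "end": 7.0, "text": " What's that? Oh, wow."}]
print(segments_to_srt(demo)[0])
```

Each SRT entry is a sequence number, a `HH:MM:SS,mmm --> HH:MM:SS,mmm` time range, and the caption text, matching the subtitle output shown below.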

Now let us see the results.

widgets.Video.from_file(output_file, loop=False, width=800, height=800)
Video(value=b'x00x00x00x18ftypmp42x00x00x00x00isommp42x00x00Aimoovx00x00x00lmvhd...', height='800…
00:00:00,000 --> 00:00:07,000
 What's that? Oh, wow.

00:00:09,000 --> 00:00:11,000
 Hello humans.

00:00:14,000 --> 00:00:15,000
 Focus on me.

00:00:15,000 --> 00:00:16,000
 Focus on the guard.

00:00:18,000 --> 00:00:20,000
 Don't tell anyone what you've seen in here.

00:00:22,000 --> 00:00:24,000
 Have you seen what's in there?

00:00:24,000 --> 00:00:25,000
 They have intel.

00:00:25,000 --> 00:00:27,000
 This is where it all changes.

As you can see, the transcription is practically identical to the one produced by the FP32 model.

Compare performance and accuracy of the FP32 and INT8 IRs

Compare model file size.

def calculate_compression_rate(model_path_ov, model_path_ov_int8):
    model_size_fp32 = model_path_ov.with_suffix(".bin").stat().st_size / 1024
    model_size_int8 = model_path_ov_int8.with_suffix(".bin").stat().st_size / 1024
    print(f"Model: {model_path_ov.stem}")
    print(f"    * FP32 IR model size: {model_size_fp32:.2f} KB")
    print(f"    * INT8 IR model size: {model_size_int8:.2f} KB")
    print(f"    * Model compression rate: {model_size_fp32 / model_size_int8:.3f}")

calculate_compression_rate(WHISPER_ENCODER_OV, WHISPER_ENCODER_OV_INT8)
calculate_compression_rate(WHISPER_DECODER_OV, WHISPER_DECODER_OV_INT8)
Model: whisper_encoder
    * FP32 IR model size: 40216.07 KB
    * INT8 IR model size: 21092.37 KB
    * Model compression rate: 1.907
Model: whisper_decoder
    * FP32 IR model size: 101961.09 KB
    * INT8 IR model size: 78058.77 KB
    * Model compression rate: 1.306

To measure the inference performance of the FP32 and INT8 encoder/decoder models, we use the median inference time over the calibration dataset. This gives an approximate estimate of the speedup of the quantized models.

NOTE: For the most accurate performance estimation, it is recommended to run benchmark_app with static shapes in a terminal/command prompt after closing other applications.

import time
import numpy as np

def calculate_call_inference_time(model, dataset):
    inference_time = []
    for data_item in tqdm(dataset[:100], desc="Measuring performance"):
        start = time.perf_counter()
        model(data_item)
        end = time.perf_counter()
        inference_time.append(end - start)
    return np.median(inference_time)

encoder_time_fp32 = calculate_call_inference_time(model_fp32.encoder.compiled_model, encoder_calibration_data)
encoder_time_int8 = calculate_call_inference_time(model_int8.encoder.compiled_model, encoder_calibration_data)
print(f"Encoder performance speedup: {encoder_time_fp32 / encoder_time_int8:.3f}")

decoder_time_fp32 = calculate_call_inference_time(model_fp32.decoder.compiled_model, decoder_calibration_data)
decoder_time_int8 = calculate_call_inference_time(model_int8.decoder.compiled_model, decoder_calibration_data)
print(f"Decoder performance speedup: {decoder_time_fp32 / decoder_time_int8:.3f}")
Measuring performance:   0%|          | 0/60 [00:00<?, ?it/s]
Measuring performance:   0%|          | 0/60 [00:00<?, ?it/s]
Encoder performance speedup: 1.325
Measuring performance:   0%|          | 0/100 [00:00<?, ?it/s]
Measuring performance:   0%|          | 0/100 [00:00<?, ?it/s]
Decoder performance speedup: 1.609

We measure the whole transcription performance separately, because a single Whisper transcribe() call triggers multiple encoder and decoder inference calls, and the number of these calls varies dynamically with the model output. In this experiment we use the mean time instead of the median because transcription times are less uniform.
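The choice between mean and median matters when timings are skewed by occasional long clips; a quick illustration with made-up timings:

```python
import numpy as np

# Made-up timings in seconds: one long clip skews the distribution.
times = np.array([0.8, 0.9, 1.0, 1.1, 5.0])
print(np.median(times))  # 1.0 - robust to the outlier
print(np.mean(times))    # ~1.76 - pulled up by the outlier
```

The median suits the per-call encoder/decoder measurements above, where outliers are noise, while the mean better reflects total transcription cost, where long clips are part of the workload.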

We also compare accuracy values of the FP32 and INT8 models on a subset of librispeech_asr test dataset. We rely on the Word Error Rate (WER) metric and compute accuracy as (1 - WER).
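WER is the word-level edit distance between reference and prediction divided by the number of reference words. A minimal self-contained sketch of the metric (independent of the evaluate library used below):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[-1][-1] / len(ref)

# One dropped word out of six: WER = 1/6, accuracy = (1 - WER) * 100 ≈ 83.33%
wer_value = word_error_rate("the cat sat on the mat", "the cat sat on mat")
print(f"WER: {wer_value:.4f}, accuracy: {(1 - wer_value) * 100:.2f}%")
```

Note that WER counts substitutions, insertions, and deletions alike, so it can exceed 1 on very poor transcriptions, making the derived "accuracy" negative in the worst case.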

from evaluate import load
from transformers import WhisperProcessor

wer = load("wer")

TEST_DATASET_SIZE = 100

test_dataset = load_dataset("librispeech_asr", "clean", split="test", streaming=True).take(TEST_DATASET_SIZE)

def calculate_transcription_time_and_accuracy(model, dataset):
    processor = WhisperProcessor.from_pretrained("openai/whisper-large")

    ground_truths = []
    predictions = []
    inference_time = []
    for data_item in tqdm(dataset, desc="Measuring performance and accuracy", total=TEST_DATASET_SIZE):
        audio = data_item["audio"]["array"].astype("float32")

        start_time = time.perf_counter()
        transcription = model.transcribe(audio, task=task.value)
        end_time = time.perf_counter()
        inference_time.append(end_time - start_time)

        reference = processor.tokenizer._normalize(data_item["text"])
        prediction = processor.tokenizer._normalize(transcription["text"])
        ground_truths.append(reference)
        predictions.append(prediction)

    word_accuracy = (1 - wer.compute(references=ground_truths, predictions=predictions)) * 100
    mean_inference_time = np.mean(inference_time)
    return mean_inference_time, word_accuracy

transcription_time_fp32, accuracy_fp32 = calculate_transcription_time_and_accuracy(model_fp32, test_dataset)
transcription_time_int8, accuracy_int8 = calculate_transcription_time_and_accuracy(model_int8, test_dataset)
print(f"Whisper transcription performance speedup: {transcription_time_fp32 / transcription_time_int8:.3f}")
print(f"Whisper transcription word accuracy. FP32: {accuracy_fp32:.2f}%. INT8: {accuracy_int8:.2f}%. Accuracy drop: {accuracy_fp32 - accuracy_int8:.2f}%.")
Measuring performance and accuracy:   0%|          | 0/100 [00:00<?, ?it/s]
Measuring performance and accuracy:   0%|          | 0/100 [00:00<?, ?it/s]
Whisper transcription performance speedup: 1.446
Whisper transcription word accuracy. FP32: 95.61%. INT8: 94.23%. Accuracy drop: 1.38%.

NOTE: The accuracy drop can generally be reduced by increasing the calibration dataset size.