Quantize Data2Vec Speech Recognition Model using NNCF PTQ API

This Jupyter notebook can be launched after a local installation only.

This tutorial demonstrates how to use NNCF (Neural Network Compression Framework) 8-bit quantization in post-training mode (without a fine-tuning pipeline) to optimize the Data2Vec speech recognition model for high-speed inference via the OpenVINO™ Toolkit. This notebook uses a fine-tuned data2vec-audio-base-960h PyTorch model trained on the LibriSpeech ASR corpus. The tutorial is designed to be extendable to custom models and datasets. It consists of the following steps (a minimal preview of the quantization call is sketched after the list):

  • Download and prepare model.

  • Define data loading and accuracy validation functionality.

  • Prepare the model for quantization and quantize.

  • Compare performance of the original and quantized models.

  • Compare accuracy of the original and quantized models.
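
As a preview of the quantization step covered in detail later in this notebook, the core of NNCF post-training quantization is a single nncf.quantize call over a calibration dataset. Below is a minimal sketch, not the notebook's exact code: dataset and ov_model refer to objects created later in this tutorial, and the transform function is illustrative.

import numpy as np
import nncf

# Sketch of NNCF PTQ: `dataset` and `ov_model` are created later in the notebook.
# The transform maps a dataset item to the model input expected by OpenVINO.
calibration_dataset = nncf.Dataset(dataset, lambda sample: np.array(sample["input_values"]))
quantized_model = nncf.quantize(
    ov_model,  # OpenVINO model obtained from ov.convert_model
    calibration_dataset,  # samples used to estimate activation ranges
    model_type=nncf.ModelType.TRANSFORMER,  # enables transformer-specific quantization rules
)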

Download and prepare model

data2vec is a framework for self-supervised representation learning for images, speech, and text as described in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language (Baevski et al., 2022). The algorithm uses the same learning mechanism for different modalities.

pre-trained pipeline

In our case, we will use the data2vec-audio-base-960h model, which was fine-tuned on 960 hours of audio from the LibriSpeech Automatic Speech Recognition corpus and is distributed as part of HuggingFace Transformers.

Obtain PyTorch model representation

To instantiate the PyTorch model class, use the Data2VecAudioForCTC.from_pretrained method, providing the model ID for downloading from the HuggingFace Hub. Model weights and configuration files are downloaded automatically on first use. Keep in mind that downloading the files can take several minutes and depends on your internet connection.

Additionally, we can create a processor class, which is responsible for model-specific pre- and post-processing steps.

%pip install -q "openvino>=2023.3.0" "nncf>=2.7"
%pip install -q datasets "torchmetrics>=0.11.0" "torch>=2.1.0" --extra-index-url https://download.pytorch.org/whl/cpu
%pip install -q soundfile librosa "transformers>=4.36.2" --extra-index-url https://download.pytorch.org/whl/cpu
from transformers import Wav2Vec2Processor, Data2VecAudioForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/data2vec-audio-base-960h")
model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-base-960h")

Convert model to OpenVINO Intermediate Representation
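
ov.convert_model accepts the PyTorch model instance together with an example input used for tracing and returns an openvino.Model; ov.save_model then serializes it to the IR format (an .xml file describing the topology and a .bin file with the weights, compressed to FP16 by default).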

from pathlib import Path

# Set model directory
MODEL_DIR = Path("model")
MODEL_DIR.mkdir(exist_ok=True)
import openvino as ov
import torch

core = ov.Core()

BATCH_SIZE = 1
MAX_SEQ_LENGTH = 30480

ir_model_path = MODEL_DIR / "data2vec-audio-base.xml"

if not ir_model_path.exists():
    ov_model = ov.convert_model(model, example_input=torch.zeros([1, MAX_SEQ_LENGTH], dtype=torch.float))
    ov.save_model(ov_model, str(ir_model_path))
    print("IR model saved to {}".format(ir_model_path))
else:
    print("Read IR model from {}".format(ir_model_path))
    ov_model = core.read_model(ir_model_path)

Prepare inference data

For demonstration purposes, we will use a short dummy version of the LibriSpeech dataset, patrickvonplaten/librispeech_asr_dummy, to speed up model evaluation. Model accuracy may differ from the accuracy reported in the paper. To reproduce the original accuracy, use the librispeech_asr dataset.
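
Note that the model expects 16 kHz audio, which is why the sampling rate is passed to the processor below; the LibriSpeech samples are already recorded at this rate.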

from datasets import load_dataset

ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")


# define preprocessing function for converting audio to input values for model
def map_to_input(batch):
    preprocessed_signal = processor(
        batch["audio"]["array"],
        return_tensors="pt",
        padding="longest",
        sampling_rate=batch["audio"]["sampling_rate"],
    )
    input_values = preprocessed_signal.input_values
    batch["input_values"] = input_values
    return batch


# apply the preprocessing function to the dataset and remove the audio column to save memory, as we no longer need it
dataset = ds.map(map_to_input, batched=False, remove_columns=["audio"])

test_sample = ds[0]["audio"]

Check model inference result

The code below is used for running model inference on a single sample from the dataset. It contains the following steps:

  • Get the input_values tensor as model input.

  • Run model inference and obtain logits.

  • Find the token ids with the highest probability, using argmax.

  • Decode the predicted token ids, using the processor.

For comparison, the same function is provided for the OpenVINO model.

import numpy as np


# inference function for pytorch
def torch_infer(model, sample):
    logits = model(torch.Tensor(sample["input_values"])).logits
    # take argmax and decode
    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)
    return transcription


# inference function for openvino
def ov_infer(model, sample):
    output = model.output(0)
    logits = model(np.array(sample["input_values"]))[output]
    predicted_ids = np.argmax(logits, axis=-1)
    transcription = processor.batch_decode(torch.from_numpy(predicted_ids))
    return transcription

Select inference device for OpenVINO

import ipywidgets as widgets

core = ov.Core()
device = widgets.Dropdown(
    options=core.available_devices + ["AUTO"],
    value="AUTO",
    description="Device:",
    disabled=False,
)

device
Dropdown(description='Device:', index=4, options=('CPU', 'GPU.0', 'GPU.1', 'GPU.2', 'AUTO'), value='AUTO')
core = ov.Core()

pt_transcription = torch_infer(model, dataset[0])
compiled_model = core.compile_model(ov_model, device.value)
ov_transcription = ov_infer(compiled_model, dataset[0])
import IPython.display as ipd

print(f"[Reference]:     {dataset[0]['text']}")
print(f"[PyTorch]:       {pt_transcription[0]}")
print(f"[OpenVINO FP16]: {ov_transcription[0]}")
ipd.Audio(test_sample["array"], rate=16000)
[Reference]:     BECAUSE YOU WERE SLEEPING INSTEAD OF CONQUERING THE LOVELY ROSE PRINCESS HAS BECOME A FIDDLE WITHOUT A BOW WHILE POOR SHAGGY SITS THERE A COOING DOVE
[PyTorch]:       BECAUSE YOU WERE SLEEPING INSTEAD OF CONQUERING THE LOVELY RUSE PRINCESS HAS BECOME A FIDDLE WITHOUT A BOW A POOR SHAGGY SITS THERE ACCOOING DOVE
[OpenVINO FP16]: BECAUSE YOU WERE SLEEPING INSTEAD OF CONQUERING THE LOVELY RUSE PRINCESS HAS BECOME A FIDDLE WITHOUT A BOW A POOR SHAGGY SITS THERE ACCOOING DOVE
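
For the accuracy comparison promised in the steps above, Word Error Rate (WER) from torchmetrics (installed earlier) can be used. Below is a minimal sketch assuming the dataset and inference functions defined above; compute_wer is an illustrative helper, not the notebook's exact code.

from torchmetrics.text import WordErrorRate


def compute_wer(dataset, model, infer_fn):
    # Accumulate WER over the dataset for a given inference function
    # (torch_infer or ov_infer defined above); lower is better.
    wer = WordErrorRate()
    for sample in dataset:
        transcription = infer_fn(model, sample)
        wer.update(transcription, [sample["text"]])
    return wer.compute()


# Example usage with the compiled OpenVINO model:
# compute_wer(dataset, compiled_model, ov_infer)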