Converting a Kaldi ASpIRE Chain Time Delay Neural Network (TDNN) Model

First, download a pre-trained model for the ASpIRE Chain Time Delay Neural Network (TDNN) from the official Kaldi project website.

Converting an ASpIRE Chain TDNN Model to IR

Generate the Intermediate Representation of the model by running Model Optimizer with the following parameters:

mo --input_model exp/chain/tdnn_7b/final.mdl --output output

The IR will have two inputs: input for data, and ivector for ivectors.
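
To verify the conversion, you can load the IR with the OpenVINO Runtime Python API and print the input names. This is a minimal sketch; it assumes Model Optimizer wrote final.xml and final.bin to the current directory:

    from openvino.runtime import Core

    core = Core()
    model = core.read_model("final.xml")  # IR produced by Model Optimizer

    # Expect two inputs: "input" (data) and "ivector" (ivectors)
    for model_input in model.inputs:
        print(model_input.get_any_name(), model_input.get_partial_shape())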

Example: Running ASpIRE Chain TDNN Model with the Speech Recognition Sample


Before continuing with this part of the article, familiarize yourself with the Speech Recognition sample.

In this example, the input data contains one utterance from one speaker.

To run the ASpIRE Chain TDNN model with the Speech Recognition sample, you need to prepare the environment by following the steps below:

  1. Download the Kaldi repository.

  2. Build it by following the instructions in the repository (a brief build sketch follows this list).

  3. Download the model archive from the Kaldi website.

  4. Extract the downloaded model archive to the egs/aspire/s5 folder of the Kaldi repository.
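
For reference, here is a minimal sketch of steps 1 and 2. The repository URL is the official Kaldi one; the build commands are a typical sequence, and the exact procedure for your platform is described in the INSTALL instructions in the repository:

    git clone https://github.com/kaldi-asr/kaldi.git
    cd kaldi/tools && make
    cd ../src && ./configure --shared && make depend && make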

Once everything is prepared, continue with the following steps:

  1. Prepare the model for decoding. Refer to the README.txt file from the downloaded model archive for instructions.

  2. Convert data and ivectors to .ark format. Refer to the corresponding sections below for instructions.

Preparing Data

If you have a .wav data file, convert it to the .ark format using the following command:

<path_to_kaldi_repo>/src/featbin/compute-mfcc-feats --config=<path_to_kaldi_repo>/egs/aspire/s5/conf/mfcc_hires.conf scp:./wav.scp ark,scp:feats.ark,feats.scp

Make sure feats.scp references feats.ark by its absolute path to avoid errors in later commands.
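
The wav.scp file passed to compute-mfcc-feats is a standard Kaldi script file that maps an utterance ID to an audio file, one utterance per line. An illustrative example with placeholder ID and paths:

    utt1 /home/user/data/utt1.wav

After the command finishes, feats.scp should reference feats.ark by an absolute path in the same way, for example:

    utt1 /home/user/data/feats.ark:13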

Preparing Ivectors

Prepare ivectors for the Speech Recognition sample:

  1. Copy the feats.scp file to the egs/aspire/s5/ directory of the built Kaldi repository and navigate there:

    cp feats.scp <path_to_kaldi_repo>/egs/aspire/s5/
    cd <path_to_kaldi_repo>/egs/aspire/s5/
  2. Extract ivectors from the data:

    ./steps/online/nnet2/extract_ivectors_online.sh --nj 1 --ivector_period <max_frame_count_in_utterance> <data folder> exp/tdnn_7b_chain_online/ivector_extractor <ivector folder>

    You can simplify the preparation of ivectors for the Speech Recognition sample: pass the maximum number of frames in utterances as the --ivector_period value to get only one ivector per utterance.

To get the maximum number of frames in utterances, use the following command, which prints the largest frame count found in feats.scp:

../../../src/featbin/feat-to-len scp:feats.scp ark,t: | cut -d' ' -f 2 - | sort -rn | head -1

As a result, you will find the ivector_online.1.ark file in <ivector folder>.

  3. Go to the <ivector folder>:

    cd <ivector folder>
  4. Convert the ivector_online.1.ark file to text format using the copy-feats tool. Run the following command:

    <path_to_kaldi_repo>/src/featbin/copy-feats --binary=False ark:ivector_online.1.ark ark,t:ivector_online.1.ark.txt
  5. For the Speech Recognition sample, the .ark file must contain an ivector for each frame. Copy the ivector frame_count times by running the following script in a Python interpreter:

    import subprocess
    subprocess.run(["<path_to_kaldi_repo>/src/featbin/feat-to-len", "scp:<path_to_kaldi_repo>/egs/aspire/s5/feats.scp", "ark,t:feats_length.txt"])
    f = open("ivector_online.1.ark.txt", "r")
    g = open("ivector_online_ie.ark.txt", "w")
    length_file = open("feats_length.txt", "r")
    for line in f:
        if "[" not in line:
            for i in range(frame_count):  # repeat each value row once per frame
                line = line.replace("]", " ")
                g.write(line)
        else:
            g.write(line)  # header row "<utt_id>  [" comes first, so frame_count is set before use
            frame_count = int(length_file.read().split(" ")[1])  # frame count of the utterance
    g.write("]")  # restore the matrix terminator
    f.close()
    g.close()
    length_file.close()
  6. Create an .ark file from the .txt file:

    <path_to_kaldi_repo>/src/featbin/copy-feats --binary=True ark,t:ivector_online_ie.ark.txt ark:ivector_online_ie.ark
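
As an optional sanity check (not part of the original procedure), you can confirm that the new archive contains one ivector row per frame by printing its length with feat-to-len and comparing the result with the frame count obtained earlier:

    <path_to_kaldi_repo>/src/featbin/feat-to-len ark:ivector_online_ie.ark ark,t:-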

Running the Speech Recognition Sample

Run the Speech Recognition sample with the created ivector .ark file:

speech_sample -i feats.ark,ivector_online_ie.ark -m final.xml -d CPU -o prediction.ark -cw_l 17 -cw_r 12

The results can be decoded as described in the “Use of Sample in Kaldi Speech Recognition Pipeline” section of the Speech Recognition sample description article.