Converting a Kaldi* Model

A summary of the steps for optimizing and deploying a model that was trained with Kaldi*:

  1. Configure the Model Optimizer for Kaldi*.
  2. Convert a Kaldi* Model to produce an optimized Intermediate Representation (IR) of the model based on the trained network topology, weights, and biases.
  3. Test the model in the Intermediate Representation format using the Inference Engine in the target environment via the provided Inference Engine sample applications.
  4. Integrate the Inference Engine in your application to deploy the model in the target environment.

NOTE: The Model Optimizer supports the nnet1 and nnet2 formats of Kaldi models. Support of the nnet3 format is limited.

Supported Topologies

Convert a Kaldi* Model

To convert a Kaldi* model:

  1. Go to the <INSTALL_DIR>/deployment_tools/model_optimizer directory.
  2. Use the mo.py script to convert the model, specifying the path to the input model .nnet or .mdl file:
    python3 mo.py --input_model <INPUT_MODEL>.nnet

Two groups of parameters are available to convert your model: framework-agnostic parameters and Kaldi*-specific parameters.

Using Kaldi*-Specific Conversion Parameters

The following list provides the Kaldi*-specific parameters.

Kaldi-specific parameters:
  --counts COUNTS          A file name with full path to the counts file
  --remove_output_softmax  Removes the SoftMax layer that is the output layer
  --remove_memory          Removes the Memory layer and adds new inputs and outputs instead

Examples of CLI Commands
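For illustration only (the model and counts file names here are placeholders, not files shipped with the toolkit), a conversion that supplies a counts file and removes the output SoftMax layer might look like:

```sh
python3 mo.py --input_model model.nnet --counts model.counts --remove_output_softmax
```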

NOTE: The Model Optimizer can remove the SoftMax layer only if the topology has a single output.

NOTE: For sample inference of Kaldi models, you can use the Inference Engine Speech Recognition sample application. The sample supports models with one output. If your model has several outputs, specify the desired one with the --output option.

If you want to convert a model for inference on Intel® Movidius™ Myriad™, use the --remove_memory option. It removes Memory layers from the IR and adds new inputs and outputs in their place. The Model Optimizer prints the mapping between these inputs and outputs. For example:

    [ WARNING ] Add input/output mapped Parameter_0_for_Offset_fastlstm2.r_trunc__2Offset_fastlstm2.r_trunc__2_out -> Result_for_Offset_fastlstm2.r_trunc__2Offset_fastlstm2.r_trunc__2_out
    [ WARNING ] Add input/output mapped Parameter_1_for_Offset_fastlstm2.r_trunc__2Offset_fastlstm2.r_trunc__2_out -> Result_for_Offset_fastlstm2.r_trunc__2Offset_fastlstm2.r_trunc__2_out
    [ WARNING ] Add input/output mapped Parameter_0_for_iteration_Offset_fastlstm3.c_trunc__3390 -> Result_for_iteration_Offset_fastlstm3.c_trunc__3390

Based on this mapping, link inputs and outputs in your application manually as follows:

  1. Initialize inputs from the mapping as zeros in the first frame of an utterance.
  2. Copy output blobs from the mapping to the corresponding inputs. For example, data from Result_for_Offset_fastlstm2.r_trunc__2Offset_fastlstm2.r_trunc__2_out must be copied to Parameter_0_for_Offset_fastlstm2.r_trunc__2Offset_fastlstm2.r_trunc__2_out.
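The two steps above form a simple state feed-back loop. The sketch below shows the mechanics only; it is not Inference Engine API code. The names in io_mapping are shortened stand-ins for the Parameter_*/Result_* pairs printed by the Model Optimizer, and infer_frame is a hypothetical stand-in for your model's per-frame inference call.

```python
import numpy as np

# Hypothetical mapping from the added inputs (Parameter_*) to the added
# outputs (Result_*) reported by the Model Optimizer; names shortened.
io_mapping = {
    "Parameter_0_for_fastlstm2_out": "Result_for_fastlstm2_out",
}

STATE_SHAPE = (1, 4)  # shape of the state blobs; depends on your model

def infer_frame(inputs):
    """Stand-in for per-frame inference: here it just increments each
    state input by 1 so the feed-back loop is observable."""
    return {result: inputs[param] + 1.0
            for param, result in io_mapping.items()}

inputs = {}
for frame in range(3):
    if frame == 0:
        # Step 1: initialize the mapped inputs to zeros on the first
        # frame of an utterance.
        for param in io_mapping:
            inputs[param] = np.zeros(STATE_SHAPE, dtype=np.float32)
    outputs = infer_frame(inputs)
    # Step 2: copy each mapped output blob back to its corresponding
    # input before processing the next frame.
    for param, result in io_mapping.items():
        inputs[param] = outputs[result]
```

With a real model, replace infer_frame with an actual inference request and reset the mapped inputs to zeros at the start of every new utterance.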

Supported Kaldi* Layers

Refer to Supported Framework Layers for the list of supported standard layers.