Automatic Speech Recognition Python Sample

Note

This sample is deprecated and will no longer be maintained after OpenVINO 2023.2 (LTS), mainly because of its outdated state and its extensive use of GNA, which will not be supported by OpenVINO beyond 2023.2.

This sample demonstrates how to do synchronous inference with an acoustic model based on Kaldi* neural models and speech feature vectors.

The sample works with Kaldi ARK or NumPy* uncompressed NPZ files, so it does not cover the end-to-end speech recognition scenario (speech to text): additional preprocessing (feature extraction) is required to get a feature vector from a speech signal, and postprocessing (decoding) is required to produce text from scores.
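
For illustration, a minimal sketch of preparing a NumPy NPZ input file for the sample. The array shape and the utterance key name here are assumptions for illustration, not requirements confirmed by this document:

import numpy as np

# Hypothetical 2-D feature matrix: 100 frames x 40 features per frame (float32).
features = np.random.rand(100, 40).astype(np.float32)

# Each named array in the .npz file is treated as one utterance (the name is illustrative).
np.savez('input.npz', utterance_0=features)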

| Options                    | Values                                                                            |
|----------------------------|-----------------------------------------------------------------------------------|
| Validated Models           | Acoustic model based on Kaldi* neural models (see the Model Preparation section)  |
| Model Format               | OpenVINO™ toolkit Intermediate Representation (.xml + .bin)                       |
| Supported devices          | See the Execution Modes section below and List Supported Devices                  |
| Other language realization | C++                                                                               |

The Automatic Speech Recognition Python sample application demonstrates how to use the following Python API in applications:

| Feature                 | API                                                                                                                                                                                   | Description                                                              |
|-------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------|
| Import/Export Model     | openvino.runtime.Core.import_model, openvino.runtime.CompiledModel.export_model                                                                                                       | The GNA plugin supports loading and saving of the GNA-optimized model    |
| Model Operations        | openvino.runtime.Model.add_outputs, openvino.runtime.set_batch, openvino.runtime.CompiledModel.inputs, openvino.runtime.CompiledModel.outputs, openvino.runtime.ConstOutput.any_name | Manage the model: configure batch_size and the input and output tensors  |
| Synchronous Infer       | openvino.runtime.CompiledModel.create_infer_request, openvino.runtime.InferRequest.infer                                                                                             | Do synchronous inference                                                 |
| InferRequest Operations | openvino.runtime.InferRequest.get_input_tensor, openvino.runtime.InferRequest.model_outputs, openvino.runtime.InferRequest.model_inputs                                              | Get info about the model using the infer request API                     |
| InferRequest Operations | openvino.runtime.InferRequest.query_state, openvino.runtime.VariableState.reset                                                                                                      | Get and reset the CompiledModel state                                    |
| Profiling               | openvino.runtime.InferRequest.profiling_info, openvino.runtime.ProfilingInfo.real_time                                                                                               | Get infer request profiling info                                         |

Basic OpenVINO™ Runtime API is covered by the Hello Classification Python* Sample.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import sys
from io import BytesIO
from timeit import default_timer
from typing import Dict

import numpy as np
import openvino as ov

from arg_parser import parse_args
from file_options import read_utterance_file, write_utterance_file
from utils import (GNA_ATOM_FREQUENCY, GNA_CORE_FREQUENCY,
                   calculate_scale_factor, compare_with_reference,
                   get_input_layouts, get_sorted_scale_factors, log,
                   set_scale_factors)


def do_inference(data: Dict[str, np.ndarray], infer_request: ov.InferRequest, cw_l: int = 0, cw_r: int = 0) -> np.ndarray:
    """Do a synchronous matrix inference."""
    frames_to_infer = {}
    result = {}

    batch_size = infer_request.model_inputs[0].shape[0]
    num_of_frames = next(iter(data.values())).shape[0]

    for output in infer_request.model_outputs:
        result[output.any_name] = np.ndarray((num_of_frames, np.prod(tuple(output.shape)[1:])))

    for i in range(-cw_l, num_of_frames + cw_r, batch_size):
        if i < 0:
            index = 0
        elif i >= num_of_frames:
            index = num_of_frames - 1
        else:
            index = i

        for _input in infer_request.model_inputs:
            frames_to_infer[_input.any_name] = data[_input.any_name][index:index + batch_size]
            num_of_frames_to_infer = len(frames_to_infer[_input.any_name])

            # Add [batch_size - num_of_frames_to_infer] zero rows to 2d numpy array
            # Used to infer fewer frames than the batch size
            frames_to_infer[_input.any_name] = np.pad(
                frames_to_infer[_input.any_name],
                [(0, batch_size - num_of_frames_to_infer), (0, 0)],
            )

            frames_to_infer[_input.any_name] = frames_to_infer[_input.any_name].reshape(_input.tensor.shape)

        frame_results = infer_request.infer(frames_to_infer)

        if i - cw_r < 0:
            continue

        for output in frame_results.keys():
            vector_result = frame_results[output].reshape((batch_size, result[output.any_name].shape[1]))
            result[output.any_name][i - cw_r:i - cw_r + batch_size] = vector_result[:num_of_frames_to_infer]

    return result


def main():
    args = parse_args()

# --------------------------- Step 1. Initialize OpenVINO Runtime Core ------------------------------------------------
    log.info('Creating OpenVINO Runtime Core')
    core = ov.Core()

# --------------------------- Step 2. Read a model --------------------------------------------------------------------
    if args.model:
        log.info(f'Reading the model: {args.model}')
        # (.xml and .bin files) or (.onnx file)
        model = core.read_model(args.model)

# --------------------------- Step 3. Apply preprocessing -------------------------------------------------------------
        model.add_outputs(args.output[0] + args.reference[0])

        if args.layout:
            layouts = get_input_layouts(args.layout, model.inputs)

        ppp = ov.preprocess.PrePostProcessor(model)

        for i in range(len(model.inputs)):
            ppp.input(i).tensor().set_element_type(ov.Type.f32)

            input_name = model.input(i).get_any_name()

            if args.layout and input_name in layouts.keys():
                ppp.input(i).tensor().set_layout(ov.Layout(layouts[input_name]))
                ppp.input(i).model().set_layout(ov.Layout(layouts[input_name]))

        for i in range(len(model.outputs)):
            ppp.output(i).tensor().set_element_type(ov.Type.f32)

        model = ppp.build()

        if args.batch_size:
            batch_size = args.batch_size if args.context_window_left == args.context_window_right == 0 else 1

            if any((not _input.node.layout.empty for _input in model.inputs)):
                ov.set_batch(model, batch_size)
            else:
                log.warning('Layout is not set for any input, so custom batch size is not set')

# --------------------------- Step 4. Configure plugin ----------------------------------------------------------------
    devices = args.device.replace('HETERO:', '').split(',')
    plugin_config = {}

    if 'GNA' in args.device:
        gna_device_mode = devices[0] if '_' in devices[0] else 'GNA_AUTO'
        devices[0] = 'GNA'

        plugin_config['GNA_DEVICE_MODE'] = gna_device_mode
        plugin_config['GNA_PRECISION'] = f'I{args.quantization_bits}'
        plugin_config['GNA_EXEC_TARGET'] = args.exec_target
        plugin_config['GNA_PWL_MAX_ERROR_PERCENT'] = str(args.pwl_me)

        # Set a GNA scale factor
        if args.import_gna_model:
            if args.scale_factor[1]:
                log.error(f'Custom scale factor can not be set for imported gna model: {args.import_gna_model}')
                return 1
            else:
                log.info(f'Using scale factor from provided imported gna model: {args.import_gna_model}')
        else:
            if args.scale_factor[1]:
                scale_factors = get_sorted_scale_factors(args.scale_factor, model.inputs)
            else:
                scale_factors = []

                for file_name in args.input[1]:
                    _, utterances = read_utterance_file(file_name)
                    scale_factor = calculate_scale_factor(utterances[0])
                    log.info('Using scale factor(s) calculated from first utterance')
                    scale_factors.append(str(scale_factor))

            set_scale_factors(plugin_config, scale_factors, model.inputs)

        if args.export_embedded_gna_model:
            plugin_config['GNA_FIRMWARE_MODEL_IMAGE'] = args.export_embedded_gna_model
            plugin_config['GNA_FIRMWARE_MODEL_IMAGE_GENERATION'] = args.embedded_gna_configuration

        if args.performance_counter:
            plugin_config['PERF_COUNT'] = 'YES'

    device_str = f'HETERO:{",".join(devices)}' if 'HETERO' in args.device else devices[0]

# --------------------------- Step 5. Loading model to the device -----------------------------------------------------
    log.info('Loading the model to the plugin')
    if args.model:
        compiled_model = core.compile_model(model, device_str, plugin_config)
    else:
        with open(args.import_gna_model, 'rb') as f:
            buf = BytesIO(f.read())
            compiled_model = core.import_model(buf, device_str, plugin_config)

# --------------------------- Exporting GNA model using InferenceEngine AOT API ---------------------------------------
    if args.export_gna_model:
        log.info(f'Writing GNA Model to {args.export_gna_model}')
        user_stream = compiled_model.export_model()
        with open(args.export_gna_model, 'wb') as f:
            f.write(user_stream)
        return 0

    if args.export_embedded_gna_model:
        log.info(f'Exported GNA embedded model to file {args.export_embedded_gna_model}')
        log.info(f'GNA embedded model export done for GNA generation {args.embedded_gna_configuration}')
        return 0

# --------------------------- Step 6. Set up input --------------------------------------------------------------------
    input_layer_names = args.input[0] if args.input[0] else [_input.any_name for _input in compiled_model.inputs]
    input_file_names = args.input[1]

    if len(input_layer_names) != len(input_file_names):
        log.error(f'Number of model inputs ({len(compiled_model.inputs)}) is not equal '
                  f'to number of ark files ({len(input_file_names)})')
        return 3

    input_file_data = [read_utterance_file(file_name) for file_name in input_file_names]

    infer_data = [
        {
            input_layer_names[j]: input_file_data[j].utterances[i]
            for j in range(len(input_file_data))
        }
        for i in range(len(input_file_data[0].utterances))
    ]

    output_layer_names = args.output[0] if args.output[0] else [compiled_model.outputs[0].any_name]
    output_file_names = args.output[1]

    reference_layer_names = args.reference[0] if args.reference[0] else [compiled_model.outputs[0].any_name]
    reference_file_names = args.reference[1]

    reference_file_data = [read_utterance_file(file_name) for file_name in reference_file_names]

    references = [
        {
            reference_layer_names[j]: reference_file_data[j].utterances[i]
            for j in range(len(reference_file_data))
        }
        for i in range(len(input_file_data[0].utterances))
    ]

# --------------------------- Step 7. Create infer request ------------------------------------------------------------
    infer_request = compiled_model.create_infer_request()

# --------------------------- Step 8. Do inference --------------------------------------------------------------------
    log.info('Starting inference in synchronous mode')
    results = []
    total_infer_time = 0

    for i in range(len(infer_data)):
        start_infer_time = default_timer()

        # Reset states between utterance inferences to remove a memory impact
        for state in infer_request.query_state():
            state.reset()

        results.append(do_inference(
            infer_data[i],
            infer_request,
            args.context_window_left,
            args.context_window_right,
        ))

        infer_time = default_timer() - start_infer_time
        total_infer_time += infer_time
        num_of_frames = infer_data[i][input_layer_names[0]].shape[0]
        avg_infer_time_per_frame = infer_time / num_of_frames

# --------------------------- Step 9. Process output ------------------------------------------------------------------
        log.info('')
        log.info(f'Utterance {i}:')
        log.info(f'Total time in Infer (HW and SW): {infer_time * 1000:.2f}ms')
        log.info(f'Frames in utterance: {num_of_frames}')
        log.info(f'Average Infer time per frame: {avg_infer_time_per_frame * 1000:.2f}ms')

        for name in set(reference_layer_names + output_layer_names):
            log.info('')
            log.info(f'Output layer name: {name}')
            log.info(f'Number scores per frame: {results[i][name].shape[1]}')

            if name in references[i].keys():
                log.info('')
                compare_with_reference(results[i][name], references[i][name])

        if args.performance_counter:
            if 'GNA' in args.device:
                total_cycles = infer_request.profiling_info[0].real_time.total_seconds()
                stall_cycles = infer_request.profiling_info[1].real_time.total_seconds()
                active_cycles = total_cycles - stall_cycles
                frequency = 10**6
                if args.arch == 'CORE':
                    frequency *= GNA_CORE_FREQUENCY
                else:
                    frequency *= GNA_ATOM_FREQUENCY
                total_inference_time = total_cycles / frequency
                active_time = active_cycles / frequency
                stall_time = stall_cycles / frequency
                log.info('')
                log.info('Performance Statistics of GNA Hardware')
                log.info(f'   Total Inference Time: {(total_inference_time * 1000):.4f} ms')
                log.info(f'   Active Time: {(active_time * 1000):.4f} ms')
                log.info(f'   Stall Time:  {(stall_time * 1000):.4f} ms')

    log.info('')
    log.info(f'Total sample time: {total_infer_time * 1000:.2f}ms')

    for i in range(len(output_file_names)):
        log.info(f'Saving results from "{output_layer_names[i]}" layer to {output_file_names[i]}')
        data = [results[j][output_layer_names[i]] for j in range(len(input_file_data[0].utterances))]
        write_utterance_file(output_file_names[i], input_file_data[0].keys, data)

# ----------------------------------------------------------------------------------------------------------------------
    log.info('This sample is an API example, '
             'for any performance measurements please use the dedicated benchmark_app tool\n')
    return 0


if __name__ == '__main__':
    sys.exit(main())

How It Works

At startup, the sample application reads command-line parameters, loads the specified model and input data to the OpenVINO™ Runtime plugin, performs synchronous inference on all speech utterances stored in the input file, and logs each step in a standard output stream.

You can see an explicit description of each sample step in the Integration Steps section of the “Integrate OpenVINO™ Runtime with Your Application” guide.

GNA-specific details

Quantization

If the GNA device is selected (for example, using the -d GNA flag), the GNA OpenVINO™ Runtime plugin quantizes the model and input feature vector sequence to integer representation before performing inference.

Several neural model quantization modes are supported:

  • static - The first utterance in the input file is scanned for its dynamic range. The scale factor (a floating-point scalar multiplier) required to scale the maximum input value of the first utterance to 16384 (15 bits) is then used for all subsequent inputs. The neural model is quantized to accommodate the scaled input dynamic range (see the sketch after this list).

  • user-defined - The user may specify a scale factor via the -sf flag that will be used for static quantization.
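
For illustration, a minimal sketch of the static scale-factor calculation described above. The sample's actual helper, calculate_scale_factor in utils.py, may differ in detail; taking the maximum absolute value is an assumption here:

import numpy as np

def calculate_scale_factor(utterance: np.ndarray) -> float:
    # Multiplier that maps the utterance's maximum (absolute) value to 16384, i.e. 15 bits.
    max_abs = float(np.max(np.abs(utterance)))
    return 16384.0 / max_abs if max_abs > 0.0 else 1.0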

The -qb flag provides a hint to the GNA plugin regarding the preferred target weight resolution for all layers. For example, when -qb 8 is specified, the plugin will use 8-bit weights wherever possible in the model (see the configuration sketch in the Execution Modes section below).

Note

It is not always possible to use 8-bit weights due to GNA hardware limitations. For example, convolutional layers always use 16-bit weights (GNA hardware version 1 and 2). This limitation will be removed in GNA hardware version 3 and higher.

Execution Modes

Several execution modes are supported via the -d flag (see the configuration sketch after this list):

  • CPU - All calculations are performed on CPU device using CPU Plugin.

  • GPU - All calculations are performed on GPU device using GPU Plugin.

  • NPU - All calculations are performed on NPU device using NPU Plugin.

  • GNA_AUTO - GNA hardware is used if available and the driver is installed. Otherwise, the GNA device is emulated in fast-but-not-bit-exact mode.

  • GNA_HW - GNA hardware is used if available and the driver is installed. Otherwise, an error will occur.

  • GNA_SW - Deprecated. The GNA device is emulated in fast-but-not-bit-exact mode.

  • GNA_SW_FP32 - Substitutes low-precision parameters and calculations with floating-point (FP32) ones.

  • GNA_SW_EXACT - GNA device is emulated in bit-exact mode.
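
For illustration, a hedged sketch of selecting an execution mode (and the -qb weight-resolution hint) through the plugin configuration, mirroring Step 4 of the sample above; the model path is illustrative:

import openvino as ov

core = ov.Core()
model = core.read_model('wsj_dnn5b.xml')  # illustrative path

plugin_config = {
    'GNA_DEVICE_MODE': 'GNA_SW_EXACT',  # one of the modes listed above
    'GNA_PRECISION': 'I8',              # corresponds to -qb 8; 'I16' is the 16-bit default
}
compiled_model = core.compile_model(model, 'GNA', plugin_config)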

Loading and Saving Models

The GNA plugin supports loading and saving of the GNA-optimized model (non-IR) via the -rg and -wg flags, which makes it possible to avoid the cost of full model quantization at run time. The GNA plugin also supports export of firmware-compatible embedded model images for the Intel® Speech Enabling Developer Kit and Amazon Alexa* Premium Far-Field Voice Development Kit via the -we flag (save only).

In addition to performing inference directly from a GNA model file, these options make it possible to (see the sketch after this list):

  • Convert from IR format to GNA format model file (-m, -wg)

  • Convert from IR format to embedded format model file (-m, -we)

  • Convert from GNA format to embedded format model file (-rg, -we)
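
For illustration, a sketch of the -wg / -rg round trip, using the same export_model and import_model calls as the sample above; the paths are illustrative:

from io import BytesIO

import openvino as ov

core = ov.Core()
compiled_model = core.compile_model(core.read_model('wsj_dnn5b.xml'), 'GNA')

# -wg: save the GNA-optimized model after compilation.
with open('model.gna', 'wb') as f:
    f.write(compiled_model.export_model())

# -rg: import the saved model later, skipping quantization at run time.
with open('model.gna', 'rb') as f:
    restored_model = core.import_model(BytesIO(f.read()), 'GNA')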

Running

Run the application with the -h option to see the usage message:

python speech_sample.py -h

Usage message:

usage: speech_sample.py [-h] (-m MODEL | -rg IMPORT_GNA_MODEL) -i INPUT [-o OUTPUT] [-r REFERENCE] [-d DEVICE] [-bs [1-8]]
                        [-layout LAYOUT] [-qb [8, 16]] [-sf SCALE_FACTOR] [-wg EXPORT_GNA_MODEL]
                        [-we EXPORT_EMBEDDED_GNA_MODEL] [-we_gen [GNA1, GNA3]]
                        [--exec_target [GNA_TARGET_2_0, GNA_TARGET_3_0]] [-pc] [-a [CORE, ATOM]] [-iname INPUT_LAYERS]
                        [-oname OUTPUT_LAYERS] [-cw_l CONTEXT_WINDOW_LEFT] [-cw_r CONTEXT_WINDOW_RIGHT] [-pwl_me PWL_ME]

optional arguments:
  -m MODEL, --model MODEL
                        Path to an .xml file with a trained model (required if -rg is missing).
  -rg IMPORT_GNA_MODEL, --import_gna_model IMPORT_GNA_MODEL
                        Read GNA model from file using path/filename provided (required if -m is missing).

Options:
  -h, --help            Show this help message and exit.
  -i INPUT, --input INPUT
                        Required. Path(s) to input file(s).
                        Usage for a single file/layer: <input_file.ark> or <input_file.npz>.
                        Example of usage for several files/layers: <layer1>:<port_num1>=<input_file1.ark>,<layer2>:<port_num2>=<input_file2.ark>.
  -o OUTPUT, --output OUTPUT
                        Optional. Output file name(s) to save scores (inference results).
                        Usage for a single file/layer: <output_file.ark> or <output_file.npz>.
                        Example of usage for several files/layers: <layer1>:<port_num1>=<output_file1.ark>,<layer2>:<port_num2>=<output_file2.ark>.
  -r REFERENCE, --reference REFERENCE
                        Read reference score file(s) and compare inference results with reference scores.
                        Usage for a single file/layer: <reference_file.ark> or <reference_file.npz>.
                        Example of usage for several files/layers: <layer1>:<port_num1>=<reference_file1.ark>,<layer2>:<port_num2>=<reference_file2.ark>.
  -d DEVICE, --device DEVICE
                        Optional. Specify a target device to infer on. CPU, GPU, NPU, GNA_AUTO, GNA_HW, GNA_SW_FP32,
                        GNA_SW_EXACT and HETERO with combination of GNA as the primary device and CPU as a secondary (e.g.
                        HETERO:GNA,CPU) are supported. The sample will look for a suitable plugin for device specified.
                        Default value is CPU.
  -bs [1-8], --batch_size [1-8]
                        Optional. Batch size 1-8.
  -layout LAYOUT        Optional. Custom layout in format: "input0[value0],input1[value1]" or "[value]" (applied to all
                        inputs)
  -qb [8, 16], --quantization_bits [8, 16]
                        Optional. Weight resolution in bits for GNA quantization: 8 or 16 (default 16).
  -sf SCALE_FACTOR, --scale_factor SCALE_FACTOR
                        Optional. User-specified input scale factor for GNA quantization.
                        If the model contains multiple inputs, provide scale factors by separating them with commas.
                        For example: <layer1>:<sf1>,<layer2>:<sf2> or just <sf> to be applied to all inputs.
  -wg EXPORT_GNA_MODEL, --export_gna_model EXPORT_GNA_MODEL
                        Optional. Write GNA model to file using path/filename provided.
  -we EXPORT_EMBEDDED_GNA_MODEL, --export_embedded_gna_model EXPORT_EMBEDDED_GNA_MODEL
                        Optional. Write GNA embedded model to file using path/filename provided.
  -we_gen [GNA1, GNA3], --embedded_gna_configuration [GNA1, GNA3]
                        Optional. GNA generation configuration string for embedded export. Can be GNA1 (default) or GNA3.
  --exec_target [GNA_TARGET_2_0, GNA_TARGET_3_0]
                        Optional. Specify GNA execution target generation. By default, generation corresponds to the GNA HW
                        available in the system or the latest fully supported generation by the software. See the GNA
                        Plugin's GNA_EXEC_TARGET config option description.
  -pc, --performance_counter
                        Optional. Enables performance report (specify -a to ensure arch accurate results).
  -a [CORE, ATOM], --arch [CORE, ATOM]
                        Optional. Specify architecture. CORE, ATOM with the combination of -pc.
  -cw_l CONTEXT_WINDOW_LEFT, --context_window_left CONTEXT_WINDOW_LEFT
                        Optional. Number of frames for left context windows (default is 0). Works only with context window
                        models. If you use the cw_l or cw_r flag, then batch size argument is ignored.
  -cw_r CONTEXT_WINDOW_RIGHT, --context_window_right CONTEXT_WINDOW_RIGHT
                        Optional. Number of frames for right context windows (default is 0). Works only with context window
                        models. If you use the cw_l or cw_r flag, then batch size argument is ignored.
  -pwl_me PWL_ME        Optional. The maximum percent of error for PWL function. The value must be in <0, 100> range. The
                        default value is 1.0.

Model Preparation

You can use the following model conversion command to convert a Kaldi nnet1 or nnet2 neural model to the OpenVINO™ toolkit Intermediate Representation format:

mo --framework kaldi --input_model wsj_dnn5b.nnet --counts wsj_dnn5b.counts --remove_output_softmax --output_dir <OUTPUT_MODEL_DIR>

The following pre-trained models are available:

  • rm_cnn4a_smbr

  • rm_lstm4f

  • wsj_dnn5b_smbr

All of them can be downloaded from <https://storage.openvinotoolkit.org/models_contrib/speech/2021.2>.

Speech Inference

You can run inference on Intel® processors with the GNA co-processor (or the emulation library):

python speech_sample.py -m wsj_dnn5b.xml -i dev93_10.ark -r dev93_scores_10.ark -d GNA_AUTO -o result.npz

Note

  • Before running the sample with a trained model, make sure the model is converted to the Intermediate Representation (IR) format (*.xml + *.bin) using the model conversion API.

  • The sample supports input and output in the NumPy file format (.npz).

  • Passing a flag that takes only a single value, such as -m, multiple times (for example, python classification_sample_async.py -m model.xml -m model2.xml) results in only the last value being used.

Sample Output

The sample application logs each step in a standard output stream.

[ INFO ] Creating OpenVINO Runtime Core
[ INFO ] Reading the model: /models/wsj_dnn5b_smbr_fp32.xml
[ INFO ] Using scale factor(s) calculated from first utterance
[ INFO ] For input 0 using scale factor of 2175.4322418
[ INFO ] Loading the model to the plugin
[ INFO ] Starting inference in synchronous mode
[ INFO ]
[ INFO ] Utterance 0:
[ INFO ] Total time in Infer (HW and SW): 6326.06ms
[ INFO ] Frames in utterance: 1294
[ INFO ] Average Infer time per frame: 4.89ms
[ INFO ]
[ INFO ] Output blob name: affinetransform14
[ INFO ] Number scores per frame: 3425
[ INFO ]
[ INFO ] max error: 0.7051840
[ INFO ] avg error: 0.0448388
[ INFO ] avg rms error: 0.0582387
[ INFO ] stdev error: 0.0371650
[ INFO ]
[ INFO ] Utterance 1:
[ INFO ] Total time in Infer (HW and SW): 4526.57ms
[ INFO ] Frames in utterance: 1005
[ INFO ] Average Infer time per frame: 4.50ms
[ INFO ]
[ INFO ] Output blob name: affinetransform14
[ INFO ] Number scores per frame: 3425
[ INFO ]
[ INFO ] max error: 0.7575974
[ INFO ] avg error: 0.0452166
[ INFO ] avg rms error: 0.0586013
[ INFO ] stdev error: 0.0372769
[ INFO ]
[ INFO ] Utterance 2:
[ INFO ] Total time in Infer (HW and SW): 6636.56ms
[ INFO ] Frames in utterance: 1471
[ INFO ] Average Infer time per frame: 4.51ms
[ INFO ]
[ INFO ] Output blob name: affinetransform14
[ INFO ] Number scores per frame: 3425
[ INFO ]
[ INFO ] max error: 0.7191710
[ INFO ] avg error: 0.0472226
[ INFO ] avg rms error: 0.0612991
[ INFO ] stdev error: 0.0390846
[ INFO ]
[ INFO ] Utterance 3:
[ INFO ] Total time in Infer (HW and SW): 3927.01ms
[ INFO ] Frames in utterance: 845
[ INFO ] Average Infer time per frame: 4.65ms
[ INFO ]
[ INFO ] Output blob name: affinetransform14
[ INFO ] Number scores per frame: 3425
[ INFO ]
[ INFO ] max error: 0.7436461
[ INFO ] avg error: 0.0477581
[ INFO ] avg rms error: 0.0621334
[ INFO ] stdev error: 0.0397457
[ INFO ]
[ INFO ] Utterance 4:
[ INFO ] Total time in Infer (HW and SW): 3891.49ms
[ INFO ] Frames in utterance: 855
[ INFO ] Average Infer time per frame: 4.55ms
[ INFO ]
[ INFO ] Output blob name: affinetransform14
[ INFO ] Number scores per frame: 3425
[ INFO ]
[ INFO ] max error: 0.7071600
[ INFO ] avg error: 0.0449147
[ INFO ] avg rms error: 0.0585048
[ INFO ] stdev error: 0.0374897
[ INFO ]
[ INFO ] Utterance 5:
[ INFO ] Total time in Infer (HW and SW): 3378.61ms
[ INFO ] Frames in utterance: 699
[ INFO ] Average Infer time per frame: 4.83ms
[ INFO ]
[ INFO ] Output blob name: affinetransform14
[ INFO ] Number scores per frame: 3425
[ INFO ]
[ INFO ] max error: 0.8870468
[ INFO ] avg error: 0.0479243
[ INFO ] avg rms error: 0.0625490
[ INFO ] stdev error: 0.0401951
[ INFO ]
[ INFO ] Utterance 6:
[ INFO ] Total time in Infer (HW and SW): 4034.31ms
[ INFO ] Frames in utterance: 790
[ INFO ] Average Infer time per frame: 5.11ms
[ INFO ]
[ INFO ] Output blob name: affinetransform14
[ INFO ] Number scores per frame: 3425
[ INFO ]
[ INFO ] max error: 0.7648273
[ INFO ] avg error: 0.0482702
[ INFO ] avg rms error: 0.0629734
[ INFO ] stdev error: 0.0404429
[ INFO ]
[ INFO ] Utterance 7:
[ INFO ] Total time in Infer (HW and SW): 2854.04ms
[ INFO ] Frames in utterance: 622
[ INFO ] Average Infer time per frame: 4.59ms
[ INFO ]
[ INFO ] Output blob name: affinetransform14
[ INFO ] Number scores per frame: 3425
[ INFO ]
[ INFO ] max error: 0.7389560
[ INFO ] avg error: 0.0465543
[ INFO ] avg rms error: 0.0604941
[ INFO ] stdev error: 0.0386294
[ INFO ]
[ INFO ] Utterance 8:
[ INFO ] Total time in Infer (HW and SW): 2493.28ms
[ INFO ] Frames in utterance: 548
[ INFO ] Average Infer time per frame: 4.55ms
[ INFO ]
[ INFO ] Output blob name: affinetransform14
[ INFO ] Number scores per frame: 3425
[ INFO ]
[ INFO ] max error: 0.6680136
[ INFO ] avg error: 0.0439341
[ INFO ] avg rms error: 0.0574614
[ INFO ] stdev error: 0.0370353
[ INFO ]
[ INFO ] Utterance 9:
[ INFO ] Total time in Infer (HW and SW): 1654.67ms
[ INFO ] Frames in utterance: 368
[ INFO ] Average Infer time per frame: 4.50ms
[ INFO ]
[ INFO ] Output blob name: affinetransform14
[ INFO ] Number scores per frame: 3425
[ INFO ]
[ INFO ] max error: 0.6550579
[ INFO ] avg error: 0.0467643
[ INFO ] avg rms error: 0.0605045
[ INFO ] stdev error: 0.0383914
[ INFO ]
[ INFO ] Total sample time: 39722.60ms
[ INFO ] File result.npz was created!
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool