Model Downloader and other automation tools

Open Model Zoo automation tools contain scripts that automate certain model-related tasks based on configuration files in the models’ directories.

  • Model Downloader: omz_downloader downloads model files from online sources and, if necessary, patches them to make them more usable with Model Optimizer.

  • Model Converter: omz_converter converts the models that are not in the OpenVINO™ IR format into that format using Model Optimizer.

  • Model Quantizer: omz_quantizer quantizes full-precision models in the IR format into low-precision versions using Post-Training Optimization Toolkit.

  • Model Information Dumper: omz_info_dumper prints information about the models in a stable machine-readable format.

  • Datasets’ Data Downloader: omz_data_downloader copies datasets’ data from the installed location.

Please use these tools instead of attempting to parse the configuration files directly. Their format is undocumented and may change in incompatible ways in future releases.

Tip

You can also work with the Model Downloader inside the OpenVINO™ Deep Learning Workbench (DL Workbench). DL Workbench is a platform built upon OpenVINO™ that provides a web-based graphical environment in which you can optimize, fine-tune, analyze, visualize, and compare the performance of deep learning models on various Intel® architecture configurations. In the DL Workbench, you can use most OpenVINO™ toolkit components.

Proceed to an easy installation from Docker to get started.

Installation

Model Downloader and other automation tools can be installed as part of the OpenVINO™ Development Tools Python package or from source if you need the latest changes. To install the tools from the package, go to the OpenVINO™ Development Tools PyPI page and follow the instructions.

To install the tools from source:

  1. Install Python (version 3.6 or higher) and setuptools.

  2. Install openvino-dev Python* package to obtain Model Optimizer and Post-Training Optimization Toolkit:

pip install openvino-dev

Note

The openvino-dev version should match the version of the OMZ tools. For example, if you are using the OMZ tools from the 2021.4.2 release, install openvino-dev==2021.4.2.

  3. Install the tools with the following command:

pip install --upgrade pip
pip install .

Note

On Linux and macOS, you may need to type python3 instead of python. You may also need to install pip; for example, on Ubuntu run sudo apt install python3-pip. If you are using a pip version lower than 21.3, you also need to set the OMZ_ROOT variable: export OMZ_ROOT=<omz_dir>

To convert models from certain frameworks, you may also need to install additional dependencies.

python -m pip install --user -r ./requirements-pytorch.in
python -m pip install --user -r ./requirements-tensorflow.in
python -m pip install --user -r ./requirements-paddle.in

Model Downloader Usage

The basic usage is to run the script like this:

omz_downloader --all

This will download all models. The --all option can be replaced with other filter options to download only a subset of models. See the “Shared options” section.

Model Downloader Starting Parameters

See the “Shared options” section for information on other options accepted by the script.

JSON Progress Report Format

This section documents the format of the progress report produced by the script when the --progress_format=json option is specified.

The report consists of a sequence of events, where each event is represented by a line containing a JSON-encoded object. Each event has a member with the name $type whose value determines the type of the event, as well as which additional members it contains.

The following event types are currently defined:

model_download_begin
    Additional members: model (string), num_files (integer)
    The script started downloading the model named by model. num_files is the number of files that will be downloaded for this model. This event will always be followed by a corresponding model_download_end event.

model_download_end
    Additional members: model (string), successful (boolean)
    The script stopped downloading the model named by model. successful is true if every file was downloaded successfully.

model_file_download_begin
    Additional members: model (string), model_file (string), size (integer)
    The script started downloading the file named by model_file of the model named by model. size is the size of the file in bytes. This event will always occur between the model_download_begin and model_download_end events for the model, and will always be followed by a corresponding model_file_download_end event.

model_file_download_end
    Additional members: model (string), model_file (string), successful (boolean)
    The script stopped downloading the file named by model_file of the model named by model. successful is true if the file was downloaded successfully.

model_file_download_progress
    Additional members: model (string), model_file (string), size (integer)
    The script has downloaded size bytes of the file named by model_file of the model named by model so far. Note that size can decrease in a subsequent event if the download is interrupted and retried. This event will always occur between the model_file_download_begin and model_file_download_end events for the file.

model_postprocessing_begin
    Additional members: model (string)
    The script started post-download processing on the model named by model. This event will always be followed by a corresponding model_postprocessing_end event.

model_postprocessing_end
    Additional members: model (string)
    The script stopped post-download processing on the model named by model.

Additional event types and members may be added in the future.

Tools parsing the machine-readable format should avoid relying on undocumented details. In particular:

  • Tools should not assume that any given event will occur for a given model/file (unless specified otherwise above) or will only occur once.

  • Tools should not assume that events will occur in a certain order beyond the ordering constraints specified above. In particular, when the --jobs option is set to a value greater than 1, event sequences for different files or models may get interleaved.
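A consumer that follows these rules can be sketched as below. The event lines in the sample stream are illustrative (the model and file names are hypothetical); the parser ignores unknown event types and extra members, since both may be added in future releases:

```python
import json

def consume_progress(lines):
    """Build a per-model summary from a --progress_format=json event stream.

    Unknown event types and unrecognized members are ignored, because
    the format may gain both in future releases.
    """
    summary = {}
    for line in lines:
        event = json.loads(line)
        etype = event.get("$type")
        if etype == "model_download_begin":
            summary[event["model"]] = {
                "num_files": event["num_files"],
                "successful": None,
            }
        elif etype == "model_download_end":
            summary.setdefault(event["model"], {})["successful"] = event["successful"]
        # model_file_download_* and model_postprocessing_* events could be
        # handled here as well; unrecognized types fall through harmlessly.
    return summary

# Illustrative stream; real output is one JSON object per line on stdout.
stream = [
    '{"$type": "model_download_begin", "model": "some-model", "num_files": 1}',
    '{"$type": "model_file_download_begin", "model": "some-model", "model_file": "some-model.onnx", "size": 4}',
    '{"$type": "model_file_download_progress", "model": "some-model", "model_file": "some-model.onnx", "size": 4}',
    '{"$type": "model_file_download_end", "model": "some-model", "model_file": "some-model.onnx", "successful": true}',
    '{"$type": "model_download_end", "model": "some-model", "successful": true}',
]
print(consume_progress(stream))
```

Because events for different models may interleave when --jobs is greater than 1, the summary is keyed by model name rather than assuming one model finishes before the next begins.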

Model Converter Usage

The basic usage is to run the script like this:

omz_converter --all

This will convert all models into the OpenVINO™ IR format. Models that are already in that format are skipped. Models in PyTorch format will be converted to ONNX format first.

The --all option can be replaced with other filter options to convert only a subset of models. See the “Shared options” section.

Model Converter Starting Parameters

The script will attempt to locate Model Optimizer using several methods:

  1. If the --mo option was specified, then its value will be used as the path to the script to run:

    omz_converter --all --mo my/openvino/path/model_optimizer/mo.py
  2. Otherwise, if the selected Python executable can find the mo entry point, then it will be used.

  3. Otherwise, if the OpenVINO toolkit’s setupvars.sh / setupvars.bat script has been executed, the environment variables set by that script will be used to locate Model Optimizer within the toolkit.

  4. Otherwise, the script will fail.
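The fallback chain above can be sketched as follows. This is a simplified illustration, not the tool's actual code: the mo entry point is real, and INTEL_OPENVINO_DIR is the variable set by setupvars, but the exact path joined under the toolkit directory here is an assumption:

```python
import os
import shutil

def find_model_optimizer(mo_arg=None):
    """Mimic the converter's fallback chain for locating Model Optimizer."""
    # 1. An explicit --mo argument wins.
    if mo_arg is not None:
        return mo_arg
    # 2. Otherwise, use the `mo` entry point if the Python environment
    #    provides one (installed by the openvino-dev package).
    entry_point = shutil.which("mo")
    if entry_point is not None:
        return entry_point
    # 3. Otherwise, fall back to the toolkit location exported by
    #    setupvars.sh / setupvars.bat (the sub-path shown is illustrative).
    toolkit_dir = os.environ.get("INTEL_OPENVINO_DIR")
    if toolkit_dir is not None:
        return os.path.join(toolkit_dir, "tools", "mo", "mo.py")
    # 4. Otherwise, fail.
    raise RuntimeError("Model Optimizer not found")
```

The same pattern applies to the Model Quantizer's search for Post-Training Optimization Toolkit, with --pot and the pot entry point in place of --mo and mo.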

Model Quantizer Usage

Before you run the model quantizer, you must prepare a directory with the datasets required for the quantization process. This directory will be referred to as <DATASET_DIR> below. You can find more detailed information about dataset preparation in the Dataset Preparation Guide.

The basic usage is to run the script like this:

omz_quantizer --all --dataset_dir <DATASET_DIR>

This will quantize all models for which quantization is supported. Other models are ignored.

The --all option can be replaced with other filter options to quantize only a subset of models. See the “Shared options” section.

The script will attempt to locate Post-Training Optimization Toolkit using several methods:

  1. If the --pot option was specified, then its value will be used as the path to the script to run:

    omz_quantizer --all --dataset_dir <DATASET_DIR> --pot my/openvino/path/post_training_optimization_toolkit/main.py
  2. Otherwise, if the selected Python executable can find the pot entry point, then it will be used.

  3. Otherwise, if the OpenVINO toolkit’s setupvars.sh / setupvars.bat script has been executed, the environment variables set by that script will be used to locate Post-Training Optimization Toolkit within the OpenVINO toolkit.

  4. Otherwise, the script will fail.

Model Information Dumper Usage

The basic usage is to run the script like this:

omz_info_dumper --all

The other options accepted by the script are described in the “Shared options” section.

This will print information about all models to standard output. The script’s output is a JSON array, each element of which is a JSON object describing a single model. Each such object has the following keys:

Shared Options

There are certain options that all tools accept.

-h / --help can be used to print a help message:

omz_TOOL --help

There are several mutually exclusive filter options that select the models the tool will process:

To see the available models, you can use the --print_all option. When this option is specified, the tool will print all model names defined in the configuration file and exit:

$ omz_TOOL --print_all
action-recognition-0001-decoder
action-recognition-0001-encoder
age-gender-recognition-retail-0013
driver-action-recognition-adas-0002-decoder
driver-action-recognition-adas-0002-encoder
emotions-recognition-retail-0003
face-detection-adas-0001
face-detection-retail-0004
face-detection-retail-0005
[...]

Either --print_all or one of the filter options must be specified.

Datasets’ Data Downloader Usage

The basic usage is to run the script like this:

omz_data_downloader -o my/output/directory

This will copy datasets’ data from the installed location to the specified directory. If the -o / --output_dir option is not set, the files will be copied into a directory tree rooted in the current directory.

OpenVINO is a trademark of Intel Corporation or its subsidiaries in the U.S. and/or other countries.

Copyright 2018-2019 Intel Corporation

Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.