Post-training Optimization Toolkit API

The toolkit provides an API for applying its optimization algorithms. This means that the user embeds the optimization code into their own inference pipeline, which is usually a model validation script for the full-precision model. This document describes a sample that shows how to do this embedding for the ImageNet classification task.

In order to use the optimization features, one should implement the interfaces required for the optimization process, namely dataset loading and accuracy metric calculation (see the sketch below).
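
A minimal sketch of such implementations is shown below, assuming the compression.api module with the DataLoader and Metric base classes shipped with this version of the toolkit. The ImageNetDataLoader class, its internal _images/_labels fields, and the _read_image helper are hypothetical placeholders for the user's own data-handling code, not part of the toolkit API.

    import numpy as np

    from compression.api import DataLoader, Metric  # base classes provided by the toolkit

    class ImageNetDataLoader(DataLoader):
        """Feeds annotated ImageNet samples to the optimization engine (illustrative sketch)."""

        def __init__(self, config):
            super().__init__(config)
            # Hypothetical: these lists would be filled from the annotation file
            # and image directory described in `config`.
            self._images = []
            self._labels = []

        def __len__(self):
            return len(self._images)

        def __getitem__(self, index):
            # Returns a single sample as ((index, label), preprocessed_image).
            if index >= len(self):
                raise IndexError
            annotation = (index, self._labels[index])
            return annotation, self._read_image(self._images[index])

        def _read_image(self, path):
            # Hypothetical helper: load and preprocess one image; the real code
            # must apply the preprocessing the model expects.
            raise NotImplementedError


    class Accuracy(Metric):
        """Top-1 accuracy metric (simplified sketch)."""

        def __init__(self):
            super().__init__()
            self._name = 'accuracy@top1'
            self._matches = []

        @property
        def value(self):
            # Accuracy on the last processed batch.
            return {self._name: self._matches[-1]}

        @property
        def avg_value(self):
            # Average accuracy over all processed batches.
            return {self._name: float(np.mean(self._matches))}

        def update(self, output, target):
            # `output` is a list of model output blobs, `target` holds the labels.
            predictions = np.argmax(output[0], axis=1)
            self._matches.append(float(np.mean(predictions == np.asarray(target))))

        def reset(self):
            self._matches = []

        def get_attributes(self):
            return {self._name: {'direction': 'higher-better', 'type': 'accuracy'}}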

The sample demonstrates quantization of a classification model using the API implementation described above. It is designed for the classification task on the ImageNet dataset and works only for models with TensorFlow* preprocessing. The sample implementation is available in the sample folder.
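
The core of the sample reduces to roughly the following sequence of API calls. This is a sketch rather than the exact sample code: it assumes the compression package layout of this POT version, the DefaultQuantization parameters, configuration keys, and placeholder paths are illustrative, and ImageNetDataLoader/Accuracy refer to the hypothetical classes sketched above.

    from addict import Dict

    from compression.engines.ie_engine import IEEngine
    from compression.graph import load_model, save_model
    from compression.pipeline.initializer import create_pipeline

    # Illustrative configuration; in the sample, paths and dataset settings
    # come from the command-line arguments.
    model_config = Dict({
        'model_name': 'mobilenet-v2-1.0-224',
        'model': '<PATH_TO_IR_XML>',
        'weights': '<PATH_TO_IR_BIN>',
    })
    engine_config = Dict({'device': 'CPU'})
    dataset_config = Dict({
        'data_source': '<IMAGENET_IMAGES>',
        'annotation_file': '<IMAGENET_ANNOTATION_FILE>',
    })
    algorithms = [Dict({
        'name': 'DefaultQuantization',
        'params': {'target_device': 'CPU', 'preset': 'performance', 'stat_subset_size': 300},
    })]

    # 1. Load the full-precision model.
    model = load_model(model_config)

    # 2. Create the user-defined data loader and accuracy metric (see the sketch above).
    data_loader = ImageNetDataLoader(dataset_config)
    metric = Accuracy()

    # 3. Wrap the Inference Engine and build the optimization pipeline.
    engine = IEEngine(engine_config, data_loader, metric)
    pipeline = create_pipeline(algorithms, engine)

    # 4. Run quantization and save the resulting model.
    compressed_model = pipeline.run(model)
    save_model(compressed_model, '<SAVE_DIR>')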

How to Run the Sample

In the instructions below, the Post-Training Optimization Tool directory <INSTALL_DIR>/deployment_tools/tools/post_training_optimization_toolkit is referred to as <POT_DIR>. <INSTALL_DIR> is the directory where Intel® Distribution of OpenVINO™ toolkit is installed.

  1. Move to the Model Downloader folder:
    cd <POT_DIR>/libs/open_model_zoo/tools/downloader
  2. Launch the downloader tool to download a model with TensorFlow* preprocessing from the Open Model Zoo repository. The sample was tested with the mobilenet-v2-1.0-224 model.
    python3 downloader.py --name <MODEL_NAME>
  3. Launch the converter tool to generate the IRv10 model:
    python3 converter.py --name <MODEL_NAME> --mo <PATH_TO_MODEL_OPTIMIZER>/mo.py
  4. Move to the sample folder and launch the sample script:
    cd <POT_DIR>/sample
    python3 sample.py -m <PATH_TO_IR_XML> -a <IMAGENET_ANNOTATION_FILE> -d <IMAGENET_IMAGES>
    Optional: you can specify the weights file directly using the -w/--weights option. An example of the full command sequence is shown after this list.
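
For the mobilenet-v2-1.0-224 model, the full sequence might look as follows. The IR path below is illustrative: the converter tool prints the actual location of the generated .xml and .bin files, which should be substituted here.

    cd <POT_DIR>/libs/open_model_zoo/tools/downloader
    python3 downloader.py --name mobilenet-v2-1.0-224
    python3 converter.py --name mobilenet-v2-1.0-224 --mo <PATH_TO_MODEL_OPTIMIZER>/mo.py
    cd <POT_DIR>/sample
    python3 sample.py \
        -m <POT_DIR>/libs/open_model_zoo/tools/downloader/public/mobilenet-v2-1.0-224/FP32/mobilenet-v2-1.0-224.xml \
        -a <IMAGENET_ANNOTATION_FILE> -d <IMAGENET_IMAGES>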

WARNING: The sample works with a predefined central crop and resize. In other words, it is suitable only for models with TensorFlow*-style preprocessing.