Optimize Preprocessing Computation

Model Optimizer performs certain preprocessing on a model. It is possible to optimize this step and improve the time to first inference. To do so, follow the tips below:

  • Image mean/scale parameters

    Make sure to use the input image mean/scale parameters (--scale and --mean_values) with the Model Optimizer when you need preprocessing. These options let the tool bake the preprocessing into the IR, where it gets accelerated by the Inference Engine (see the first example after this list).

  • RGB vs. BGR inputs

    If, for example, your network expects RGB input, the Model Optimizer can swap the channels in the first convolution using the --reverse_input_channels command-line option, so you do not need to convert your input to RGB every time you get a BGR image, for example, from OpenCV* (see the second example after this list).

  • Larger batch size

    Notice that devices like the GPU perform better with a larger batch size. It is also possible to set the batch size at runtime using the Inference Engine [ShapeInference feature](../IE_DG/ShapeInference.md); a conversion-time example follows this list.

  • Resulting IR precision

    The resulting IR precision, for instance FP16 or FP32, directly affects performance. Since the CPU now supports FP16 (while internally upscaling to FP32 anyway) and FP16 is the best precision for a GPU target, you may want to always convert models to FP16. Notice that this is the only precision that the Intel Movidius Myriad 2 and Intel Myriad X VPUs support (see the last example after this list).
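
The command lines below sketch the tips above. First, baking mean/scale preprocessing into the IR; model.onnx and the numeric values are placeholders, so substitute the mean/scale your model was actually trained with:

```sh
# Subtract the per-channel means, then divide by the scale factor,
# inside the IR itself (values here are illustrative only)
python3 mo.py --input_model model.onnx \
              --mean_values [123.68,116.78,103.94] \
              --scale 58.8
```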
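
Similarly, a minimal sketch of embedding the RGB/BGR swap at conversion time (model.onnx is again a placeholder):

```sh
# Reverse the channel order in the first convolution's weights, so
# BGR frames (e.g. from OpenCV) can feed an RGB-trained model directly
python3 mo.py --input_model model.onnx --reverse_input_channels
```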
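
For the batch size tip, besides reshaping at runtime, a batch size can also be baked into the IR at conversion time; a sketch, with the value 8 chosen arbitrarily:

```sh
# Produce an IR with a fixed batch size of 8; GPU-friendly, while the
# batch can still be changed at runtime via the ShapeInference feature
python3 mo.py --input_model model.onnx --batch 8
```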
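
Finally, a sketch of requesting an FP16 IR (model.onnx is a placeholder):

```sh
# Store the weights in FP16: best for GPU, required for the Myriad VPUs;
# the CPU plugin accepts it too (upscaling to FP32 internally)
python3 mo.py --input_model model.onnx --data_type FP16
```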