# How to configure OpenVINO™ launcher

To enable the OpenVINO™ launcher, add `framework: dlsdk` to the `launchers` section of your configuration file and provide the following parameters:
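As a minimal sketch (paths and device are placeholders; the field names follow the full example at the end of this section, with `model`/`weights` assumed for a ready-made Inference Engine IR):

```yaml
launchers:
  - framework: dlsdk          # selects the OpenVINO™ launcher
    device: CPU               # target inference device
    model: path_to_model/model.xml     # IR network description (assumed field name)
    weights: path_to_model/model.bin   # IR weights (assumed field name)
    adapter: classification   # converts raw model output to a common representation
```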

The launcher may optionally accept model parameters in a source framework format, which will be converted to Inference Engine IR using Model Optimizer. If you want to use Model Optimizer for model conversion, please see the Model Optimizer Developer Guide. You can provide:
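The concrete parameter list is not preserved here; as an illustration, a Caffe model is supplied as a description/weights pair (the same fields used in the full example at the end of this section):

```yaml
launchers:
  - framework: dlsdk
    device: CPU
    # source-framework model; converted to IR by Model Optimizer before evaluation
    caffe_model: path_to_model/alexnet.prototxt
    caffe_weights: path_to_weights/alexnet.caffemodel
    adapter: classification
```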

If you want to specify additional parameters for model conversion (`data_type`, `input_shape` and so on), you can use `mo_params` for arguments with values and `mo_flags` for positional arguments like `legacy_mxnet_model`. The full list of supported parameters can be found in the Model Optimizer Developer Guide.
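For example, conversion arguments with values go under `mo_params`, while flags go under `mo_flags` as a list (the values here are illustrative):

```yaml
mo_params:
  data_type: FP16                  # conversion precision
  input_shape: "(1, 3, 227, 227)"  # fix the input shape for conversion
mo_flags:
  - reverse_input_channels         # flag passed to Model Optimizer without a value
```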

The model will be converted before each evaluation. You can provide `converted_model_dir` to save the converted model in a specific folder; otherwise, converted models are saved in the path provided via the `-C` command line argument or in the source model directory.
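`converted_model_dir` sits alongside the source model parameters in the launcher entry, e.g. (paths are placeholders):

```yaml
launchers:
  - framework: dlsdk
    device: CPU
    caffe_model: path_to_model/alexnet.prototxt
    caffe_weights: path_to_weights/alexnet.caffemodel
    adapter: classification
    # save the IR produced by Model Optimizer here instead of the
    # -C directory or the source model directory
    converted_model_dir: path_to_converted_models
```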

The launcher determines the batch size from the model intermediate representation (IR). If you want to use a specific batch size for inference, provide a model with the required batch or convert it using the corresponding parameter in `mo_params`.

Additionally, you can provide device-specific parameters:
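For instance, a library with custom CPU layer implementations can be attached for CPU inference, and a description of custom GPU kernels for GPU inference (file names are illustrative, and `gpu_extensions` is assumed by analogy with the `cpu_extensions` field shown in the example below):

```yaml
# CPU: path to a shared library with custom layer implementations
cpu_extensions: libcpu_extension.so
# GPU: path to an .xml describing custom GPU kernels (assumed field name)
gpu_extensions: gpu_extensions.xml
```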

OpenVINO™ launcher config example:

```yaml
launchers:
  - framework: dlsdk
    device: HETERO:FPGA,CPU
    caffe_model: path_to_model/alexnet.prototxt
    caffe_weights: path_to_weights/alexnet.caffemodel
    adapter: classification
    mo_params:
      batch: 4
    mo_flags:
      - reverse_input_channels
    cpu_extensions: cpu_extentions_avx512.so
```