How to configure OpenVINO™ launcher

To enable the OpenVINO™ launcher, you need to add framework: dlsdk to the launchers section of your configuration file and provide the required parameters (the full configuration example at the end of this section shows the most common ones).
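For instance, a minimal launcher entry might look like the following sketch (CPU here is just one possible device):

launchers:
  - framework: dlsdk   # selects the OpenVINO (Inference Engine) launcher
    device: CPU        # target device for inference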

Note: You can generate an executable blob using compile_tool. Before evaluating an executable blob, please make sure that the selected device supports it.

The launcher may optionally accept model parameters in the source framework format, which will be converted to Inference Engine IR using Model Optimizer. If you want to use Model Optimizer for model conversion, please view the Model Optimizer Developer Guide. You can provide the model in its source framework format, for example caffe_model and caffe_weights for Caffe, as shown in the example below.

If you want to specify additional parameters for model conversion (data_type, input_shape and so on), you can use mo_params for arguments with values and mo_flags for positional arguments like legacy_mxnet_model. The full list of supported parameters can be found in the Model Optimizer Developer Guide.
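For example, a sketch of passing conversion arguments (the values below are illustrative; data_type, input_shape and reverse_input_channels correspond to Model Optimizer command line options):

mo_params:
  data_type: FP16                   # passed to Model Optimizer as --data_type FP16
  input_shape: "(1, 3, 227, 227)"   # passed as --input_shape
mo_flags:
  - reverse_input_channels          # flag-style option passed without a value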

The model will be converted before every evaluation. You can provide converted_model_dir to save the converted model in a specific folder; otherwise, converted models will be saved in the path provided via the -C command line argument or in the source model directory.
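For example, a sketch assuming converted_model_dir is set alongside the other launcher options (the directory name is purely illustrative):

launchers:
  - framework: dlsdk
    converted_model_dir: converted_models   # converted IR files will be stored here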

The launcher determines the batch size from the model intermediate representation (IR). If you want to use a specific batch size for inference, please provide a model with the required batch or convert it using the corresponding parameter in mo_params.
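For instance, to run conversion with batch size 8 (a sketch; batch corresponds to the Model Optimizer --batch option):

mo_params:
  batch: 8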

Additionally, you can provide device-specific parameters, such as cpu_extensions for a CPU extensions library (shown in the example below).

To set device-specific flags, you can use the -dc or --device_config command line option. The device config should be a YML file containing a dictionary, where keys are plugin configuration keys and values are their respective values. Each supported device has its own set of supported configuration parameters, which can be found on the device page in the Inference Engine developer guide.

Note: Since OpenVINO 2020.4, models will be executed in bfloat16 precision by default on platforms with native bfloat16 support. To disable this behaviour, you need to use device_config with the following configuration:

ENFORCE_BF16: "NO"

For example, a complete device config file for CPU might look like the sketch below (CPU_THREADS_NUM is a standard Inference Engine CPU plugin key; keys for other devices differ):
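CPU_THREADS_NUM: "4"   # illustrative: limit the number of CPU inference threads
ENFORCE_BF16: "NO"     # disable native bfloat16 execution (see the note above)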

Besides that, you can launch the model in async mode by enabling the async_mode option and optionally providing the number of infer requests (num_requests) to be used in the evaluation process. By default, if num_requests is not provided or is set to AUTO, the number of requests is assigned automatically for the specific device. For multi-device configurations, async mode is always used. You can provide the number of requests for each device as part of the device specification: MULTI:device_1(num_req_1),device_2(num_req_2), or in the num_requests config section (in this case, use a comma-separated list of integers, or a single value if the number of requests is the same for all devices).
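For example, a sketch of an async multi-device configuration (device names and request counts are illustrative):

launchers:
  - framework: dlsdk
    device: MULTI:CPU(4),GPU(8)   # 4 infer requests on CPU, 8 on GPU
    async_mode: True              # enable asynchronous execution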

Note: Not all models support async execution. In cases when the evaluation cannot be run in async mode, the inference will be switched to sync.

Specifying model inputs in config

If your model has several inputs, you should provide a list of input layers in the launcher config section using the inputs key. Each input description should contain at minimum the input layer name.
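A minimal sketch (the layer name data is illustrative; additional fields may be supported depending on the input type):

launchers:
  - framework: dlsdk
    inputs:
      - name: data   # input layer name in the network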

OpenVINO™ launcher config example:

launchers:
  - framework: dlsdk
    device: HETERO:FPGA,CPU
    caffe_model: path_to_model/alexnet.prototxt
    caffe_weights: path_to_weights/alexnet.caffemodel
    adapter: classification
    mo_params:
      batch: 4
    mo_flags:
      - reverse_input_channels
    cpu_extensions: cpu_extentions_avx512.so