Generic plugin configuration.

Variables

static constexpr auto YES = "YES"
static constexpr auto NO = "NO"
    Generic boolean values.
static constexpr auto KEY_CPU_THREADS_NUM = "CPU_THREADS_NUM"
    Limits the number of threads that the Inference Engine uses for inference on the CPU.
static constexpr auto KEY_CPU_BIND_THREAD = "CPU_BIND_THREAD"
    The name for the per-thread CPU affinity setting.
static constexpr auto NUMA = "NUMA"
static constexpr auto CPU_THROUGHPUT_NUMA = "CPU_THROUGHPUT_NUMA"
    Optimize CPU execution to maximize throughput.
static constexpr auto CPU_THROUGHPUT_AUTO = "CPU_THROUGHPUT_AUTO"
static constexpr auto KEY_CPU_THROUGHPUT_STREAMS = "CPU_THROUGHPUT_STREAMS"
static constexpr auto GPU_THROUGHPUT_AUTO = "GPU_THROUGHPUT_AUTO"
    Optimize GPU plugin execution to maximize throughput.
static constexpr auto KEY_GPU_THROUGHPUT_STREAMS = "GPU_THROUGHPUT_STREAMS"
static constexpr auto KEY_PERF_COUNT = "PERF_COUNT"
    The name for the performance counters option.
static constexpr auto KEY_DYN_BATCH_LIMIT = "DYN_BATCH_LIMIT"
    The key defines the dynamic limit of batch processing.
static constexpr auto KEY_DYN_BATCH_ENABLED = "DYN_BATCH_ENABLED"
static constexpr auto KEY_DUMP_QUANTIZED_GRAPH_AS_DOT = "DUMP_QUANTIZED_GRAPH_AS_DOT"
static constexpr auto KEY_DUMP_QUANTIZED_GRAPH_AS_IR = "DUMP_QUANTIZED_GRAPH_AS_IR"
static constexpr auto KEY_SINGLE_THREAD = "SINGLE_THREAD"
    The key controls threading inside the Inference Engine.
static constexpr auto KEY_CONFIG_FILE = "CONFIG_FILE"
    This key directs the plugin to load a configuration file.
static constexpr auto KEY_DUMP_KERNELS = "DUMP_KERNELS"
    This key enables dumping of the kernels used by the plugin for custom layers.
static constexpr auto KEY_TUNING_MODE = "TUNING_MODE"
    This key controls performance tuning done or used by the plugin.
static constexpr auto TUNING_CREATE = "TUNING_CREATE"
static constexpr auto TUNING_USE_EXISTING = "TUNING_USE_EXISTING"
static constexpr auto TUNING_DISABLED = "TUNING_DISABLED"
static constexpr auto TUNING_UPDATE = "TUNING_UPDATE"
static constexpr auto TUNING_RETUNE = "TUNING_RETUNE"
static constexpr auto KEY_TUNING_FILE = "TUNING_FILE"
    This key defines the name of the tuning data file to be created/used.
static constexpr auto KEY_LOG_LEVEL = "LOG_LEVEL"
    The key for setting the desired log level.
static constexpr auto LOG_NONE = "LOG_NONE"
static constexpr auto LOG_ERROR = "LOG_ERROR"
static constexpr auto LOG_WARNING = "LOG_WARNING"
static constexpr auto LOG_INFO = "LOG_INFO"
static constexpr auto LOG_DEBUG = "LOG_DEBUG"
static constexpr auto LOG_TRACE = "LOG_TRACE"
static constexpr auto KEY_DEVICE_ID = "DEVICE_ID"
    The key for selecting the device to execute on. Device IDs start from "0" for the first device, "1" for the second device, and so on.
static constexpr auto KEY_EXCLUSIVE_ASYNC_REQUESTS = "EXCLUSIVE_ASYNC_REQUESTS"
    The key for enabling exclusive mode for async requests of different executable networks and the same plugin.
static constexpr auto KEY_DUMP_EXEC_GRAPH_AS_DOT = "DUMP_EXEC_GRAPH_AS_DOT"
    This key enables dumping of the internal primitive graph.
Generic plugin configuration.
static constexpr auto CPU_THROUGHPUT_NUMA = "CPU_THROUGHPUT_NUMA"

Optimize CPU execution to maximize throughput.
It is passed to IInferencePlugin::SetConfig(). As the value of KEY_CPU_THROUGHPUT_STREAMS, this option should be used with CPU_THROUGHPUT_NUMA, CPU_THROUGHPUT_AUTO, or a positive integer giving the requested number of streams.
static constexpr auto GPU_THROUGHPUT_AUTO = "GPU_THROUGHPUT_AUTO"

Optimize GPU plugin execution to maximize throughput.
It is passed to IInferencePlugin::SetConfig(). As the value of KEY_GPU_THROUGHPUT_STREAMS, this option should be used with GPU_THROUGHPUT_AUTO or a positive integer giving the requested number of streams.
static constexpr auto KEY_CONFIG_FILE = "CONFIG_FILE"

This key directs the plugin to load a configuration file.
The value should be a file name with the plugin-specific configuration.
static constexpr auto KEY_CPU_BIND_THREAD = "CPU_BIND_THREAD"

The name for the per-thread CPU affinity setting.
It is passed to IInferencePlugin::SetConfig(). This option should be used with the following values:
- PluginConfigParams::YES - pin threads to cores; best for static benchmarks.
- PluginConfigParams::NUMA - pin threads to NUMA nodes; best for real-life, contended cases. This is a TBB-specific knob and, beyond NO, the only pinning option on Windows*.
- PluginConfigParams::NO - no pinning for CPU inference threads.
All of these settings are ignored if OpenVINO is compiled with OpenMP threading and any affinity-related OpenMP environment variable is set, because affinity is then configured explicitly.
static constexpr auto KEY_DUMP_EXEC_GRAPH_AS_DOT = "DUMP_EXEC_GRAPH_AS_DOT"

This key enables dumping of the internal primitive graph.
It should be passed into the LoadNetwork method to enable dumping of the internal graph of primitives and the corresponding configuration information. The value is the name of the output dot file without extension. The files <dot_file_name>_init.dot and <dot_file_name>_perf.dot will be produced.
static constexpr auto KEY_DUMP_KERNELS = "DUMP_KERNELS"

This key enables dumping of the kernels used by the plugin for custom layers.
This option should be used with the values PluginConfigParams::YES or PluginConfigParams::NO (default).
static constexpr auto KEY_DYN_BATCH_LIMIT = "DYN_BATCH_LIMIT"

The key defines the dynamic limit of batch processing.
The specified value is applied to all subsequent Infer() calls: the Inference Engine processes only the first min(batch_limit, original_batch_size) pictures from the input blob. For example, if the input blob has size 32x3x224x224, then after plugin.SetConfig({KEY_DYN_BATCH_LIMIT, 10}) the Inference Engine primitives process only the leading sub-blobs of size 10x3x224x224. This value can be changed before any Infer() call to specify a new batch limit.
The paired parameter value should be convertible to an integer. Acceptable values:
- -1 - do not limit batch processing.
- >0 - direct value of the limit; the batch size to process is min(new batch_limit, original_batch).
static constexpr auto KEY_EXCLUSIVE_ASYNC_REQUESTS = "EXCLUSIVE_ASYNC_REQUESTS"

The key for enabling exclusive mode for async requests of different executable networks and the same plugin.
Sometimes it is necessary to avoid oversubscription by requests that share the same device in parallel. For example, there are two task executors for the CPU device: one in the Hetero plugin and another in the pure CPU plugin. Executing both in parallel may lead to oversubscription and suboptimal CPU usage; it is more efficient to run the corresponding tasks one by one via a single executor. By default, the option is set to YES for hetero cases and to NO for conventional (single-plugin) cases. Note that setting YES disables the CPU streams feature (see another config key in this file).
static constexpr auto KEY_LOG_LEVEL = "LOG_LEVEL"

The key for setting the desired log level.
This option should be used with the values PluginConfigParams::LOG_NONE (default), PluginConfigParams::LOG_ERROR, PluginConfigParams::LOG_WARNING, PluginConfigParams::LOG_INFO, PluginConfigParams::LOG_DEBUG, or PluginConfigParams::LOG_TRACE.
static constexpr auto KEY_PERF_COUNT = "PERF_COUNT"

The name for the performance counters option.
It is passed to IInferencePlugin::SetConfig(). This option should be used with the values PluginConfigParams::YES or PluginConfigParams::NO.
static constexpr auto KEY_SINGLE_THREAD = "SINGLE_THREAD"

The key controls threading inside the Inference Engine.
It is passed to IInferencePlugin::SetConfig(). This option should be used with the values PluginConfigParams::YES or PluginConfigParams::NO.
static constexpr auto KEY_TUNING_MODE = "TUNING_MODE"

This key controls performance tuning done or used by the plugin.
This option should be used with the following values:
- PluginConfigParams::TUNING_DISABLED (default)
- PluginConfigParams::TUNING_USE_EXISTING - use existing data from the tuning file
- PluginConfigParams::TUNING_CREATE - create tuning data for parameters not present in the tuning file
- PluginConfigParams::TUNING_UPDATE - perform non-tuning updates, such as removal of invalid/deprecated data
- PluginConfigParams::TUNING_RETUNE - create tuning data for all parameters, even if already present
For the values TUNING_CREATE and TUNING_RETUNE, the file will be created if it does not exist.