namespace InferenceEngine::PluginConfigParams
Overview
Generic plugin configuration.
namespace PluginConfigParams {
// global variables
static constexpr auto KEY_MODEL_PRIORITY = "MODEL_PRIORITY";
static constexpr auto MODEL_PRIORITY_HIGH = "MODEL_PRIORITY_HIGH";
static constexpr auto MODEL_PRIORITY_MED = "MODEL_PRIORITY_MED";
static constexpr auto MODEL_PRIORITY_LOW = "MODEL_PRIORITY_LOW";
static constexpr auto KEY_PERFORMANCE_HINT = "PERFORMANCE_HINT";
static constexpr auto LATENCY = "LATENCY";
static constexpr auto THROUGHPUT = "THROUGHPUT";
static constexpr auto UNDEFINED = "UNDEFINED";
static constexpr auto CUMULATIVE_THROUGHPUT = "CUMULATIVE_THROUGHPUT";
static constexpr auto KEY_PERFORMANCE_HINT_NUM_REQUESTS = "PERFORMANCE_HINT_NUM_REQUESTS";
static constexpr auto KEY_ALLOW_AUTO_BATCHING = "ALLOW_AUTO_BATCHING";
static constexpr auto YES = "YES";
static constexpr auto NO = "NO";
static constexpr auto KEY_AUTO_BATCH_DEVICE_CONFIG = "AUTO_BATCH_DEVICE_CONFIG";
static constexpr auto KEY_AUTO_BATCH_TIMEOUT = "AUTO_BATCH_TIMEOUT";
static constexpr auto KEY_CPU_THREADS_NUM = "CPU_THREADS_NUM";
static constexpr auto KEY_CPU_BIND_THREAD = "CPU_BIND_THREAD";
static constexpr auto NUMA = "NUMA";
static constexpr auto HYBRID_AWARE = "HYBRID_AWARE";
static constexpr auto KEY_CPU_THROUGHPUT_STREAMS = "CPU_THROUGHPUT_STREAMS";
static constexpr auto CPU_THROUGHPUT_NUMA = "CPU_THROUGHPUT_NUMA";
static constexpr auto CPU_THROUGHPUT_AUTO = "CPU_THROUGHPUT_AUTO";
static constexpr auto KEY_PERF_COUNT = "PERF_COUNT";
static constexpr auto KEY_DYN_BATCH_LIMIT = "DYN_BATCH_LIMIT";
static constexpr auto KEY_DYN_BATCH_ENABLED = "DYN_BATCH_ENABLED";
static constexpr auto KEY_CONFIG_FILE = "CONFIG_FILE";
static constexpr auto KEY_LOG_LEVEL = "LOG_LEVEL";
static constexpr auto LOG_NONE = "LOG_NONE";
static constexpr auto LOG_ERROR = "LOG_ERROR";
static constexpr auto LOG_WARNING = "LOG_WARNING";
static constexpr auto LOG_INFO = "LOG_INFO";
static constexpr auto LOG_DEBUG = "LOG_DEBUG";
static constexpr auto LOG_TRACE = "LOG_TRACE";
static constexpr auto KEY_DEVICE_ID = "DEVICE_ID";
static constexpr auto KEY_EXCLUSIVE_ASYNC_REQUESTS = "EXCLUSIVE_ASYNC_REQUESTS";
static constexpr auto KEY_DUMP_EXEC_GRAPH_AS_DOT = "DUMP_EXEC_GRAPH_AS_DOT";
static constexpr auto KEY_ENFORCE_BF16 = "ENFORCE_BF16";
static constexpr auto KEY_CACHE_DIR = "CACHE_DIR";
static constexpr auto KEY_FORCE_TBB_TERMINATE = "FORCE_TBB_TERMINATE";
} // namespace PluginConfigParams
Detailed Documentation
Generic plugin configuration.
Global Variables
static constexpr auto KEY_MODEL_PRIORITY = "MODEL_PRIORITY"
(Optional) config key that defines which model should be provided with the more performant bounded resource first. It provides three priority levels: High, Medium, and Low. The default value is Medium.
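For example (a minimal sketch, assuming an InferenceEngine::Core instance named ie; the AUTO device name is illustrative):
ie.SetConfig({{CONFIG_KEY(MODEL_PRIORITY), CONFIG_VALUE(MODEL_PRIORITY_HIGH)}}, "AUTO"); // give this model the performant resources first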
static constexpr auto KEY_PERFORMANCE_HINT = "PERFORMANCE_HINT"
High-level OpenVINO Performance Hints. Unlike low-level config keys, which are individual (per-device), the hints are something that every device accepts and turns into device-specific settings.
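For instance, a latency-oriented hint might be set as follows (a sketch, assuming an InferenceEngine::Core instance named ie):
ie.SetConfig({{CONFIG_KEY(PERFORMANCE_HINT), CONFIG_VALUE(LATENCY)}}, "CPU"); // let the device derive its own latency-optimal settings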
static constexpr auto KEY_PERFORMANCE_HINT_NUM_REQUESTS = "PERFORMANCE_HINT_NUM_REQUESTS"
(Optional) config key that backs the (above) Performance Hints by giving additional information on how many inference requests the application will be keeping in flight. Usually this value comes from the actual use case (e.g. the number of video cameras or other sources of inputs).
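For example, an application feeding four camera streams might hint accordingly (the value "4" is illustrative):
ie.SetConfig({{CONFIG_KEY(PERFORMANCE_HINT_NUM_REQUESTS), "4"}}, "CPU"); // the app keeps 4 requests in flight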
static constexpr auto KEY_ALLOW_AUTO_BATCHING = "ALLOW_AUTO_BATCHING"
(Optional) config key that governs Auto-Batching (with the YES/NO values below).
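Auto-Batching can be opted out of explicitly, for example (a sketch assuming an InferenceEngine::Core instance named ie):
ie.SetConfig({{CONFIG_KEY(ALLOW_AUTO_BATCHING), CONFIG_VALUE(NO)}}); // opt this application out of Auto-Batching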
static constexpr auto YES = "YES"
Generic boolean values.
static constexpr auto KEY_AUTO_BATCH_DEVICE_CONFIG = "AUTO_BATCH_DEVICE_CONFIG"
Auto-batching configuration, string for the device + batch size, e.g. “GPU(4)”.
static constexpr auto KEY_AUTO_BATCH_TIMEOUT = "AUTO_BATCH_TIMEOUT"
Auto-batching configuration: string with timeout (in ms), e.g. “100”.
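A combined sketch for the two auto-batching keys; the "BATCH" device name and the values are assumptions for illustration:
ie.SetConfig({{CONFIG_KEY(AUTO_BATCH_DEVICE_CONFIG), "GPU(4)"}}, "BATCH"); // batch size 4 on the GPU
ie.SetConfig({{CONFIG_KEY(AUTO_BATCH_TIMEOUT), "100"}}, "BATCH"); // wait up to 100 ms to collect a full batch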
static constexpr auto KEY_CPU_THREADS_NUM = "CPU_THREADS_NUM"
Limits the number of threads that are used by Inference Engine for inference on the CPU.
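A sketch capping the CPU inference threads (the value "4" is illustrative):
ie.SetConfig({{CONFIG_KEY(CPU_THREADS_NUM), "4"}}, "CPU"); // use at most 4 CPU inference threads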
static constexpr auto KEY_CPU_BIND_THREAD = "CPU_BIND_THREAD"
The name for setting CPU affinity per thread option.
It is passed to Core::SetConfig(); this option should be used with the following values:
PluginConfigParams::NO - no pinning for CPU inference threads
PluginConfigParams::YES - pinning threads to cores; this is the default on conventional CPUs and works best for static benchmarks
The following options are implemented only for TBB as the threading option:
PluginConfigParams::NUMA - pinning threads to NUMA nodes, best for real-life, contended cases; on Windows and macOS this option behaves as YES
PluginConfigParams::HYBRID_AWARE - let the runtime pin threads to core types, e.g. prefer the “big” cores for latency tasks; this is the default on hybrid CPUs
Also, these settings are ignored if OpenVINO is compiled with OpenMP and any affinity-related OpenMP environment variable is set (as affinity is then configured explicitly).
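For example, NUMA-aware pinning might be requested as follows (a sketch assuming TBB is the threading backend):
ie.SetConfig({{CONFIG_KEY(CPU_BIND_THREAD), CONFIG_VALUE(NUMA)}}, "CPU"); // pin inference threads to NUMA nodes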
static constexpr auto KEY_CPU_THROUGHPUT_STREAMS = "CPU_THROUGHPUT_STREAMS"
Optimize CPU execution to maximize throughput.
It is passed to Core::SetConfig(); this option should be used with the following values:
CPU_THROUGHPUT_NUMA creates as many streams as needed to accommodate NUMA and avoid the associated penalties
CPU_THROUGHPUT_AUTO creates the bare minimum of streams to improve performance; this is the most portable option if you have no insight into how many cores your target machine will have (and what the optimal number of streams is)
Finally, specifying a positive integer value creates the requested number of streams (see the sketch after this list).
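A sketch requesting a specific number of CPU streams (the value "2" is illustrative):
ie.SetConfig({{CONFIG_KEY(CPU_THROUGHPUT_STREAMS), "2"}}, "CPU"); // two parallel execution streams
Alternatively, CONFIG_VALUE(CPU_THROUGHPUT_AUTO) may be passed instead of a number to let the runtime pick a portable minimum.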
static constexpr auto KEY_PERF_COUNT = "PERF_COUNT"
The name for setting performance counters option.
It is passed to Core::SetConfig(); this option should be used with the values PluginConfigParams::YES or PluginConfigParams::NO.
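A sketch enabling performance counters, which the per-layer profiling statistics are then collected under:
ie.SetConfig({{CONFIG_KEY(PERF_COUNT), CONFIG_VALUE(YES)}}, "CPU"); // collect per-layer profiling info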
static constexpr auto KEY_DYN_BATCH_LIMIT = "DYN_BATCH_LIMIT"
The key defines dynamic limit of batch processing.
The specified value is applied to all following Infer() calls. Inference Engine processes the first min(batch_limit, original_batch_size) pictures from the input blob. For example, if the input blob has size 32x3x224x224, then after applying plugin.SetConfig({{KEY_DYN_BATCH_LIMIT, "10"}}) Inference Engine primitives process only the beginning sub-blob of size 10x3x224x224. This value can be changed before any Infer() call to specify a new batch limit.
The paired parameter value should be convertible to an integer. Acceptable values:
-1 - do not limit batch processing
>0 - direct value of the limit; the batch size to process is min(new batch_limit, original_batch)
static constexpr auto KEY_DYN_BATCH_ENABLED = "DYN_BATCH_ENABLED"
The key toggles whether dynamic batching is enabled.
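A combined sketch for the two dynamic-batch keys; the YES/NO semantics for KEY_DYN_BATCH_ENABLED are an assumption based on the generic boolean values above:
ie.SetConfig({{CONFIG_KEY(DYN_BATCH_ENABLED), CONFIG_VALUE(YES)}}, "CPU"); // turn dynamic batching on
ie.SetConfig({{CONFIG_KEY(DYN_BATCH_LIMIT), "10"}}, "CPU"); // process at most the first 10 items of each batch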
static constexpr auto KEY_CONFIG_FILE = "CONFIG_FILE"
This key directs the plugin to load a configuration file.
The value should be a file name with the plugin-specific configuration.
static constexpr auto KEY_LOG_LEVEL = "LOG_LEVEL"
The key for setting the desired log level.
This option should be used with the values: PluginConfigParams::LOG_NONE (default), PluginConfigParams::LOG_ERROR, PluginConfigParams::LOG_WARNING, PluginConfigParams::LOG_INFO, PluginConfigParams::LOG_DEBUG, or PluginConfigParams::LOG_TRACE
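A sketch raising the log verbosity (the device name is illustrative):
ie.SetConfig({{CONFIG_KEY(LOG_LEVEL), CONFIG_VALUE(LOG_DEBUG)}}, "MULTI"); // verbose plugin logging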
static constexpr auto KEY_DEVICE_ID = "DEVICE_ID"
The key for setting the required device to execute on. Values: device IDs starting from “0” (first device), “1” (second device), etc.
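A sketch selecting the second device of a given type (assuming an InferenceEngine::Core instance named ie):
ie.SetConfig({{CONFIG_KEY(DEVICE_ID), "1"}}, "GPU"); // run on the second GPU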
static constexpr auto KEY_EXCLUSIVE_ASYNC_REQUESTS = "EXCLUSIVE_ASYNC_REQUESTS"
The key for enabling exclusive mode for async requests of different executable networks and the same plugin.
Sometimes it is necessary to avoid oversubscription when requests share the same device in parallel. For example, there are two task executors for the CPU device: one in the Hetero plugin, another in the pure CPU plugin. Executing both in parallel may lead to oversubscription and suboptimal CPU usage; it is more efficient to run the corresponding tasks one by one via a single executor. By default, the option is set to YES for hetero cases and to NO for conventional (single-plugin) cases. Notice that setting YES disables the CPU streams feature (see another config key in this file).
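A sketch forcing exclusive execution of async requests (note the CPU streams trade-off described above):
ie.SetConfig({{CONFIG_KEY(EXCLUSIVE_ASYNC_REQUESTS), CONFIG_VALUE(YES)}}, "CPU"); // serialize async requests; disables CPU streams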
static constexpr auto KEY_DUMP_EXEC_GRAPH_AS_DOT = "DUMP_EXEC_GRAPH_AS_DOT"
This key enables dumping of the internal primitive graph.
Deprecated: use the InferenceEngine::ExecutableNetwork::GetExecGraphInfo::serialize method instead.
Should be passed into the LoadNetwork method to enable dumping of the internal graph of primitives and the corresponding configuration information. The value is the name of the output dot file without extension. The files <dot_file_name>_init.dot and <dot_file_name>_perf.dot will be produced.
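Since the key is deprecated, the recommended route is serializing the execution graph directly (a sketch assuming an ExecutableNetwork named executable_network; the file names are illustrative):
auto execGraph = executable_network.GetExecGraphInfo(); // returns a CNNNetwork with the runtime graph
execGraph.serialize("exec_graph.xml", "exec_graph.bin"); // dump it to disk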
static constexpr auto KEY_ENFORCE_BF16 = "ENFORCE_BF16"
The name for setting to execute in bfloat16 precision whenever it is possible.
This option lets the plugin know to downscale the precision where it sees performance benefits from bfloat16 execution. The option does not guarantee the accuracy of the network; accuracy in this mode should be verified separately by the user, and, based on the performance and accuracy results, it is the user's decision whether to use this option.
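A sketch turning bfloat16 enforcement on for the CPU plugin:
ie.SetConfig({{CONFIG_KEY(ENFORCE_BF16), CONFIG_VALUE(YES)}}, "CPU"); // allow bf16 downscaling where beneficial; verify accuracy separately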
static constexpr auto KEY_CACHE_DIR = "CACHE_DIR"
This key defines the directory which will be used to store any data cached by plugins.
The underlying cache structure is not defined and might differ between OpenVINO releases. Cached data might be platform/device specific and might be invalid after an OpenVINO version change. If this key is not specified or the value is an empty string, caching is disabled. The key might enable caching for the plugin using the following code:
ie.SetConfig({{CONFIG_KEY(CACHE_DIR), "cache/"}}, "GPU"); // enables cache for GPU plugin
The following code enables caching of compiled network blobs for devices where import/export is supported:
ie.SetConfig({{CONFIG_KEY(CACHE_DIR), "cache/"}}); // enables models cache
static constexpr auto KEY_FORCE_TBB_TERMINATE = "FORCE_TBB_TERMINATE"
The key to decide whether to terminate TBB threads when the Inference Engine is destructed.
Value type: boolean
YES - explicitly terminate TBB when the Inference Engine is destructed
NO - do not involve additional TBB operations when the Inference Engine is destructed
ie.SetConfig({{CONFIG_KEY(FORCE_TBB_TERMINATE), CONFIG_VALUE(YES)}}); // enable