Using Dynamic Batching

The Dynamic Batching feature allows you to dynamically change the batch size for inference calls within a preset batch size limit. This feature can be useful when the batch size is not known in advance and using an extra-large batch size is undesirable or impossible due to resource limitations. For example, face detection followed by age, gender, or mood recognition for each detected face is a typical usage scenario.

Usage

You can activate Dynamic Batching by setting the KEY_DYN_BATCH_ENABLED flag to YES in the configuration map that is passed to the plugin while loading a network. This configuration creates an ExecutableNetwork object that allows you to set the batch size dynamically in all of its infer requests using the SetBatch() method. The batch size that was set in the passed CNNNetwork object is used as the maximum batch size limit.

Here is a code example:

int dynBatchLimit = FLAGS_bl; // take the dynamic batch limit from a command-line option
// Read the network model
InferenceEngine::CNNNetwork network = core.ReadNetwork("sample.xml");
// Enable dynamic batching in the plugin configuration
const std::map<std::string, std::string> dyn_config = {
    { InferenceEngine::PluginConfigParams::KEY_DYN_BATCH_ENABLED,
      InferenceEngine::PluginConfigParams::YES } };
// The batch size set here becomes the maximum batch size limit
network.setBatchSize(dynBatchLimit);
// Create the executable network and an infer request
auto executable_network = core.LoadNetwork(network, "CPU", dyn_config);
auto infer_request = executable_network.CreateInferRequest();
// ...
// Process a set of images:
// dynamically set the batch size for subsequent Infer() calls of this request
size_t batchSize = imagesData.size();
infer_request.SetBatch(batchSize);
infer_request.Infer();
// ...
// Process another set of images
batchSize = imagesData2.size();
infer_request.SetBatch(batchSize);
infer_request.Infer();
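Because the batch size passed to SetBatch() must not exceed the limit set via setBatchSize(), it can help to process an arbitrarily large set of inputs in chunks that respect the limit. The following is a minimal sketch, assuming the infer_request, imagesData, and dynBatchLimit variables from the example above; the fillInputBlob helper is hypothetical and stands in for whatever routine copies a chunk of images into the input blob:

// Sketch: process imagesData in chunks no larger than the preset batch limit.
// fillInputBlob is a hypothetical helper, not part of the Inference Engine API.
size_t processed = 0;
while (processed < imagesData.size()) {
    const size_t chunk = std::min<size_t>(imagesData.size() - processed, dynBatchLimit);
    // fillInputBlob(infer_request, imagesData, processed, chunk);
    infer_request.SetBatch(chunk); // must be <= the limit set via setBatchSize()
    infer_request.Infer();
    processed += chunk;
}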

Limitations

Currently, the following limitations apply when using Dynamic Batching:

  • Use Dynamic Batching with CPU and GPU plugins only.
  • Use Dynamic Batching on topologies that consist of certain layers only:
    • Convolution
    • Deconvolution
    • Activation
    • LRN
    • Pooling
    • FullyConnected
    • SoftMax
    • Split
    • Concatenation
    • Power
    • Eltwise
    • Crop
    • BatchNormalization
    • Copy

Do not use layers that might arbitrarily change the tensor shape (such as Flatten, Permute, and Reshape), layers specific to object detection topologies (ROIPooling, PriorBox, DetectionOutput), or custom layers. Topology analysis is performed while a network is being loaded into a plugin, and if the topology is not applicable, an exception is thrown.
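Because this check happens at load time, you can guard the LoadNetwork() call to detect an unsupported topology before any inference runs. A minimal sketch, assuming the core, network, and dyn_config variables from the example above; Inference Engine exceptions derive from std::exception, so catching that is a safe lower bound regardless of the engine version:

// Sketch: topology validation for Dynamic Batching happens inside
// LoadNetwork(), so an unsupported topology surfaces as an exception here.
try {
    auto executable_network = core.LoadNetwork(network, "CPU", dyn_config);
} catch (const std::exception& ex) {
    // The exception message typically names the offending layer.
    std::cerr << "Dynamic Batching is not applicable: " << ex.what() << std::endl;
}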