The Shape Inference feature enables resizing a network before loading it to a plugin. It makes it possible to specify a differently-sized input upon reading the model with the Inference Engine, without going back to the Model Optimizer. The feature also replaces InferenceEngine::ICNNNetwork::SetBatchSize, as setting the batch is a special case of setting the whole input shape.
The primary method of the feature is InferenceEngine::CNNNetwork::reshape. It takes new input shapes and propagates them from inputs to outputs through all intermediate layers of the given network. The method accepts InferenceEngine::ICNNNetwork::InputShapes - a map of pairs: name of input data and its dimensions.
The algorithm for resizing a network is the following:
1) Collect the map of input names and shapes from Intermediate Representation (IR) using helper method InferenceEngine::CNNNetwork::getInputShapes
2) Set new input shapes
3) Call reshape
Here is a code example:
The Shape Inference feature is used in the Smart classroom sample.
Custom shape inference functions are registered by calling InferenceEngine::ICNNNetwork::AddExtension with an implemented InferenceEngine::IShapeInferExtension - the holder of the custom implementations. The holder requires 2 key methods to be implemented:
InferenceEngine::IShapeInferExtension::getShapeInferImpl - to return the custom shape inference implementation for the given type
InferenceEngine::IShapeInferExtension::getShapeInferTypes - to provide all custom types
A custom shape inference implementation is represented by InferenceEngine::IShapeInferImpl::inferShapes. It is not possible to override the built-in shape inference functions (see Supported layer types below), so a custom type must be different from the supported ones. The extensibility mechanism of the Shape Inference feature is demonstrated in the Hello Shape Infer SSD sample.
Shape Inference is a preview feature with a set of limitations:
The dim attribute of the Reshape layer can't be resized.