Using Shape Inference

OpenVINO™ provides the following methods for runtime model reshaping:

NOTES:

  • Starting with the 2021.1 release, the Model Optimizer converts topologies keeping shape-calculating sub-graphs by default, which enables correct shape propagation during reshaping in most cases.
  • Older IR versions are not guaranteed to reshape successfully. Regenerate them with the Model Optimizer from the latest version of OpenVINO™.
  • If an ONNX model does not have a fully defined input shape and the model was imported with the ONNX importer, reshape the model before loading it to the plugin.

You can change input shapes multiple times using the InferenceEngine::CNNNetwork::reshape and InferenceEngine::CNNNetwork::setBatchSize methods in any order. If a model has a hard-coded batch dimension, use InferenceEngine::CNNNetwork::setBatchSize first to change the batch, then call InferenceEngine::CNNNetwork::reshape to update other dimensions, if needed.
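
For example, here is a minimal sketch of that order, assuming an NCHW model read from a hypothetical "model.xml" whose input is named "data":

InferenceEngine::Core core;
InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");

network.setBatchSize(8);                // change the hard-coded batch dimension first

auto shapes = network.getInputShapes(); // map of input name -> SizeVector
shapes["data"][2] = 320;                // then update the height ("data" is a hypothetical input name)
shapes["data"][3] = 320;                // ... and the width
network.reshape(shapes);                // propagate the new dimensions through the network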

The Inference Engine accepts a model description in three forms, each of which is converted into an InferenceEngine::CNNNetwork object:

  1. Intermediate Representation (IR) through InferenceEngine::Core::ReadNetwork
  2. ONNX model through InferenceEngine::Core::ReadNetwork
  3. ngraph::Function through the constructor of InferenceEngine::CNNNetwork
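
For reference, a minimal sketch of the three creation paths (the file names and the create_function helper are hypothetical):

InferenceEngine::Core core;

// 1. Intermediate Representation (IR)
InferenceEngine::CNNNetwork from_ir = core.ReadNetwork("model.xml");

// 2. ONNX model
InferenceEngine::CNNNetwork from_onnx = core.ReadNetwork("model.onnx");

// 3. ngraph::Function built in code
std::shared_ptr<ngraph::Function> function = create_function(); // hypothetical helper
InferenceEngine::CNNNetwork from_ngraph(function);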

InferenceEngine::CNNNetwork keeps an ngraph::Function object with the model description internally. The object should have fully defined input shapes to be successfully loaded to the Inference Engine plugins. To resolve undefined input dimensions of a model, call the CNNNetwork::reshape method providing new input shapes before loading to the Inference Engine plugin.

Run the following code right after InferenceEngine::CNNNetwork creation to explicitly check for model input names and shapes:

CNNNetwork network = ... // read IR / ONNX model or create from ngraph::Function explicitly
const auto parameters = network.getFunction()->get_parameters();
for (const auto& parameter : parameters) {
    std::cout << "name: " << parameter->get_friendly_name()
              << " shape: " << parameter->get_partial_shape() << std::endl;
    if (parameter->get_partial_shape().is_dynamic()) {
        std::cout << "ATTENTION: Input shape is not fully defined. "
                  << "Use the CNNNetwork::reshape method to resolve it." << std::endl;
    }
}
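
If a dynamic input is reported, a minimal sketch of resolving it before loading the network (the input name "data" and the target shape are example values):

// Provide a fully defined shape for the dynamic input, then reshape.
std::map<std::string, InferenceEngine::SizeVector> new_shapes;
new_shapes["data"] = {1, 3, 224, 224}; // example: fully defined NCHW shape
network.reshape(new_shapes);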

To feed input data of a shape that is different from the model input shape, reshape the model first.

Once the input shape of InferenceEngine::CNNNetwork is set, call the InferenceEngine::Core::LoadNetwork method to get an InferenceEngine::ExecutableNetwork object for inference with updated shapes.

Alternatively, you can reshape a model at an earlier stage: during IR generation with the Model Optimizer or during ngraph::Function creation.

In practice, some models are not ready to be reshaped: for them, a new input shape cannot be set with either the Model Optimizer or the InferenceEngine::CNNNetwork::reshape method.

Troubleshooting Reshape Errors

Operation semantics may impose restrictions on the input shapes of an operation. A shape collision during shape propagation is a sign that the new shape does not satisfy these restrictions: changing the model input shape may cause a shape collision in intermediate operations.

Model structure and logic should not change significantly after reshaping, as the following examples illustrate:

  • The Global Pooling operation is commonly used to reduce the output feature map of classification models. Given an input of shape [N, C, H, W], Global Pooling returns an output of shape [N, C, 1, 1]. Model architects usually express Global Pooling with a Pooling operation that has a fixed kernel size of [H, W]. After a spatial reshape, given an input of shape [N, C, H1, W1], Pooling with the fixed kernel size [H, W] returns an output of shape [N, C, H2, W2], where H2 and W2 are commonly not equal to 1. This breaks the structure of the classification model; for example, publicly available Inception family models from TensorFlow* have this issue (see the arithmetic sketch after this list).
  • Changing the model input shape may significantly affect its accuracy. For example, Object Detection models from TensorFlow have resizing restrictions by design. To keep such a model valid after reshaping, choose a new input shape that satisfies the conditions listed in the pipeline.config file. For details, refer to the TensorFlow Object Detection API models resizing techniques.
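
To make the Global Pooling example concrete, here is the pooling output-size arithmetic as a small sketch (the kernel and input sizes are illustrative; stride 1 and no padding are assumed):

// Output spatial size of a pooling window, with no padding:
//   out = (in - kernel) / stride + 1
size_t pooled_dim(size_t in, size_t kernel, size_t stride = 1) {
    return (in - kernel) / stride + 1;
}
// Trained spatial size H = W = 7 with a fixed 7x7 kernel:
//   pooled_dim(7, 7) == 1   -> a 1x1 output, i.e. truly "global" pooling
// After reshaping the input to H1 = W1 = 14, the kernel stays 7x7:
//   pooled_dim(14, 7) == 8  -> an 8x8 output, no longer global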

Usage of Reshape Method

The primary method of the feature is InferenceEngine::CNNNetwork::reshape. It takes new input shapes and propagates them from inputs to outputs through all intermediate layers of the given network. The method accepts InferenceEngine::ICNNNetwork::InputShapes, a map from the name of the input data to its new dimensions.

The algorithm for resizing a network is as follows:

1) Collect the map of input names and shapes from the Intermediate Representation (IR) using the helper method InferenceEngine::CNNNetwork::getInputShapes

2) Set new input shapes

3) Call reshape

Here is a code example:

// ------------- 0. Read IR and image ----------------------------------------------
InferenceEngine::Core core;
InferenceEngine::CNNNetwork network = core.ReadNetwork("path/to/IR/xml");
cv::Mat image = cv::imread("path/to/image");
// ---------------------------------------------------------------------------------
// ------------- 1. Collect the map of input names and shapes from IR --------------
auto input_shapes = network.getInputShapes();
// ---------------------------------------------------------------------------------
// ------------- 2. Set new input shapes -------------------------------------------
const size_t batch_size = 1; // example batch size
std::string input_name;
InferenceEngine::SizeVector input_shape;
std::tie(input_name, input_shape) = *input_shapes.begin(); // consider the first input only
input_shape[0] = batch_size; // set the batch size as the first input dimension
input_shape[2] = image.rows; // set the input height to the image height
input_shape[3] = image.cols; // set the input width to the image width
input_shapes[input_name] = input_shape;
// ---------------------------------------------------------------------------------
// ------------- 3. Call reshape ---------------------------------------------------
network.reshape(input_shapes);
// ---------------------------------------------------------------------------------
//...
// ------------- 4. Load the model to the device ------------------------------------
std::string device = "CPU";
InferenceEngine::ExecutableNetwork executable_network = core.LoadNetwork(network, device);
// ---------------------------------------------------------------------------------

The Shape Inference feature is used in the Smart Classroom sample.

Extensibility

The Inference Engine provides a special mechanism that allows you to add support for shape inference of custom operations. This mechanism is described in the Extensibility documentation.
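
For orientation, a minimal sketch of a custom nGraph operation that participates in shape propagation by overriding validate_and_infer_types (the operation name and its pass-through shape logic are illustrative; see the Extensibility documentation for the complete mechanism):

#include <ngraph/ngraph.hpp>

// Hypothetical custom operation that copies its input shape to the output.
class CustomOp : public ngraph::op::Op {
public:
    static constexpr ngraph::NodeTypeInfo type_info{"CustomOp", 0};
    const ngraph::NodeTypeInfo& get_type_info() const override { return type_info; }

    CustomOp() = default;
    explicit CustomOp(const ngraph::Output<ngraph::Node>& arg) : Op({arg}) {
        constructor_validate_and_infer_types();
    }

    // Called during CNNNetwork::reshape: propagate the (possibly new)
    // input shape and element type to the output.
    void validate_and_infer_types() override {
        set_output_type(0, get_input_element_type(0), get_input_partial_shape(0));
    }

    std::shared_ptr<ngraph::Node>
    clone_with_new_inputs(const ngraph::OutputVector& new_args) const override {
        return std::make_shared<CustomOp>(new_args.at(0));
    }
};

constexpr ngraph::NodeTypeInfo CustomOp::type_info;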