Introduction to the Inference Engine Neural Network Builder

NOTE: This is a preview version of the Inference Engine Neural Network Builder API, provided for evaluation purposes only. The module structure and the API itself may change in future releases.

This API extends the Inference Engine functionality and allows you to create and modify topologies in source code.

Network Builder

InferenceEngine::Builder::Network allows you to create and modify graphs. This class does not modify the original graph when it is used for graph modification. Instead, it creates a copy of the original graph and works with the copy. Using this class also helps you avoid invalid graphs, because the builder validates the graph during construction.

If a graph contains custom layers and shape inference is required, add shape inference functions to the Network builder through a custom Context.

The Network builder provides methods for graph modification.

The convertToICNNNetwork(...) function converts an INetwork to a CNNNetwork.

Layer Builder

The InferenceEngine::Builder::Layer class creates and modifies layers. This class allows you to modify all layer parameters, add new constant data, change the type and name of the layer, and create a valid layer object.
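As a sketch, a generic layer builder can be constructed and configured in one chain, following the same pattern used in the modification example later in this document. The layer type, name, and parameter values below are illustrative, not taken from a real topology:

```cpp
// Create a generic layer builder for a "Power" layer named "power1"
// (both the type string and the name are illustrative).
InferenceEngine::Builder::Layer powerLayer("Power", "power1");
// Set layer parameters and declare one input and one output port.
powerLayer.setParameters({{"power", 1.0f}, {"scale", 2.0f}, {"shift", 0.5f}})
          .setInputPorts({InferenceEngine::Port()})
          .setOutputPorts({InferenceEngine::Port()});
```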

Builders for Standard layers

Each default Inference Engine layer has a dedicated builder that simplifies layer creation. These builders hide the methods that do not apply to the specific layer and add layer-specific ones.

A builder is available for every default layer.

Known Limitations

The Inference Engine Neural Network Builder API does not support the TensorIterator layer.

How to Use

To use the NN Builder API, include the ie_builders.hpp header, which pulls in all Inference Engine builders.

After that, all builders will be available to use.
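For example:

```cpp
// Pulls in InferenceEngine::Builder::Network, layer builders, and ports.
#include <ie_builders.hpp>
```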

The NN Builder can be created in different ways:

// Get network from the reader
InferenceEngine::CNNNetwork cnnNetwork = networkReader.getNetwork();
// Create NN builder with a name (the network name "Example1" is illustrative)
InferenceEngine::Builder::Network graph1("Example1");
// Create NN builder from CNNNetwork
InferenceEngine::Builder::Network graph2(cnnNetwork);
// Build a network
InferenceEngine::INetwork::Ptr iNetwork = graph2.build();
// Create NN builder from INetwork
InferenceEngine::Builder::Network graph3(*iNetwork);
// Create an Inference Engine context
InferenceEngine::Context customContext;
// Add shape infer extension
customContext.addExtension(customShapeInferExtension);
// Create NN builder with custom context (all other examples also allow to create graph with custom context)
InferenceEngine::Builder::Network graph4(customContext, *iNetwork);

You can modify a graph with the NN Builder:

// Create NN builder with a name (the network name "Example2" is illustrative)
InferenceEngine::Builder::Network graph("Example2");
// Add new layers
// Add an input layer builder in place
idx_t inputLayerId = graph.addLayer(Builder::InputLayer("in").setPort(Port({1, 3, 22, 22})));
// Add a ReLU layer builder in place with a negative slope 0.1 and connect it with output port 0 of the Input layer builder
// In this example, layerId is equal to new Input layer builder ID, port index is not set, because 0 is a default value ({layerId} == {layerId, 0})
idx_t relu1Id = graph.addLayer({{inputLayerId}}, Builder::ReLULayer("relu1").setNegativeSlope(0.1f));
// Add a ScaleShift layer builder in place
InferenceEngine::Blob::Ptr blobWithScaleShiftBiases = make_shared_blob<float>(TensorDesc(Precision::FP32, {3}, Layout::C));
blobWithScaleShiftBiases->allocate();
auto *data = blobWithScaleShiftBiases->buffer().as<float *>();
data[0] = 1;
data[1] = 2;
data[2] = 3;
idx_t biasesId = graph.addLayer(Builder::ConstLayer("biases").setData(blobWithScaleShiftBiases));
idx_t scaleShiftId = graph.addLayer(Builder::ScaleShiftLayer("scaleShift1").setBiases(blobWithScaleShiftBiases));
// Connect ScaleShift layer in place with relu1
// Port indexes can also be set explicitly (0 is the default value):
// graph.connect({relu1Id, outPortIdx}, {scaleShiftId, inPortIdx});
graph.connect({relu1Id}, {scaleShiftId});
graph.connect({biasesId}, {scaleShiftId, 2}); // Connect biases as input
// Create a ReLU layer builder in place with a negative slope 0.2 using generic layer builder and connect it with scaleShift
idx_t relu2Id = graph.addLayer({{scaleShiftId}}, Builder::Layer("ReLU", "relu2").setParameters({{"negative_slope", 0.2f}}).setOutputPorts({Port()}).setInputPorts({Port()}));
// All branches in the graph should end with the Output layer. The following line creates the Output layer
idx_t outId = graph.addLayer({{relu2Id, 0}}, Builder::OutputLayer("out"));
// Build a network
InferenceEngine::INetwork::Ptr finalNetwork = graph.build();
std::shared_ptr<InferenceEngine::ICNNNetwork> cnnNetwork = InferenceEngine::Builder::convertToICNNNetwork(finalNetwork);
// Remove the relu2 layer from the topology
std::vector<InferenceEngine::Connection> connections = graph.getLayerConnections(relu2Id);
for (const auto& connection : connections) {
    graph.disconnect(connection);
}
graph.removeLayer(relu2Id);
// Connect scaleShift1 and out
graph.connect({scaleShiftId}, {outId});
// Build a network without relu2
InferenceEngine::INetwork::Ptr changedNetwork = graph.build();