Heterogeneous Plugin

Introducing Heterogeneous Plugin

The heterogeneous plugin enables inference of one network on several devices. Typical purposes of executing a network in heterogeneous mode are to calculate the heaviest parts of the network on an accelerator while running layers that the accelerator does not support on fallback devices such as the CPU, and to utilize all available hardware more efficiently during one inference.

The execution through the heterogeneous plugin can be divided into two independent steps: setting affinity to layers (binding them to devices) and loading the network to the heterogeneous plugin for execution.

These steps are decoupled. Affinity can be set automatically using the fallback policy or manually.

The automatic fallback policy implements greedy behavior: it assigns every layer that can be executed on a certain device to that device, following the given device priorities.

Some topologies are not friendly to heterogeneous execution on certain devices, or cannot be executed in this mode at all. An example is a network with activation layers that are not supported on the primary device. If transferring data from one part of the network to another takes a relatively long time, executing those parts on different devices makes little sense. In this case, you can define the heaviest part manually and set affinity in a way that avoids sending data back and forth many times during one inference.

Annotation of Layers per Device and Default Fallback Policy

The default fallback policy decides which layer goes to which device automatically, according to layer support in the dedicated plugins (FPGA, GPU, CPU, MYRIAD).

Another way to annotate a network is to set affinity manually using the CNNLayer::affinity field. This field accepts string values of device names, such as "CPU" or "FPGA".
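
For example, a purely manual annotation might look like the following sketch. The layer names "conv1" and "fc8" are placeholders for layers of a real model, and the network object is assumed to be already read from an IR as in the snippets below:

// "network" is an InferenceEngine::CNNNetwork read from an IR beforehand
// "conv1" and "fc8" are placeholder layer names; use real layer names from your model
network.getLayerByName("conv1")->affinity = "FPGA";
network.getLayerByName("fc8")->affinity = "CPU";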

The fallback policy does not work if even one layer has an initialized affinity. The recommended sequence is to run the automatic affinity initialization first and then correct the affinities manually.

InferenceEngine::Core core;
InferenceEngine::CNNNetReader reader;
reader.ReadNetwork("Model.xml");
reader.ReadWeights("Model.bin");
auto network = reader.getNetwork();
// This example demonstrates how to perform default affinity initialization and then
// correct affinities manually for some layers
const std::string device = "HETERO:FPGA,CPU";
// The QueryNetworkResult object contains a layer -> device map
InferenceEngine::QueryNetworkResult res = core.QueryNetwork(network, device, { });
// update default affinities
res.supportedLayersMap["layerName"] = "CPU";
// set affinities to network
for (auto && layer : res.supportedLayersMap) {
    network.getLayerByName(layer.first)->affinity = layer.second;
}
// load network with affinities set before
auto executable_network = core.LoadNetwork(network, device);

If you rely on the default affinity distribution, you can avoid calling InferenceEngine::Core::QueryNetwork and just call InferenceEngine::Core::LoadNetwork instead:

InferenceEngine::Core core;
InferenceEngine::CNNNetReader reader;
reader.ReadNetwork("Model.xml");
reader.ReadWeights("Model.bin");
auto executable_network = core.LoadNetwork(reader.getNetwork(), "HETERO:FPGA,CPU");
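
After loading, the heterogeneous network is used like any other executable network. A minimal sketch, assuming the model has an output blob named "prob" (a placeholder name):

auto infer_request = executable_network.CreateInferRequest();
// ... fill input blobs here ...
infer_request.Infer();
// "prob" is a placeholder output name; use your model's actual output name
auto output_blob = infer_request.GetBlob("prob");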

Details of Splitting Network and Execution

During loading of the network to the heterogeneous plugin, the network is divided into separate parts and loaded to the dedicated plugins. Intermediate blobs between these subgraphs are allocated automatically in the most efficient way.

Execution Precision

Precision for inference in the heterogeneous plugin is defined by the precision of the IR and by the ability of the final plugins to execute in the precision defined in the IR.

For example, to run a network on GPU with CPU fallback in FP16, point to an FP16 IR: the GPU plugin executes its subgraphs in FP16, while the CPU plugin executes its part in FP32.
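
A minimal sketch of this scenario, assuming an FP16 IR with the hypothetical file names ModelFP16.xml and ModelFP16.bin:

InferenceEngine::Core core;
InferenceEngine::CNNNetReader reader;
// Read an FP16 IR; the file names are placeholders for a real model
reader.ReadNetwork("ModelFP16.xml");
reader.ReadWeights("ModelFP16.bin");
// GPU subgraphs run in FP16; layers that fall back to CPU run in FP32
auto executable_network = core.LoadNetwork(reader.getNetwork(), "HETERO:GPU,CPU");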

Samples can be used with the following command:

./object_detection_sample_ssd -m <path_to_model>/ModelSSD.xml -i <path_to_pictures>/picture.jpg -d HETERO:FPGA,CPU

where:

HETERO stands for the heterogeneous plugin.
FPGA,CPU points to the fallback policy with priority on FPGA and a fallback to CPU.

You can specify more than two devices, for example: -d HETERO:FPGA,GPU,CPU

Analyzing Heterogeneous Execution

After enabling the KEY_HETERO_DUMP_GRAPH_DOT config key, you can dump GraphViz* .dot files with annotations of devices per layer.

The heterogeneous plugin can generate two files:

hetero_affinity_<network name>.dot - annotation of affinities per layer. This file is written to disk only if the default fallback policy was executed.
hetero_subgraphs_<network name>.dot - annotation of affinities per subgraph. This file is written to disk during loading of the network to the heterogeneous plugin.

...
// KEY_HETERO_DUMP_GRAPH_DOT and YES are defined in hetero/hetero_plugin_config.hpp
// (namespaces InferenceEngine::HeteroConfigParams and InferenceEngine::PluginConfigParams)
InferenceEngine::Core core;
core.SetConfig({ { KEY_HETERO_DUMP_GRAPH_DOT, YES } }, "HETERO");

You can use the GraphViz* utility or convert the .dot files to .png format. On the Ubuntu* operating system, you can view a dumped file with the following commands:

sudo apt-get install xdot
xdot hetero_subgraphs_<network name>.dot

You can use performance counters (in samples, the -pc option) to get performance data for each subgraph.
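
Performance counters can also be retrieved programmatically. A minimal sketch, assuming the usual headers (<inference_engine.hpp>, <iostream>, <map>, <string>) are included and that the core and network objects come from the earlier snippets:

// Enable performance counters when loading the network
std::map<std::string, std::string> config = {
    { InferenceEngine::PluginConfigParams::KEY_PERF_COUNT,
      InferenceEngine::PluginConfigParams::YES } };
auto executable_network = core.LoadNetwork(network, "HETERO:FPGA,CPU", config);
auto infer_request = executable_network.CreateInferRequest();
// ... set input blobs here ...
infer_request.Infer();
// Each entry maps a layer or subgraph stage name to its timing information
for (const auto &item : infer_request.GetPerformanceCounts()) {
    std::cout << item.first << ": " << item.second.realTime_uSec << " us" << std::endl;
}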

Here is an example of the output for GoogLeNet v1 running on FPGA with a fallback to CPU:

subgraph1: 1. input preprocessing (mean data/FPGA):EXECUTED layerType: realTime: 129 cpu: 129 execType:
subgraph1: 2. input transfer to DDR:EXECUTED layerType: realTime: 201 cpu: 0 execType:
subgraph1: 3. FPGA execute time:EXECUTED layerType: realTime: 3808 cpu: 0 execType:
subgraph1: 4. output transfer from DDR:EXECUTED layerType: realTime: 55 cpu: 0 execType:
subgraph1: 5. FPGA output postprocessing:EXECUTED layerType: realTime: 7 cpu: 7 execType:
subgraph1: 6. copy to IE blob:EXECUTED layerType: realTime: 2 cpu: 2 execType:
subgraph2: out_prob: NOT_RUN layerType: Output realTime: 0 cpu: 0 execType: unknown
subgraph2: prob: EXECUTED layerType: SoftMax realTime: 10 cpu: 10 execType: ref
Total time: 4212 microseconds

See Also