Broadcast

Versioned name: Broadcast-3

Category: Data movement

Short description: Broadcast replicates data on the first input to fit a given shape on the second input.

Detailed description:

Broadcast takes the first input tensor data and, following the broadcasting rules specified by the mode attribute and the 3rd input axes_mapping, builds a new tensor whose shape matches the 2nd input target_shape. The target_shape input is a 1D integer tensor that represents the required shape of the output.

The mode attribute and the 3rd input axes_mapping are relevant when the rank of the input data tensor doesn't match the size of the target_shape input. Together they define how axes of the data shape are mapped to axes of the output. If mode is set to numpy, the standard one-directional numpy broadcasting rules are applied. These are the same rules used by binary element-wise operations when their auto_broadcast attribute is set to numpy, restricted to one-directional broadcasting: the input tensor data is broadcast to target_shape, but not vice versa.
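
For illustration, the following NumPy sketch reproduces the one-directional rule; the concrete shapes are illustrative only and are not taken from the specification:

import numpy as np

# One-directional "numpy" broadcast: data is expanded to target_shape,
# but target_shape is never adjusted to fit data.
data = np.zeros((16, 1, 1))        # data input, shape [16, 1, 1]
target_shape = (1, 16, 50, 50)     # value of the target_shape input

# np.broadcast_to applies exactly this rule: dimensions are aligned from the
# right, and each data dimension must match the target dimension or be 1.
out = np.broadcast_to(data, target_shape)
print(out.shape)                   # (1, 16, 50, 50)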

If mode is set to bidirectional, the broadcast rule is similar to numpy.array(input) * numpy.ones(target_shape). Dimensions are aligned from the right; two corresponding dimensions must either have the same value, or one of them must be equal to 1. When this mode is used, the 3rd input must not be provided. This kind of broadcasting is equivalent to the ONNX Expand operation.
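
A NumPy sketch of the bidirectional rule (shapes are again illustrative). Note that, unlike the numpy mode, the output shape may exceed target_shape in dimensions where data is larger:

import numpy as np

# Bidirectional broadcast: equivalent to numpy.array(input) * numpy.ones(target_shape).
data = np.arange(16, dtype=np.float32).reshape(16, 1, 1)
target_shape = (1, 1, 50, 50)

out = data * np.ones(target_shape, dtype=np.float32)
print(out.shape)   # (1, 16, 50, 50) -- data's 16 "wins" over the 1 in target_shape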

If mode is set to explicit, the 3rd input axes_mapping comes into play. It contains a list of axis indices; each index maps an axis of the 1st input tensor data to an axis of the output. The size of axes_mapping must match the rank of the input data tensor, so every axis of the data tensor is mapped to an axis of the output.

For example, axes_mapping = [1] enables broadcasting a tensor with shape [C] to shape [N,C,H,W] by replicating the initial tensor along dimensions 0, 2 and 3. Another example is broadcasting a tensor with shape [H,W] to shape [N,H,W,C] with axes_mapping = [1, 2]. Both examples require mode set to explicit and the axes_mapping input to be provided, because such operations cannot be expressed with mode set to numpy.
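
The explicit mode can be emulated in NumPy by first placing each data axis at the output position given by axes_mapping and then replicating along the remaining axes. The helper below is an illustrative sketch, not part of the operation's definition:

import numpy as np

def broadcast_explicit(data, target_shape, axes_mapping):
    # Insert size-1 dimensions everywhere except the axes listed in axes_mapping,
    # then let NumPy replicate along those size-1 dimensions.
    expanded = [1] * len(target_shape)
    for data_axis, out_axis in enumerate(axes_mapping):
        expanded[out_axis] = data.shape[data_axis]
    return np.broadcast_to(data.reshape(expanded), target_shape)

# [C] -> [N, C, H, W] with axes_mapping = [1]
print(broadcast_explicit(np.arange(3.0), (2, 3, 4, 5), [1]).shape)        # (2, 3, 4, 5)

# [H, W] -> [N, H, W, C] with axes_mapping = [1, 2]
print(broadcast_explicit(np.zeros((4, 5)), (2, 4, 5, 3), [1, 2]).shape)   # (2, 4, 5, 3)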

Attributes:

* mode -- specifies the rule used to map axes of the data tensor to axes of the output.
    * numpy -- one-directional numpy broadcasting of data to target_shape; the axes_mapping input must not be provided.
    * explicit -- the mapping from data axes to output axes is given by the axes_mapping input, which is required in this mode.
    * bidirectional -- bidirectional broadcasting of data against target_shape; the axes_mapping input must not be provided.

Inputs:

* 1: data -- a tensor of arbitrary rank to be broadcast. Required.
* 2: target_shape -- a 1D integer tensor describing the shape of the output. Required.
* 3: axes_mapping -- a 1D integer tensor that maps axes of data to axes of the output. Required if and only if mode is explicit.

Outputs:

* 1: the tensor produced by broadcasting data according to mode and target_shape.

Types:

* data and the output tensor can be of any supported type; target_shape and axes_mapping must be of integer type.

Example:

<layer ... type="Broadcast" ...>
    <data mode="numpy"/>
    <input>
        <port id="0">
            <dim>16</dim>
            <dim>1</dim>
            <dim>1</dim>
        </port>
        <port id="1">
            <dim>4</dim>
        </port>
    </input>
    <output>
        <port id="2">
            <dim>1</dim>
            <dim>16</dim>
            <dim>50</dim>
            <dim>50</dim>
        </port>
    </output>
</layer>
<layer ... type="Broadcast" ...>
    <data mode="explicit"/>
    <input>
        <port id="0">
            <dim>16</dim>
        </port>
        <port id="1">  <!-- target_shape value: [1, 16, 50, 50] -->
            <dim>4</dim>
        </port>
        <port id="2">  <!-- axes_mapping value: [1] -->
            <dim>1</dim>
        </port>
    </input>
    <output>
        <port id="3">
            <dim>1</dim>
            <dim>16</dim>
            <dim>50</dim>
            <dim>50</dim>
        </port>
    </output>
</layer>
<layer ... type="Broadcast" ...>
    <data mode="explicit"/>
    <input>
        <port id="0">
            <dim>50</dim>
            <dim>50</dim>
        </port>
        <port id="1">  <!-- target_shape value: [1, 50, 50, 16] -->
            <dim>4</dim>
        </port>
        <port id="2">  <!-- axes_mapping value: [1, 2] -->
            <dim>2</dim>
        </port>
    </input>
    <output>
        <port id="3">
            <dim>1</dim>
            <dim>50</dim>
            <dim>50</dim>
            <dim>16</dim>
        </port>
    </output>
</layer>
<layer ... type="Broadcast" ...>
    <data mode="bidirectional"/>
    <input>
        <port id="0">
            <dim>16</dim>
            <dim>1</dim>
            <dim>1</dim>
        </port>
        <port id="1">
            <dim>4</dim>
        </port>
    </input>
    <output>
        <port id="2">
            <dim>1</dim>
            <dim>16</dim>
            <dim>50</dim>
            <dim>50</dim>
        </port>
    </output>
</layer>