Data Structures | Macros | Typedefs | Enumerations | Variables
ie_layers.h File Reference

A header file for internal Layers structure to describe layers information. More...

#include <algorithm>
#include <cctype>
#include <iterator>
#include <limits>
#include <map>
#include <memory>
#include <string>
#include <vector>
#include "ie_blob.h"
#include "ie_common.h"
#include "ie_data.h"
#include "ie_layers_property.hpp"

Go to the source code of this file.

Data Structures

struct  PortMap
 
struct  Body
 Describes a tensor iterator body. More...
 

Macros

#define DEFINE_PROP(prop_name)
 A convenient way to declare a property with backward compatibility to 2D members. More...
 

Typedefs

using InferenceEngine::GenericLayer = class CNNLayer
 Alias for CNNLayer object.
 

Enumerations

enum  PoolType
 Defines available pooling types.
 
enum  eBinaryConvolutionMode
 Defines possible modes of binary convolution operation.
 
enum  eOperation
 Defines possible operations that can be used.
 
enum  CellType
 Direct type of recurrent cell (including subtypes). Descriptions of particular cell semantics are in LSTMCell, GRUCell, RNNCell.
 
enum  Direction
 Direction of iteration through sequence dimension.
 
enum  ePadMode
 Defines possible modes of pad operation.
 

Variables

struct {
   std::string   InferenceEngine::name
 Layer name.
 
   std::string   InferenceEngine::type
 Layer type.
 
   Precision   InferenceEngine::precision
 Layer precision.
 
}; 
 This is an internal common Layer parameter parsing arguments. More...
 
class {
using Ptr = std::shared_ptr< CNNLayer >
 A shared pointer to CNNLayer.
 
   std::string   InferenceEngine::name
 Layer name.
 
   std::string   InferenceEngine::type
 Layer type.
 
   Precision   InferenceEngine::precision
 Layer base operating precision.
 
   std::vector< DataPtr >   InferenceEngine::outData
 A vector of pointers to the output data elements of this layer in the di-graph (order matters)
 
   std::vector< DataWeakPtr >   InferenceEngine::insData
 A vector of weak pointers to the input data elements of this layer in the di-graph (order matters)
 
   Ptr   InferenceEngine::_fusedWith
 If fusion is suggested, a pointer to the layer that this layer needs to be fused with.
 
   UserValue   InferenceEngine::userValue
 Convenience user values to store in this object as extra data.
 
   std::string   InferenceEngine::affinity
 Layer affinity set by user.
 
   std::map< std::string, std::string >   InferenceEngine::params
 Map of pairs: (parameter name, parameter value)
 
   std::map< std::string, Blob::Ptr >   InferenceEngine::blobs
 Map of pairs: (name, weights/biases blob)
 
   std::shared_ptr< ngraph::Node >   node
 
}; 
 This is a base abstraction Layer - all DNN Layers inherit from this class. More...
 
Blob::Ptr InferenceEngine::_weights
 A pointer to a weights blob.
 
Blob::Ptr InferenceEngine::_biases
 A pointer to a biases blob.
 
PropertyVector< unsigned int > InferenceEngine::_kernel
 A convolution kernel array [X, Y, Z, ...]. More...
 
unsigned int & InferenceEngine::_kernel_x = _kernel.at(X_AXIS)
 
unsigned int & InferenceEngine::_kernel_y = _kernel.at(Y_AXIS)
 
PropertyVector< unsigned int > InferenceEngine::_padding
 A convolution paddings begin array [X, Y, Z, ...]. More...
 
unsigned int & InferenceEngine::_padding_x = _padding.at(X_AXIS)
 
unsigned int & InferenceEngine::_padding_y = _padding.at(Y_AXIS)
 
PropertyVector< unsigned int > InferenceEngine::_pads_end
 A convolution paddings end array [X, Y, Z, ...]. More...
 
PropertyVector< unsigned int > InferenceEngine::_stride
 A convolution strides array [X, Y, Z, ...]. More...
 
unsigned int & InferenceEngine::_stride_x = _stride.at(X_AXIS)
 
unsigned int & InferenceEngine::_stride_y = _stride.at(Y_AXIS)
 
PropertyVector< unsigned int > InferenceEngine::_dilation
 A convolution dilations array [X, Y, Z, ...].
 
unsigned int & InferenceEngine::_dilation_x = _dilation.at(X_AXIS)
 
unsigned int & InferenceEngine::_dilation_y = _dilation.at(Y_AXIS)
 
unsigned int InferenceEngine::_out_depth = 0u
 A number of output feature maps (size) generating the third output dimension.
 
unsigned int InferenceEngine::_group = 1u
 Number of groups.
 
std::string InferenceEngine::_auto_pad
 Auto padding type.
 
unsigned int InferenceEngine::_deformable_group = 1u
 Number of deformable groups.
 
PoolType InferenceEngine::_type = MAX
 A pooling type.
 
bool InferenceEngine::_exclude_pad = false
 A flag that indicates if padding is excluded or not.
 
eBinaryConvolutionMode InferenceEngine::_mode = xnor_popcount
 Mode of binary convolution operation.
 
unsigned int InferenceEngine::_in_depth = 0u
 A number of input feature maps (size) generating the third input dimension.
 
float InferenceEngine::_pad_value = 0.0f
 A pad value which is used to fill pad area.
 
unsigned int InferenceEngine::_out_num = 0
 A size of output.
 
unsigned int InferenceEngine::_axis = 1
 An axis on which concatenation operation is performed. More...
 
unsigned int InferenceEngine::_size = 0
 Response size.
 
unsigned int InferenceEngine::_k = 1
 K.
 
float InferenceEngine::_alpha = 0
 Alpha coefficient.
 
float InferenceEngine::_beta = 0
 Beta coefficient.
 
bool InferenceEngine::_isAcrossMaps = false
 Flag to specify normalization across feature maps (true) or across channels.
 
int InferenceEngine::axis = 1
 Axis number for a softmax operation. More...
 
float InferenceEngine::bias = 0.f
 Bias for squares sum.
 
int InferenceEngine::across_channels = 0
 Indicates that the mean value is calculated across channels.
 
int InferenceEngine::normalize = 1
 Indicates that the result needs to be normalized.
 
float InferenceEngine::negative_slope = 0.0f
 Negative slope is used to handle negative inputs instead of setting them to 0.
 
float InferenceEngine::min_value = 0.0f
 A minimum value.
 
float InferenceEngine::max_value = 1.0f
 A maximum value.
 
eOperation InferenceEngine::_operation = Sum
 A type of the operation to use.
 
std::vector< float > InferenceEngine::coeff
 A vector of coefficients to scale the operands.
 
std::vector< int > InferenceEngine::dim
 A vector of dimensions to be preserved.
 
std::vector< int > InferenceEngine::offset
 A vector of offsets for each dimension. More...
 
std::vector< int > InferenceEngine::shape
 A vector of sizes of the shape.
 
int InferenceEngine::num_axes = -1
 A number of first axes to be taken for a reshape.
 
int InferenceEngine::tiles = -1
 A number of copies to be made.
 
unsigned int InferenceEngine::_broadcast = 0
 A flag that indicates if the same value is used for all the features. If false, the value is used pixel-wise.
 
std::vector< PortMap > InferenceEngine::input_port_map
 Input ports map.
 
std::vector< PortMap > InferenceEngine::output_port_map
 Output ports map.
 
std::vector< PortMap > InferenceEngine::back_edges
 Back edges map.
 
Body InferenceEngine::body
 A Tensor Iterator body.
 
CellType InferenceEngine::cellType = LSTM
 Direct type of recurrent cell (including subtypes). Descriptions of particular cell semantics are in LSTMCell, GRUCell, RNNCell.
 
int InferenceEngine::hidden_size = 0
 Size of hidden state data. More...
 
float InferenceEngine::clip = 0.0f
 Clip data into range [-clip, clip] on input of activations. More...
 
std::vector< std::string > InferenceEngine::activations
 Activations used inside recurrent cell. More...
 
std::vector< float > InferenceEngine::activation_alpha
 Alpha parameters of activations. More...
 
std::vector< float > InferenceEngine::activation_beta
 Beta parameters of activations. More...
 
Direction InferenceEngine::direction = FWD
 Direction of iteration through sequence dimension.
 
bool InferenceEngine::_channel_shared = false
 A flag that indicates if the same negative_slope value is used for all the features. If false, the value is used pixel-wise.
 
float InferenceEngine::power = 1.f
 An exponent value.
 
float InferenceEngine::scale = 1.f
 A scale factor.
 
float InferenceEngine::epsilon = 1e-3f
 A small value to add to the variance estimate to avoid division by zero.
 
float InferenceEngine::alpha = 1.f
 A scale factor of src1 matrix.
 
float InferenceEngine::beta = 1.f
 A scale factor of src3 matrix.
 
bool InferenceEngine::transpose_a = false
 A flag that indicates if the src1 matrix is to be transposed.
 
bool InferenceEngine::transpose_b = false
 A flag that indicates if the src2 matrix is to be transposed.
 
PropertyVector< unsigned int > InferenceEngine::pads_begin
 Size of padding in the beginning of each axis.
 
PropertyVector< unsigned int > InferenceEngine::pads_end
 Size of padding in the end of each axis.
 
ePadMode InferenceEngine::pad_mode = Constant
 Mode of pad operation.
 
float InferenceEngine::pad_value = 0.0f
 A pad value which is used for filling in Constant mode.
 
std::string InferenceEngine::begin_mask
 The begin_mask is a bitmask where bit i being 0 means to ignore the begin value and instead use the default value.
 
std::string InferenceEngine::end_mask
 Analogous to begin_mask.
 
std::string InferenceEngine::ellipsis_mask
 The ellipsis_mask is a bitmask where bit i being 1 means the i-th position is actually an ellipsis.
 
std::string InferenceEngine::new_axis_mask
 The new_axis_mask is a bitmask where bit i being 1 means a new dimension of size 1 is inserted at the i-th position.
 
std::string InferenceEngine::shrink_axis_mask
 The shrink_axis_mask is a bitmask where bit i being 1 means the i-th position shrinks the dimensionality.
 
unsigned int InferenceEngine::group = 1
 The group of output shuffled channels.
 
unsigned int InferenceEngine::block_size = 1
 The block size of the Depth To Space / Space To Depth rearrangement. More...
 
std::vector< size_t > InferenceEngine::_block_shape
 Spatial dimensions blocks sizes.
 
std::vector< size_t > InferenceEngine::_pads_begin
 Size of padding in the beginning of each axis.
 
std::vector< size_t > InferenceEngine::_crops_begin
 Specifies how many elements to crop from the beginning of the intermediate result across the spatial dimensions.
 
std::vector< size_t > InferenceEngine::_crops_end
 Specifies how many elements to crop from the end of the intermediate result across the spatial dimensions.
 
bool InferenceEngine::with_right_bound = true
 Indicates whether the intervals include the right or the left bucket edge.
 
int InferenceEngine::seq_axis = 1
 The seq_axis dimension in tensor which is partially reversed.
 
int InferenceEngine::batch_axis = 0
 The batch_axis dimension in tensor along which reversal is performed.
 
unsigned int InferenceEngine::depth = 0
 A depth of representation.
 
float InferenceEngine::on_value = 1.f
 The locations represented by indices in input take value on_value.
 
float InferenceEngine::off_value = 0.f
 The locations not represented by indices in input take value off_value.
 
int InferenceEngine::levels = 1
 The number of quantization levels.
 
bool InferenceEngine::keep_dims = true
 A flag that indicates whether the reduced dimensions are kept in the output tensor.
 
std::string InferenceEngine::mode
 The mode could be 'max' or 'min'.
 
std::string InferenceEngine::sort
 Top K values sort mode. Could be 'value' or 'index'.
 
bool InferenceEngine::sorted
 A flag indicating whether to sort unique elements.
 
bool InferenceEngine::return_inverse
 A flag indicating whether to return indices of input data elements in the output of uniques.
 
bool InferenceEngine::return_counts
 A flag indicating whether to return the number of occurrences for each unique element.
 
bool InferenceEngine::center_point_box = false
 The 'center_point_box' indicates the format of the box data.
 
bool InferenceEngine::sort_result_descending = true
 The 'sort_result_descending' indicates that the results will be sorted in descending order by score across all batches and classes.
 
int InferenceEngine::flatten = 1
 Flatten value.
 
int InferenceEngine::grid_w = 0
 Value of grid width.
 
int InferenceEngine::grid_h = 0
 Value of grid height.
 
float InferenceEngine::stride_w = 0.f
 Value of width step between grid cells.
 
float InferenceEngine::stride_h = 0.f
 Value of height step between grid cells.
 
int InferenceEngine::max_rois = 0
 The maximum number of output rois.
 
float InferenceEngine::min_size = 0.f
 Minimum width and height for boxes.
 
float InferenceEngine::nms_threshold = 0.7f
 Non-maximum suppression threshold.
 
int InferenceEngine::pre_nms_topn = 1000
 Maximum number of anchors selected before NMS.
 
int InferenceEngine::post_nms_topn = 1000
 Maximum number of anchors selected after NMS.
 

Detailed Description

A header file for internal Layers structure to describe layers information.

Macro Definition Documentation

§ DEFINE_PROP

#define DEFINE_PROP (   prop_name)
Value:
PropertyVector<unsigned int> prop_name; \
unsigned int& prop_name##_x = prop_name.at(X_AXIS); \
unsigned int& prop_name##_y = prop_name.at(Y_AXIS)

A convenient way to declare a property with backward compatibility to 2D members.
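
For illustration, applying the macro to the convolution kernel property produces the _kernel, _kernel_x and _kernel_y members listed in the Variables section above:

DEFINE_PROP(_kernel);

// expands to:
PropertyVector<unsigned int> _kernel;
unsigned int& _kernel_x = _kernel.at(X_AXIS);
unsigned int& _kernel_y = _kernel.at(Y_AXIS);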

Variable Documentation

§ @1

struct { ... }

This is an internal common Layer parameter parsing arguments.

Deprecated:
Migrate to IR v10 and work with ngraph::Function directly. The method will be removed in 2021.1
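
A minimal usage sketch, assuming this anonymous struct is the InferenceEngine::LayerParams type accepted by the CNNLayer constructor (the LayerParams name and that constructor signature are assumptions not confirmed on this page; only the field names come from the list above):

#include <memory>
#include "ie_layers.h"

// Hedged sketch: LayerParams and CNNLayer(const LayerParams&) are assumptions
// based on this header's conventions.
std::shared_ptr<InferenceEngine::CNNLayer> makeLayer() {
    InferenceEngine::LayerParams lp;
    lp.name = "my_layer";                              // Layer name
    lp.type = "Power";                                 // Layer type
    lp.precision = InferenceEngine::Precision::FP32;   // Layer precision
    return std::make_shared<InferenceEngine::CNNLayer>(lp);
}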

§ @3

class { ... }

This is a base abstraction Layer - all DNN Layers inherit from this class.

Deprecated:
Migrate to IR v10 and work with ngraph::Function directly. The method will be removed in 2021.1
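
An illustrative sketch that reads the members documented above from an existing layer object; how the CNNLayer instance is obtained is outside the scope of this header:

#include <iostream>
#include "ie_layers.h"

// Inspect the common CNNLayer members listed above: name, type, params, blobs, outData.
void dumpLayer(const InferenceEngine::CNNLayer& layer) {
    std::cout << layer.name << " (" << layer.type << ")\n";

    // Map of pairs: (parameter name, parameter value)
    for (const auto& p : layer.params)
        std::cout << "  param " << p.first << " = " << p.second << "\n";

    // Map of pairs: (name, weights/biases blob)
    for (const auto& b : layer.blobs)
        std::cout << "  blob  " << b.first << " with " << b.second->size() << " elements\n";

    // Output data elements of this layer in the di-graph (order matters)
    std::cout << "  outputs: " << layer.outData.size() << "\n";
}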

§ _axis

unsigned int InferenceEngine::_axis = 1

An axis on which concatenation operation is performed.

An axis on which split operation is performed.

§ _kernel

PropertyVector<unsigned int> InferenceEngine::_kernel

A convolution kernel array [X, Y, Z, ...].

Pooling kernel array [X, Y, Z, ...].

§ _padding

PropertyVector<unsigned int> InferenceEngine::_padding

A convolution paddings begin array [X, Y, Z, ...].

Pooling paddings begin array [X, Y, Z, ...].

§ _pads_end

std::vector<size_t> InferenceEngine::_pads_end

A convolution paddings end array [X, Y, Z, ...].

Size of padding in the end of each axis.

Pooling paddings end array [X, Y, Z, ...].

§ _stride

PropertyVector<unsigned int> InferenceEngine::_stride

A convolution strides array [X, Y, Z, ...].

Pooling strides array [X, Y, Z, ...].

§ activation_alpha

std::vector<float> InferenceEngine::activation_alpha

Alpha parameters of activations.

Respective to activation list.

§ activation_beta

std::vector<float> InferenceEngine::activation_beta

Beta parameters of activations.

Respective to activation list.

§ activations

std::vector<std::string> InferenceEngine::activations

Activations used inside recurrent cell.

Valid values: sigmoid, tanh, relu
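
A hedged configuration sketch that ties together the recurrent-cell members listed on this page (hidden_size, clip, activations, activation_alpha, activation_beta); the RNNCellBase layer type and the concrete values are assumptions for illustration:

#include "ie_layers.h"

// Hedged sketch: only the member names come from this page; "RNNCellBase" and
// the specific values are illustrative assumptions.
void configureLstmCell(InferenceEngine::RNNCellBase& cell) {
    cell.hidden_size = 128;                            // Size of hidden state data
    cell.clip = 0.0f;                                  // clip==0.0f means no clipping
    cell.activations = {"sigmoid", "tanh", "tanh"};    // valid values: sigmoid, tanh, relu
    cell.activation_alpha = {};                        // respective to the activation list
    cell.activation_beta = {};                         // respective to the activation list
}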

§ axis

int InferenceEngine::axis = 1

Axis number for a softmax operation.

The axis dimension in the tensor along which the top K values are picked.

Define the shape of output tensor.

The axis in tensor to shuffle channels.

The axis in Dictionary to gather Indexes from.

An axis by which iteration is performed.

An index of the axis to tile.

A number of axes to be taken for a reshape.

A vector of dimensions for cropping.

axis=0 means the first input/output data blob dimension is sequence; axis=1 means the first input/output data blob dimension is batch.

§ block_size

unsigned int InferenceEngine::block_size = 1

The block size of the Depth To Space rearrangement.
 
The block size of the Space To Depth rearrangement.

§ clip

float InferenceEngine::clip = 0.0f

Clip data into range [-clip, clip] on input of activations.

clip==0.0f means no clipping

§ hidden_size

int InferenceEngine::hidden_size = 0

Size of hidden state data.

In case of batch, the output state tensor will have shape [N, hidden_size].

§ offset

float InferenceEngine::offset = 0.f

A vector of offsets for each dimension.

An offset value.

§ precision

Precision InferenceEngine::precision

Layer precision.

Layer base operating precision.