►Nngraph | The Intel nGraph C++ API |
►Ndescriptor | Descriptors are compile-time representations of objects that will appear at run-time |
CInput | |
COutput | |
CTensor | Compile-time descriptor of a first-class value that is a tensor |
►Nelement | |
CType | |
►Nonnx_import | ONNX importer features namespace. Functions in this namespace make it possible to use ONNX models |
►Nerror | |
►Nnode | |
CUnknownAttribute | |
CNode | |
CONNXModelEditor | A class representing a set of utilities allowing modification of an ONNX model |
►Nop | Ops used in graph-building |
►Nutil | |
►Nerror | |
CUnknownActivationFunction | |
CActivationFunction | Class representing activation function used in RNN cells |
CArithmeticReduction | Abstract base class for arithmetic reduction operations, i.e., operations where chosen axes of the input tensors are eliminated (reduced out) by repeated application of a particular binary arithmetic operation |
CArithmeticReductionKeepDims | |
CBinaryElementwiseArithmetic | Abstract base class for elementwise binary arithmetic operations, i.e., operations where the same scalar binary arithmetic operation is applied to each corresponding pair of elements in the two input tensors. Implicit broadcast of input tensors is supported through one of the AutoBroadcast modes |
CBinaryElementwiseComparison | Abstract base class for elementwise binary comparison operations, i.e., operations where the same scalar binary comparison operation is applied to each corresponding pair of elements in two input tensors. Implicit broadcast of input tensors is supported through one of the AutoBroadcast modes |
CBinaryElementwiseLogical | Abstract base class for elementwise binary logical operations, i.e., operations where the same scalar binary logical operation is applied to each corresponding pair of elements in two boolean input tensors. Implicit broadcast of input tensors is supported through one of the AutoBroadcast modes |
CBroadcastBase | |
CEmbeddingBagOffsetsBase | Returns embeddings for given indices |
CEmbeddingBagPackedBase | Returns embeddings for given indices |
CIndexReduction | |
CLogicalReduction | Abstract base class for logical reduction operations, i.e., operations where chosen axes of the input tensors are eliminated (reduced out) by repeated application of a particular binary logical operation |
CLogicalReductionKeepDims | |
Coi_pair | |
COpAnnotations | Base class for annotations added to graph ops |
CRNNCellBase | Base class for all recurrent network cells |
CScatterBase | Base class for ScatterXXX operators |
CScatterNDBase | Base class for ScatterNDXXX operators |
►CSubGraphOp | Abstract base class for sub-graph-based ops, i.e., ops that contain a sub-graph |
CBodyOutputDescription | Produces an output from a specific iteration |
CConcatOutputDescription | Produces an output by concatenating an output from each iteration |
CInputDescription | Describes a connection between a SubGraphOp input and the body |
CInvariantInputDescription | Describes a body input initialized from a SubGraphOp input on the first iteration, and invariant thereafter |
CMergedInputDescription | Describes a body input initialized from a SubGraphOp input on the first iteration, and then a body output thereafter |
COutputDescription | Describes how a SubGraphOp output is produced from the body |
CSliceInputDescription | Describes a body input formed from slices of an input to SubGraphOp |
CUnaryElementwiseArithmetic | Abstract base class for elementwise unary arithmetic operations, i.e., operations where the same scalar arithmetic operation is applied to each element |
►Nv0 | |
CAbs | Elementwise absolute value operation |
CAcos | Elementwise inverse cosine (arccos) operation |
CAsin | Elementwise inverse sine (arcsin) operation |
CAtan | Elementwise inverse tangent (arctan) operation |
CBatchNormInference | |
CCeiling | Elementwise ceiling operation |
CClamp | Performs a clipping operation on all elements of the input node |
CConcat | Concatenation operation |
CConstant | Class for constants |
CConvert | Elementwise type conversion operation |
CCos | Elementwise cosine operation |
CCosh | Elementwise hyperbolic cosine (cosh) operation |
CCTCGreedyDecoder | |
CCumSum | Tensor cumulative sum operation |
CDepthToSpace | DepthToSpace permutes data from the depth dimension of the input blob into spatial dimensions |
CDetectionOutput | Layer which performs non-max suppression to generate detection output using location and confidence predictions |
CElu | Exponential Linear Unit: f(x) = alpha * (exp(x) - 1) for x < 0; f(x) = x for x >= 0 |
CErf | |
CExp | Elementwise natural exponential (exp) operation |
CFakeQuantize | Class performing element-wise linear quantization |
CFloor | Elementwise floor operation |
CGelu | Gaussian Error Linear Unit f(x) = 0.5 * x * (1 + erf(x / sqrt(2))) |
CGRN | Global Response Normalization with L2 norm (across channels only) |
CHardSigmoid | Parameterized, bounded sigmoid-like, piecewise linear function: f(x) = min(max(alpha*x + beta, 0), 1) |
CInterpolateAttrs | Structure that specifies attributes for interpolation |
CInterpolate | Layer which performs bilinear interpolation |
CLog | Elementwise natural log operation |
CLRN | Elementwise Local Response Normalization (LRN) operation |
CLSTMCell | Class for single lstm cell node |
CLSTMSequence | Class for lstm sequence node |
CMatMul | Operator performing Matrix Multiplication |
CMVN | Operator performing Mean Variance Normalization |
CNegative | Elementwise negative operation |
CNormalizeL2 | Normalizes the input tensor with the L2 norm |
CParameter | A function parameter |
CPRelu | Parametrized ReLU: f(x) = x * slope for x < 0; f(x) = x for x >= 0 |
CPriorBox | Layer which generates prior boxes of specified sizes normalized to input image size |
CPriorBoxClustered | Layer which generates prior boxes of specified sizes normalized to input image size |
CProposal | |
CPSROIPooling | |
CRange | Range operation, analogous to range() in Python |
CRegionYolo | |
CRelu | Elementwise Relu operation |
CReorgYolo | |
CResult | |
CReverseSequence | |
CRNNCell | Class for single RNN cell node |
CROIPooling | |
CSelu | Performs a SELU activation function on all elements of the input node |
CShapeOf | Operation that returns the shape of its input argument as a tensor |
CShuffleChannels | Permutes data in the channel dimension of the input |
CSigmoid | |
CSign | Elementwise sign operation |
CSin | Elementwise sine operation |
CSinh | Elementwise hyperbolic sine (sinh) operation |
CSpaceToDepth | SpaceToDepth permutes input tensor blocks of spatial data into depth dimension |
CSqrt | Elementwise square root operation |
CSquaredDifference | Calculates an element-wise squared difference between two tensors |
CSqueeze | |
CTan | Elementwise tangent operation |
CTanh | Elementwise hyperbolic tangent operation |
CTensorIterator | Iterate a body over tensors, accumulating into tensors |
CTile | Dynamic Tiling operation which repeats a tensor multiple times along each dimension |
CUnsqueeze | |
CXor | Elementwise logical-xor operation |
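Several of the v0 activation entries above quote their formulas inline. As a minimal illustration, the quoted math can be sketched as scalar reference functions; these mirror the documented formulas only and are not the nGraph kernels:

```cpp
#include <cmath>

// Scalar sketches of the activation formulas quoted in the v0 entries.
double elu(double x, double alpha) {
    // alpha * (exp(x) - 1) for x < 0; x otherwise
    return x < 0.0 ? alpha * (std::exp(x) - 1.0) : x;
}

double gelu(double x) {
    // 0.5 * x * (1 + erf(x / sqrt(2)))
    return 0.5 * x * (1.0 + std::erf(x / std::sqrt(2.0)));
}

double hard_sigmoid(double x, double alpha, double beta) {
    // min(max(alpha * x + beta, 0), 1)
    return std::fmin(std::fmax(alpha * x + beta, 0.0), 1.0);
}

double prelu(double x, double slope) {
    // x * slope for x < 0; x otherwise
    return x < 0.0 ? x * slope : x;
}
```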
►Nv1 | |
CAdd | Elementwise addition operation |
CLogicalAnd | Elementwise logical-and operation |
CAvgPool | Batched average pooling operation |
CBatchToSpace | BatchToSpace permutes data from the batch dimension of the data tensor into spatial dimensions |
CBinaryConvolution | |
CBroadcast | Operation which "adds" axes to an input tensor, replicating elements from the input as needed along the new axes |
CConvertLike | Elementwise type conversion operation |
CConvolution | Batched convolution operation, with optional window dilation and stride |
CConvolutionBackpropData | Data batch backprop for batched convolution operation |
CDeformableConvolution | DeformableConvolution operation |
CDeformablePSROIPooling | |
CDivide | Elementwise division operation |
CEqual | Elementwise is-equal operation |
CFloorMod | Elementwise FloorMod operation |
CGather | Gather slices from axis of params according to indices |
CGatherTree | Generates the complete beams from the ids per each step and the parent beam ids |
CGreater | Elementwise greater-than operation |
CGreaterEqual | Elementwise greater-than-or-equal operation |
CGroupConvolution | Batched convolution operation, with optional window dilation and stride |
CGroupConvolutionBackpropData | Data batch backprop for batched convolution operation |
CLess | Elementwise less-than operation |
CLessEqual | Elementwise less-than-or-equal operation |
CReduceMax | |
CMaxPool | Batched max pooling operation |
CMaximum | Elementwise maximum operation |
CReduceMin | |
CMinimum | Elementwise minimum operation |
CMod | Mod returns an element-wise division remainder of two given tensors, applying multi-directional broadcast rules |
CMultiply | Elementwise multiplication operation |
CNonMaxSuppression | NonMaxSuppression operation |
CLogicalNot | Elementwise logical negation operation |
CNotEqual | Elementwise not-equal operation |
COneHot | |
CLogicalOr | Elementwise logical-or operation |
CPad | Generic padding operation |
CPower | Elementwise exponentiation operation |
CReduceLogicalAnd | Performs a reduction using "logical and" |
CReduceLogicalOr | Performs a reduction using "logical or" |
CReduceMean | |
CReduceProd | Product reduction operation |
CReduceSum | Tensor sum operation |
CReshape | Tensor dynamic reshape operation |
CReverse | |
CSelect | Elementwise selection operation |
CSoftmax | |
CSpaceToBatch | SpaceToBatch permutes data tensor blocks of spatial data into batch dimension |
CSplit | Splits the input tensor into a list of equal sized tensors |
CStridedSlice | Takes a slice of an input tensor, i.e., the sub-tensor that resides within a bounding box, optionally with stride |
CSubtract | Elementwise subtraction operation |
CTopK | Computes indices and values of the k maximum/minimum values for each slice along specified axis |
CTranspose | Tensor transpose operation |
CVariadicSplit | VariadicSplit operation splits an input tensor into pieces along some axis. The pieces may have variadic lengths depending on the "split_lengths" attribute |
CLogicalXor | Elementwise logical-xor operation |
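The difference between v1::Mod and v1::FloorMod above comes down to which operand's sign the remainder follows. A minimal sketch, assuming Mod keeps the dividend's sign (like C's fmod) and FloorMod the divisor's sign (like Python's %):

```cpp
#include <cmath>

// Truncated remainder: result has the sign of the dividend (C fmod).
double truncated_mod(double a, double b) {
    return std::fmod(a, b);
}

// Floored remainder: result has the sign of the divisor (Python %).
double floor_mod(double a, double b) {
    double r = std::fmod(a, b);
    // Correct the truncated result when the signs of r and b disagree.
    return (r != 0.0 && ((r < 0.0) != (b < 0.0))) ? r + b : r;
}
```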
►Nv3 | |
CAcosh | Elementwise inverse hyperbolic cos operation |
CAsinh | Elementwise inverse hyperbolic sin operation |
CAssign | Assign operation sets an input value to the variable with variable_id |
CAtanh | Elementwise inverse hyperbolic tangent operation |
CBroadcast | Operation which "adds" axes to an input tensor, replicating elements from the input as needed along the new axes |
CBucketize | Operation that bucketizes the input based on boundaries |
CEmbeddingSegmentsSum | Returns embeddings for given indices |
CEmbeddingBagOffsetsSum | Returns embeddings for given indices |
CEmbeddingBagPackedSum | Returns embeddings for given indices |
CExtractImagePatches | |
CGRUCell | Class for GRU cell node |
CNonMaxSuppression | NonMaxSuppression operation |
CNonZero | NonZero operation returning indices of non-zero elements in the input tensor |
CReadValue | ReadValue operation creates the variable with variable_id and returns value of this variable |
CROIAlign | |
CScatterElementsUpdate | |
CScatterNDUpdate | Add updates to slices from inputs addressed by indices |
CScatterUpdate | Set new values to slices from data addressed by indices |
CShapeOf | Operation that returns the shape of its input argument as a tensor |
CTopK | Computes indices and values of the k maximum/minimum values for each slice along specified axis |
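v3::Bucketize above maps each input value to the index of the boundary interval it falls into. A sketch of the indexing rule for a single scalar, with a `right_bound` flag standing in for the op's assumed with_right_bound attribute (intervals closed on the right):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Maps one value to its bucket index given sorted boundaries.
// right_bound == true  -> buckets are (.., b] (value equal to a boundary
//                         falls into the bucket ending at that boundary)
// right_bound == false -> buckets are [b, ..)
int64_t bucketize_one(double v, const std::vector<double>& bounds, bool right_bound) {
    auto it = right_bound
        ? std::lower_bound(bounds.begin(), bounds.end(), v)
        : std::upper_bound(bounds.begin(), bounds.end(), v);
    return it - bounds.begin();
}
```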
►Nv4 | |
CCTCLoss | |
CHSwish | A HSwish Activation Function f(x) = x * min(max(x + 3, 0), 6) / 6 or f(x) = x * min(ReLU(x + 3), 6) / 6 |
►CInterpolate | |
CInterpolateAttrs | |
CLSTMCell | Class for single lstm cell node |
CMish | A Self Regularized Non-Monotonic Neural Activation Function f(x) = x * tanh(log(exp(x) + 1.)) |
CNonMaxSuppression | NonMaxSuppression operation |
CProposal | |
CRange | Range operation, analogous to arange() in NumPy |
CReduceL1 | Reduction operation using L1 norm: L1(x) = sum(abs(x)) if all dimensions are specified for the normalization |
CReduceL2 | Reduction operation using L2 norm: L2(x) = sqrt(sum(x^2)) |
CSoftPlus | A Self Regularized Non-Monotonic Neural Activation Function f(x) = ln(exp(x) + 1.) |
CSwish | A Swish Activation Function f(x) = x / (1.0 + exp(-beta * x)) or f(x) = x * sigmoid(beta * x) |
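The v4 activation formulas quoted above (SoftPlus, Mish, HSwish, Swish) can likewise be sketched as scalar reference functions mirroring the documented math, not the nGraph kernels:

```cpp
#include <cmath>

// f(x) = ln(exp(x) + 1)
double softplus(double x) { return std::log(std::exp(x) + 1.0); }

// f(x) = x * tanh(ln(exp(x) + 1))
double mish(double x) { return x * std::tanh(softplus(x)); }

// f(x) = x * min(max(x + 3, 0), 6) / 6
double hswish(double x) {
    return x * std::fmin(std::fmax(x + 3.0, 0.0), 6.0) / 6.0;
}

// f(x) = x / (1 + exp(-beta * x))  ==  x * sigmoid(beta * x)
double swish(double x, double beta) {
    return x / (1.0 + std::exp(-beta * x));
}
```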
►Nv5 | |
CBatchNormInference | |
CGatherND | GatherND operation |
CGRUSequence | |
CHSigmoid | A HSigmoid Activation Function f(x) = min(max(x + 3, 0), 6) / 6 or f(x) = min(ReLU(x + 3), 6) / 6 |
CLogSoftmax | |
►CLoop | Iterate a body over tensors, accumulating into tensors |
CSpecialBodyPorts | Allows defining the purpose of inputs/outputs in the body |
CLSTMSequence | Class for lstm sequence node |
CNonMaxSuppression | NonMaxSuppression operation |
CRNNSequence | |
CRound | Elementwise round operation. Each value is rounded to the nearest integer; ties are resolved according to the 'mode' attribute: 'HALF_TO_EVEN' rounds halves to the nearest even integer, 'HALF_AWAY_FROM_ZERO' rounds so that the result heads away from zero |
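The two half-rounding rules named in v5::Round's 'mode' attribute correspond to the two standard C++ rounding primitives, which can serve as a scalar sketch (assuming the default FE_TONEAREST floating-point environment):

```cpp
#include <cmath>

// Under the default rounding mode, nearbyint rounds halves to even.
double round_half_to_even(double x) { return std::nearbyint(x); }

// std::round always rounds halves away from zero.
double round_half_away_from_zero(double x) { return std::round(x); }
```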
►Nv6 | |
CAssign | Assign operation sets an input value to the variable with variable_id |
CCTCGreedyDecoderSeqLen | Operator performing CTCGreedyDecoder |
►CExperimentalDetectronDetectionOutput | An operation ExperimentalDetectronDetectionOutput, according to the repository https://github.com/openvinotoolkit/training_extensions (see pytorch_toolkit/instance_segmentation/segmentoly/rcnn/detection_output.py) |
CAttributes | Structure that specifies attributes of the operation |
►CExperimentalDetectronGenerateProposalsSingleImage | An operation ExperimentalDetectronGenerateProposalsSingleImage, according to the repository https://github.com/openvinotoolkit/training_extensions (see pytorch_toolkit/instance_segmentation/segmentoly/rcnn/proposal.py) |
CAttributes | Structure that specifies attributes of the operation |
►CExperimentalDetectronPriorGridGenerator | An operation ExperimentalDetectronPriorGridGenerator, according to the repository https://github.com/openvinotoolkit/training_extensions (see pytorch_toolkit/instance_segmentation/segmentoly/rcnn/prior_box.py) |
CAttributes | Structure that specifies attributes of the operation |
►CExperimentalDetectronROIFeatureExtractor | An operation ExperimentalDetectronROIFeatureExtractor, according to the repository https://github.com/openvinotoolkit/training_extensions (see the file pytorch_toolkit/instance_segmentation/segmentoly/rcnn/roi_feature_extractor.py) |
CAttributes | Structure that specifies attributes of the operation |
CExperimentalDetectronTopKROIs | An operation ExperimentalDetectronTopKROIs, according to the repository https://github.com/openvinotoolkit/training_extensions (see pytorch_toolkit/instance_segmentation/segmentoly/rcnn/roi_feature_extractor.py) |
CGatherElements | GatherElements operation |
CMVN | Operator performing Mean Variance Normalization |
CReadValue | ReadValue operation gets an input value from the variable with variable_id and returns it as an output |
CAssignBase | |
CDetectionOutputAttrs | |
COp | Root of all actual ops |
CPriorBoxAttrs | |
CPriorBoxClusteredAttrs | |
CProposalAttrs | |
CReadValueBase | |
CSink | Root of nodes that can be sink nodes |
CAutoBroadcastSpec | Implicit broadcast specification |
CBroadcastModeSpec | Implicit broadcast specification |
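Many of the elementwise base classes above note that implicit broadcast of input tensors is supported through one of the AutoBroadcast modes. A sketch of the NumPy-style rule (the assumed behaviour of the NUMPY mode): shapes are aligned from the right, and a dimension pair is compatible when the sizes are equal or when either is 1:

```cpp
#include <cstddef>
#include <vector>

// Computes the broadcast shape of a and b into out.
// Returns false if the shapes are incompatible.
bool broadcast_shape(const std::vector<std::size_t>& a,
                     const std::vector<std::size_t>& b,
                     std::vector<std::size_t>& out) {
    std::size_t n = a.size() > b.size() ? a.size() : b.size();
    out.assign(n, 1);
    for (std::size_t i = 0; i < n; ++i) {
        // Walk both shapes from the trailing dimension; missing dims are 1.
        std::size_t da = i < a.size() ? a[a.size() - 1 - i] : 1;
        std::size_t db = i < b.size() ? b[b.size() - 1 - i] : 1;
        if (da != db && da != 1 && db != 1)
            return false;
        out[n - 1 - i] = da > db ? da : db;
    }
    return true;
}
```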
►Npass | |
CConstantFolding | Constant folding iterates over the function and tries to evaluate nodes with constant inputs. Such nodes are then replaced with new Constants containing the result of a folded operation |
CConvertFP32ToFP16 | |
CMatcherPass | MatcherPass is a basic block for pattern-based transformations. It describes a pattern and the action applied when the pattern is matched |
CGraphRewrite | GraphRewrite is a container for MatcherPasses that allows running them on a Function efficiently |
CRecurrentGraphRewrite | |
CLowLatency | The transformation finds all TensorIterator layers in the network, processes all back edges that describe a connection between a Result and a Parameter of the TensorIterator body, and inserts a ReadValue layer between the Parameter and the layers that follow it, and an Assign layer after the layers that precede the Result layer. Supported platforms: CPU, GNA |
CManager | |
CPassBase | |
CFunctionPass | |
CPassConfig | Class representing a transformation config that is used for disabling/enabling transformations registered inside pass::Manager, and also allows setting a callback for all transformations or for a particular transformation |
CValidate | The Validate pass performs sanity checks on attributes and inputs, and computes output shapes and element types for all computation nodes in a given computation graph |
CVisualizeTree | |
►Npattern | |
►Nop | |
CAny | |
CAnyOf | |
CAnyOutput | Matches any output of a node |
CBranch | |
CCapture | |
CLabel | |
COr | |
CPattern | |
CSkip | |
CTrue | The match always succeeds |
CWrapType | |
CMatcherState | |
CMatcher | |
CRecurrentMatcher | |
►Nruntime | The objects used for executing the graph |
CAlignedBuffer | Allocates a block of memory on the specified alignment. The actual size of the allocated memory is larger than the requested size by the alignment, so allocating 1 byte on 64 byte alignment will allocate 65 bytes |
CHostTensor | |
CSharedBuffer | SharedBuffer class to store a pointer to a pre-allocated buffer |
CTensor | |
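runtime::AlignedBuffer's over-allocation scheme described above (the actual allocation is the requested size plus the alignment, so 1 byte at 64-byte alignment costs 65 bytes) can be sketched as reserving the padded block and rounding the base pointer up to the next multiple of the alignment:

```cpp
#include <cstdint>
#include <cstdlib>

struct AlignedBlock {
    void* base;     // pointer returned by malloc (pass this to free)
    void* aligned;  // first address inside the block meeting the alignment
};

// Over-allocate by `alignment` bytes, then round the base address up.
AlignedBlock allocate_aligned(std::size_t size, std::size_t alignment) {
    AlignedBlock blk;
    blk.base = std::malloc(size + alignment);
    std::uintptr_t p = reinterpret_cast<std::uintptr_t>(blk.base);
    blk.aligned =
        reinterpret_cast<void*>((p + alignment - 1) / alignment * alignment);
    return blk;
}
```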
CValueAccessor | Provides access to an attribute of type AT as a value accessor type VAT |
CValueAccessor< void > | ValueAccessor<void> provides an accessor for values that do not have get/set methods via AttributeVisitor.on_adapter |
CValueAccessor< void * > | |
CDirectValueAccessor | |
CIndirectScalarValueAccessor | |
CIndirectVectorValueAccessor | |
CAttributeAdapter | An AttributeAdapter "captures" an attribute as an AT& and makes it available as a ValueAccessor<VAT> |
CEnumAttributeAdapterBase | Access an enum via a string |
CVisitorAdapter | Adapters will see visitor |
CAttributeAdapter< float > | |
CAttributeAdapter< double > | Access a double as a double |
CAttributeAdapter< std::string > | Access a string as a string |
CAttributeAdapter< bool > | Access a bool as a bool |
CAttributeAdapter< int8_t > | Access an int8_t as an int64_t |
CAttributeAdapter< int16_t > | Access an int16_t as an int64_t |
CAttributeAdapter< int32_t > | Access an int32_t as an int64_t |
CAttributeAdapter< int64_t > | Access an int64_t as an int64_t |
CAttributeAdapter< uint8_t > | Access a uint8_t as an int64_t |
CAttributeAdapter< uint16_t > | Access a uint16_t as an int64_t |
CAttributeAdapter< uint32_t > | Access a uint32_t as an int64_t |
CAttributeAdapter< uint64_t > | Access a uint64_t as an int64_t |
CAttributeAdapter< std::vector< int8_t > > | Access a vector<int8_t> |
CAttributeAdapter< std::vector< int16_t > > | Access a vector<int16_t> |
CAttributeAdapter< std::vector< int32_t > > | Access a vector<int32_t> |
CAttributeAdapter< std::vector< int64_t > > | Access a vector<int64_t> |
CAttributeAdapter< std::vector< uint8_t > > | Access a vector<uint8_t> |
CAttributeAdapter< std::vector< uint16_t > > | Access a vector<uint16_t> |
CAttributeAdapter< std::vector< uint32_t > > | Access a vector<uint32_t> |
CAttributeAdapter< std::vector< uint64_t > > | Access a vector<uint64_t> |
CAttributeAdapter< std::vector< float > > | Access a vector<float> |
CAttributeAdapter< std::vector< double > > | Access a vector<double> |
CAttributeAdapter< std::vector< std::string > > | Access a vector<string> |
CAttributeVisitor | Visits the attributes of a node, primarily for serialization-like tasks |
CAxisSet | A set of axes |
CAttributeAdapter< AxisSet > | |
CAxisVector | A vector of axes |
CAttributeAdapter< AxisVector > | |
CCheckLocInfo | |
CCheckFailure | Base class for check failure exceptions |
CCoordinate | Coordinates for a tensor element |
CAttributeAdapter< Coordinate > | |
CCoordinateDiff | A difference (signed) of tensor element coordinates |
CAttributeAdapter< CoordinateDiff > | |
CDimension | Class representing a dimension, which may be dynamic (undetermined until runtime), in a shape or shape-like object |
CAttributeAdapter< reduction::Type > | |
CEnumNames | |
►CEvaluator | Execute handlers on a subgraph to compute values |
CExecuteInst | All arguments have been handled; execute the node handler |
CInst | Instructions for the evaluation state machine |
CValueInst | Ensure value has been analyzed |
Cngraph_error | Base error for ngraph runtime errors |
Cunsupported_op | |
CFactoryRegistry | Registry of factories that can construct objects derived from BASE_TYPE |
CFactoryAttributeAdapter | |
CFunction | A user-defined function |
CAttributeAdapter< std::shared_ptr< Function > > | |
CInterval | Interval arithmetic |
CConstString | |
CLogHelper | |
CLogger | |
CNullLogger | |
CInput | |
COutput | |
CNode | |
CRawNodeOutput | |
CAttributeAdapter< std::shared_ptr< Node > > | Visits a reference to a node that has been registered with the visitor |
CAttributeAdapter< NodeVector > | |
CNodeValidationFailure | |
CInput< Node > | A handle for one of a node's inputs |
CInput< const Node > | A handle for one of a node's inputs |
COutput< Node > | A handle for one of a node's outputs |
COutput< const Node > | |
CAttributeAdapter< op::v1::BinaryConvolution::BinaryConvolutionMode > | |
CAttributeAdapter< op::v0::DepthToSpace::DepthToSpaceMode > | |
CAttributeAdapter< op::v0::Interpolate::InterpolateMode > | |
CAttributeAdapter< op::v4::Interpolate::InterpolateMode > | |
CAttributeAdapter< op::v4::Interpolate::CoordinateTransformMode > | |
CAttributeAdapter< op::v4::Interpolate::NearestMode > | |
CAttributeAdapter< op::v4::Interpolate::ShapeCalcMode > | |
CAttributeAdapter< op::v5::Loop::SpecialBodyPorts > | |
CAttributeAdapter< op::LSTMWeightsFormat > | |
CAttributeAdapter< op::MVNEpsMode > | |
CAttributeAdapter< op::v1::NonMaxSuppression::BoxEncodingType > | |
CAttributeAdapter< op::v3::NonMaxSuppression::BoxEncodingType > | |
CAttributeAdapter< op::v5::NonMaxSuppression::BoxEncodingType > | |
CAttributeAdapter< ParameterVector > | |
CAttributeAdapter< ResultVector > | |
CAttributeAdapter< op::v1::Reverse::Mode > | |
CAttributeAdapter< op::v3::ROIAlign::PoolingMode > | |
CAttributeAdapter< op::v5::Round::RoundMode > | |
CAttributeAdapter< op::v0::SpaceToDepth::SpaceToDepthMode > | |
CAttributeAdapter< op::PadMode > | |
CAttributeAdapter< op::PadType > | |
CAttributeAdapter< op::RoundingType > | |
CAttributeAdapter< op::AutoBroadcastType > | |
CAttributeAdapter< op::BroadcastType > | |
CAttributeAdapter< op::EpsMode > | |
CAttributeAdapter< op::TopKSortType > | |
CAttributeAdapter< op::TopKMode > | |
CAttributeAdapter< op::AutoBroadcastSpec > | |
CAttributeAdapter< op::BroadcastModeSpec > | |
CAttributeAdapter< op::RecurrentSequenceDirection > | |
CAttributeAdapter< std::vector< std::shared_ptr< ngraph::op::util::SubGraphOp::InputDescription > > > | |
CAttributeAdapter< std::vector< std::shared_ptr< ngraph::op::util::SubGraphOp::OutputDescription > > > | |
CVariableInfo | |
CVariable | |
CAttributeAdapter< std::shared_ptr< Variable > > | |
COpSet | Run-time opset information |
CPartialShape | Class representing a shape that may be partially or totally dynamic |
CAttributeAdapter< PartialShape > | |
CAttributeAdapter< std::shared_ptr< runtime::AlignedBuffer > > | |
CShape | Shape for a tensor |
CAttributeAdapter< Shape > | |
CSlicePlan | |
CStrides | Strides for a tensor |
CAttributeAdapter< Strides > | |
Cbfloat16 | |
CAttributeAdapter< element::Type_t > | |
CAttributeAdapter< element::Type > | |
Celement_type_traits | |
Celement_type_traits< element::Type_t::boolean > | |
Celement_type_traits< element::Type_t::bf16 > | |
Celement_type_traits< element::Type_t::f16 > | |
Celement_type_traits< element::Type_t::f32 > | |
Celement_type_traits< element::Type_t::f64 > | |
Celement_type_traits< element::Type_t::i8 > | |
Celement_type_traits< element::Type_t::i16 > | |
Celement_type_traits< element::Type_t::i32 > | |
Celement_type_traits< element::Type_t::i64 > | |
Celement_type_traits< element::Type_t::u1 > | |
Celement_type_traits< element::Type_t::u8 > | |
Celement_type_traits< element::Type_t::u16 > | |
Celement_type_traits< element::Type_t::u32 > | |
Celement_type_traits< element::Type_t::u64 > | |
Cfloat16 | |
CDiscreteTypeInfo | |
Cstopwatch | |
CEnumMask | |
CVariant | |
CVariantImpl | |
CVariantWrapper | |
CVariantWrapper< std::string > | |
CVariantWrapper< int64_t > | |
►Nstd | |
Cnumeric_limits< ngraph::bfloat16 > | |
Cnumeric_limits< ngraph::float16 > | |
Chash< ngraph::DiscreteTypeInfo > | |