Group Execution model utilities

group ov_dev_exec_model

Contains ExecutionNode and its properties.

Variables

static const char ORIGINAL_NAMES[] = "originalLayersNames"

Used to get a string of layer names from the original IR, separated by commas, which were fused/merged into the current executable primitive.

static const char IMPL_TYPE[] = "primitiveType"

Used to get the type of the executable primitive.

static const char OUTPUT_PRECISIONS[] = "outputPrecisions"

Used to get output precisions of the executable primitive.

static const char PERF_COUNTER[] = "execTimeMcs"

Used to get the execution time of the executable primitive.

static const char OUTPUT_LAYOUTS[] = "outputLayouts"

Used to get output layouts of the primitive.

static const char EXECUTION_ORDER[] = "execOrder"

Used to get the execution order of the primitive.

static const char LAYER_TYPE[] = "layerType"

Used to get the type of the primitive.

static const char RUNTIME_PRECISION[] = "runtimePrecision"

Used to get runtime precision of the executable primitive.

class ExecutionNode : public ov::op::Op
#include <exec_model_info.hpp>

The Execution node, which is used to represent a node in the execution graph.

It contains the following types of information in the node runtime information:

  • ExecGraphInfoSerialization::ORIGINAL_NAMES

  • ExecGraphInfoSerialization::IMPL_TYPE

  • ExecGraphInfoSerialization::OUTPUT_PRECISIONS

  • ExecGraphInfoSerialization::PERF_COUNTER

  • ExecGraphInfoSerialization::OUTPUT_LAYOUTS

  • ExecGraphInfoSerialization::EXECUTION_ORDER

  • ExecGraphInfoSerialization::LAYER_TYPE

  • ExecGraphInfoSerialization::RUNTIME_PRECISION

Public Functions

ExecutionNode()

A default constructor with no node inputs and 0 output ports.

ExecutionNode(const ov::OutputVector &arguments, size_t output_size = 1)

Constructs a new execution node with the given parameters.

Parameters
  • arguments[in] Input nodes

  • output_size[in] A number of output ports

std::shared_ptr<ov::Node> clone_with_new_inputs(const ov::OutputVector &inputs) const override

Creates a new execution node with the same state, but different input nodes.

Parameters

inputs[in] The input nodes

Returns

A newly created execution node

virtual bool visit_attributes(ov::AttributeVisitor&) override

Visits attributes of the node.

Parameters

visitor[in] An attribute visitor

Returns

Returns true if an operation has completed successfully

namespace ov


A namespace with const values for Execution Graph parameters names.

Executable Model Info is represented in ov::Model format with general ExecutionNode nodes inside including connections between the nodes. Each node describes an executable hardware-specific primitive and stores its parameters within ExecutionNode::get_rt_info map. There is a list of general keys for the parameters map.

OpenVINO C++ API.


Functions

LP_TRANSFORMATIONS_API void mark_as_bias (const std::shared_ptr< Node > &node)
LP_TRANSFORMATIONS_API bool marked_as_bias (const std::shared_ptr< const Node > &node)
std::ostream &operator<<(std::ostream &out, const Mask &mask)
Mask::Ptr getMask(const Output<const Node> &output)
Mask::Ptr getMask(const Output<Node> &output)
void setMask(Output<Node> output, const Mask::Ptr &mask)
void setMask(Input<Node> node, const Mask::Ptr &mask)
void mark_as_decompression(const std::shared_ptr<Node> &node)
void unmark_as_decompression(const std::shared_ptr<Node> &node)
bool is_decompression(const std::shared_ptr<Node> &node)
void mark_as_dequantization_node(const std::shared_ptr<Node> &node)
bool is_dequantization_node(const std::shared_ptr<Node> &node)
void disable_fp16_compression(const std::shared_ptr<Node> &node)
void enable_fp16_compression(const std::shared_ptr<Node> &node)
bool fp16_compression_is_disabled(const std::shared_ptr<const Node> &node)
void postpone_fp16_compression(RTMap &rt_info)
bool is_fp16_compression_postponed(const RTMap &rt_info)
void do_not_postpone_fp16_compression(RTMap &rt_info)
std::string getFusedNames(const std::shared_ptr<ov::Node> &node)

getFusedNames returns a string with operation names separated by commas in alphabetical order

Parameters

node[in] The node used to get the FusedNames attribute

std::vector<std::string> getFusedNamesVector(const std::shared_ptr<ov::Node> &node)

getFusedNamesVector returns a vector of fused names sorted in alphabetical order

Parameters

node[in] The node used to get the FusedNames attribute

Returns

vector of strings

void mark_shape_subgraph(const std::shared_ptr<Node> &node)
void unmark_shape_subgraph(const std::shared_ptr<Node> &node)
bool is_shape_subgraph(const std::shared_ptr<const Node> &node)
void enable_keep_const_precision(const std::shared_ptr<Node> &node)
void disable_keep_const_precision(const std::shared_ptr<Node> &node)
bool is_keep_const_precision(const std::shared_ptr<const Node> &node)
bool has_nms_selected_indices(const Node *node)
void set_nms_selected_indices(Node *node)
void disable_divide_conversion(const std::shared_ptr<Node> &node)
void enable_divide_conversion(const std::shared_ptr<Node> &node)
bool divide_is_nonconvertible(const std::shared_ptr<Node> &node)
inline bool has_old_api_map_element_type(const std::shared_ptr<Node> &node)
inline OldApiMapElementType get_old_api_map_element_type(const std::shared_ptr<Node> &node)
inline void set_old_api_map_element_type(const std::shared_ptr<Node> &node, const OldApiMapElementType &old_api_map)
inline bool has_old_api_map_order(const std::shared_ptr<Node> &node)
inline OldApiMapOrder get_old_api_map_order(const std::shared_ptr<Node> &node)
inline void set_old_api_map_order(std::shared_ptr<Node> &node, const OldApiMapOrder &old_api_map)
void set_original_precision_attribute(const std::shared_ptr<Node> &node, const element::Type_t original_precision)
void reset_original_precision_attribute(const std::shared_ptr<Node> &node)
element::Type_t get_original_precision(const std::shared_ptr<Node> &node)
bool is_preprocesing_node(const std::shared_ptr<Node> &node)
void set_is_preprocessing_node(std::shared_ptr<Node> node)
std::string getPrimitivesPriority(const std::shared_ptr<Node> &node)

getPrimitivesPriority returns a string with the primitive priorities value

Parameters

node[in] The node used to get the PrimitivesPriority attribute

bool has_strides_prop(const Input<Node> &node)
ov::Strides get_strides_prop(const Input<Node> &node)
void insert_strides_prop(Input<Node> &node, const Strides &strides)
void remove_strides_prop(Input<Node> &node)
void mark_as_no_sinking_node(const std::shared_ptr<Node> &node)
void reset_no_sinking_attribute(const std::shared_ptr<Node> &node)
bool is_sinking_node(const std::shared_ptr<Node> &node)
bool is_sinking_node(const Node *node)
bool is_sinking_node(ov::Output<ov::Node> output)
std::shared_ptr<ov::MappedMemory> load_mmap_object(const std::string &path)

Returns mapped memory for a file at the provided path. Instead of reading the file, we can map its memory via mmap on Linux in order to avoid time-consuming reads and reduce memory consumption.

Parameters

path – Path to the file whose memory will be mapped.

Returns

A MappedMemory shared pointer object which keeps the mapped memory and controls its lifetime.

template<typename A, typename B>
A copy_from(B &b)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const AxisSet &axis_set)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const AxisVector &axis_vector)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const Coordinate &coordinate)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const CoordinateDiff &coordinate_diff)
OPENVINO_API std::ostream & operator<< (std::ostream &str, const Dimension &dimension)

Insert a human-readable representation of a dimension into an output stream.

Inserts the string ? if dimension is dynamic; else inserts dimension.get_length().

Parameters
  • str – The output stream targeted for insertion.

  • dimension – The dimension to be inserted into str.

Returns

A reference to str after insertion.

template<typename Type, typename Value>
std::enable_if<std::is_convertible<Value, std::string>::value, Type>::type as_enum(const Value &value)

Returns the enum value matching the string.

template<typename Value>
const std::string &as_string(Value value)

Returns the string matching the enum value.

static inline std::ostream &write_all_to_stream(std::ostream &str)
template<typename T, typename ...TS>
std::ostream &write_all_to_stream(std::ostream &str, T &&arg, TS&&... args)
template<class T, typename std::enable_if<!std::is_same<typename std::decay<T>::type, std::string>::value>::type* = nullptr>
std::string stringify(T &&arg)
template<class T, typename std::enable_if<std::is_same<typename std::decay<T>::type, std::string>::value>::type* = nullptr>
T &stringify(T &&arg)
void create_extensions(std::vector<Extension::Ptr>&)

The entry point for library with OpenVINO extensions.

Parameters

vector – Vector of extensions

OPENVINO_API void traverse_nodes (const std::shared_ptr< const Model > &p, const std::function< void(const std::shared_ptr< Node > &)> &f)
OPENVINO_API void traverse_nodes (const Model *p, const std::function< void(const std::shared_ptr< Node > &)> &f)
OPENVINO_API void traverse_nodes (const NodeVector &subgraph_results, const std::function< void(const std::shared_ptr< Node > &)> &f, const NodeVector &subgraph_params={})

Visit each node in a sub-graph of the entire graph.

Traverses a sub-graph starting from subgraph_results moving up towards parameter nodes. Traversal stops if it hits a node in subgraph_params.

Most useful for finding parameters of a graph directly from the result nodes, rather than from function parameters, or for extracting a subgraph relevant to the computation of certain outputs.

Parameters
  • subgraph_results – The output nodes of the sub-graph

  • f – Function to execute at each node in the traversal

  • subgraph_params – Input nodes of the sub-graph (optional)

OPENVINO_API void replace_node (const std::shared_ptr< Node > &target, const std::shared_ptr< Node > &replacement, const std::vector< int64_t > &output_order)

Replace the node target with the node replacement, i.e., redirect all users and control dependencies of target to replacement.

This is primarily used in graph-rewriting passes. For example, we might “fuse” two Concat operations as follows:

(Step 0: Original graph)

    A   B
    |   |
    v   v
   N0[Concat, concatenation_axis=3]   C
         |                            |
         v                            v
   N1[Concat, concatenation_axis=3]
      |           |
      v           v
  some_user   another_user

(Step 1: Construct replacement)

shared_ptr<Node> new_N1 = make_shared<op::Concat>({A,B,C},3);

    A   B
    |   |
    v   v
   N0[Concat, concatenation_axis=3]   C
         |                            |
         v                            v
   N1[Concat, concatenation_axis=3]
      |           |
      v           v
  some_user   another_user

    A   B   C
    |   |   |
    v   v   v
   new_N1[Concat, concatenation_axis=3]   (no users yet)

(Step 2: Replace N1 with new_N1)

replace_node(N1, new_N1);

    A   B
    |   |
    v   v
   N0[Concat, concatenation_axis=3]   C
         |                            |
         v                            v
   N1[Concat, concatenation_axis=3]   (no users)

    A   B   C
    |   |   |
    v   v   v
   new_N1[Concat, concatenation_axis=3]
      |           |
      v           v
  some_user   another_user

(Step 3: N0 and N1 are now dead, nodes will be freed)

[happens automatically, once all shared_ptrs to N1 are released]

    A   B   C
    |   |   |
    v   v   v
   new_N1[Concat, concatenation_axis=3]
      |           |
      v           v
  some_user   another_user

NOTE 1: replace_node is not type-safe (the graph is not revalidated). For example, the following is allowed, even if node some_user requires an input of shape 2x2:

(Before)

 A(shape=2x2)  B(shape=3x3)
 |
 v
 some_user(requires 2x2 input)

(After: graph is now invalid)

 replace_node(A, B);

 A(shape=2x2)  B(shape=3x3)
               |
               v
            some_user(requires 2x2 input)
NOTE 2: it is possible to insert a cycle into the graph with replace_node, resulting in an invalid graph. Care must be taken to avoid this. One common example is when you are attempting to insert a new node M "after" a node N. For example, you might expect this to work:

 shared_ptr<Node> M = make_shared<SomeUnaryOp>(N);
 replace_node(N, M);

The problem is that at replacement time, M itself is a user of N. So we end up introducing a cycle as follows:

       N
      |||
      vvv
  other users...

       N------------>M
      |||
      vvv
  other users...

         .----.
        |      |
        |      |
 N      `----->M
               |
               v
          other users...
To avoid the cycle, a valid way to perform the desired insertion is:
   auto new_N = N->clone_with_new_inputs(N->input_values());
   shared_ptr<Node> M = make_shared<SomeUnaryOp>(new_N);
   replace_node(N, M);

Parameters
  • target – Node to be replaced.

  • replacement – Node to replace target with.

  • output_order – Vector that determines the order of the replacement node’s outputs.

OPENVINO_API void replace_node (const std::shared_ptr< Node > &target, const OutputVector &replacement_values)

Replace target.outputs[i] with replacement_values[i] and transfer control dependents.

OPENVINO_API void replace_node (const std::shared_ptr< Node > &target, const std::shared_ptr< Node > &replacement)
OPENVINO_API void replace_nodes (const std::shared_ptr< Model > &f, const std::unordered_map< std::shared_ptr< op::v0::Parameter >, std::shared_ptr< op::v0::Parameter > > &parameter_replacement_map, const std::unordered_map< std::shared_ptr< Node >, std::shared_ptr< Node > > &body_replacement_map)

Replace multiple nodes in a function.

Limitations:

  • No check is made that the replaced nodes in parameter_replacement_map are actually among the bound parameters of f. (If a parameter appears in the map that is not bound by f, it will be silently ignored.)

  • If a parameter node appears as a key in both parameter_replacement_map and in body_replacement_map, behavior is unspecified.

Parameters
  • f – Model where replacement is taking place.

  • parameter_replacement_map – A mapping from parameter shared pointers to parameter shared pointers. For each pair (k,v) in the map, parameter k is replaced by parameter v, except if k==v or k is not a parameter bound by f, in which case the pair (k,v) is ignored.

  • body_replacement_map – A mapping from node shared pointers to node shared pointers. For each pair (k,v) in the map, node k is replaced by node v, except if k==v, in which case the pair (k,v) is ignored. Note that if k is a parameter, its users will be redirected to v, but k will not be replaced in the function’s parameter list.

template<typename T>
std::vector<std::shared_ptr<Node>> topological_sort(T root_nodes)

Topological sort of nodes needed to compute root_nodes.

OPENVINO_API std::shared_ptr< ov::Model > clone_model (const ov::Model &model, std::unordered_map< Node *, std::shared_ptr< Node > > &node_map)
OPENVINO_API std::shared_ptr< ov::Model > clone_model (const ov::Model &model)

The input model is cloned and returned.

OPENVINO_API bool compare_constants (const std::shared_ptr< Node > &n1, const std::shared_ptr< Node > &n2)
OPENVINO_API bool replace_output_update_name (Output< Node > node, const Output< Node > &node_input)
OPENVINO_API bool replace_node_update_name (const std::shared_ptr< Node > &target, const std::shared_ptr< Node > &replacement)
OPENVINO_API void serialize (const std::shared_ptr< const ov::Model > &m, const std::string &xml_path, const std::string &bin_path="", ov::pass::Serialize::Version version=ov::pass::Serialize::Version::UNSPECIFIED)

Serialize the given model into IR. The generated .xml and .bin files will be saved into the provided paths. This method serializes the model "as-is", which means no weight compression or other transformations are applied. It is recommended to use the ov::save_model function instead of ov::serialize, because it is aligned with the default model conversion flow.

Parameters
  • m – Model which will be converted to IR representation.

  • xml_path – Path where .xml file will be saved.

  • bin_path – Path where .bin file will be saved (optional). The same name as for xml_path will be used by default.

  • version – Version of the generated IR (optional).

OPENVINO_API void save_model (const std::shared_ptr< const ov::Model > &model, const std::string &output_model, bool compress_to_fp16=true)

Save the given model into IR. Floating point weights are compressed to FP16 by default. This method saves a model to IR, applying all necessary transformations that are usually applied in the model conversion flow provided by the mo tool. In particular, floating point weights are compressed to FP16.

Parameters
  • model – Model which will be converted to IR representation.

  • output_model – Path to the output model file; must have the .xml extension.

  • compress_to_fp16 – Whether to compress floating point weights to FP16 (true by default).

OPENVINO_API std::ostream & operator<< (std::ostream &str, const Interval &interval)
std::shared_ptr<Model> clone_ov_model(const Model &func, std::unordered_map<Node*, std::shared_ptr<Node>> &node_map)
OPENVINO_API std::ostream & operator<< (std::ostream &, const Model &)
OPENVINO_API ov::Dimension get_batch (const std::shared_ptr< const ov::Model > &f)

Helper method to get associated batch size for a Model.

Checks the layout of each parameter in a Model and extracts the value for the N (batch) dimension. All values are then merged and returned.

Throws

ov::AssertFailure – with details in case of error. Possible errors are:

  • There is no parameter with a layout set. The Model shall have at least one parameter with a layout containing the ‘N’ dimension. The recommended fix is to use the Parameter::set_layout API, e.g. model->get_parameters()[some_index]->set_layout("NCHW");

  • Several parameters have conflicting N dimensions, e.g. param1 NCHW{1,3,224,224} and param2 NCHW{2,3,224,224}. This is ambiguous; most probably the first dimension is incorrectly marked as ‘batch’ (N) in some layout. The user shall fix this before using ‘get_batch’ (in the example above, correct the layout of param2 from ‘NCHW’ to ‘CHWN’).

Parameters

f – Model in which to look for the batch_size value

Returns

A Dimension representing the current batch size. Can be a static number or dynamic.

OPENVINO_API void set_batch (const std::shared_ptr< ov::Model > &model, ov::Dimension batch_size)

Helper method to set batch size to a Model.

Checks layout of each parameter in a Model and sets value for N (B) dimension. Then performs validation and type propagation

Throws

ov::AssertFailure – with details in case of error. Possible errors are:

  • There is no parameter with the N dimension in its layout. The Model shall have at least one parameter with a layout containing the ‘N’ dimension. The recommended fix is to use the Parameter::set_layout API, e.g. model->get_parameters()[some_index]->set_layout("NCHW");

  • Several parameters have conflicting N dimensions, e.g. param1 NCHW{1,3,224,224} and param2 NCHW{3,224,224,1}. This is ambiguous (1 != 3); most probably some dimension is incorrectly marked as ‘batch’ (N) in some layout. The user shall fix this before using ‘set_batch’ (in the example above, correct the layout of param2 from ‘NCHW’ to ‘CHWN’).

  • Validation fails after setting batch_size. The Model enters an inconsistent state after the new batch size value is applied. A possible reason is that a layout was not set for some parameters, or the batch size cannot be applied to the model at all.

Parameters
  • model – model where to set batch_size value

  • batch_size – Batch size value. For dynamic batch size, Dimension::dynamic() can be passed.

OPENVINO_API std::string node_validation_failure_loc_string (const Node *node)
OPENVINO_API std::ostream & operator<< (std::ostream &, const Node &)
OPENVINO_API std::ostream & operator<< (std::ostream &, const Node *)
void OPENVINO_API check_new_args_count (const Node *const node, const OutputVector &new_args)

Check whether the new arguments’ size matches the node’s input count.

This check is required in cloning ov::Node.

Parameters
  • node – Pointer to node.

  • new_args – Vector with new outputs to check.

OPENVINO_API std::ostream & operator<< (std::ostream &out, const Input< Node > &input)
OPENVINO_API std::ostream & operator<< (std::ostream &out, const Input< const Node > &input)
OPENVINO_API std::ostream & operator<< (std::ostream &out, const Output< Node > &output)
OPENVINO_API std::ostream & operator<< (std::ostream &out, const Output< const Node > &output)
OPENVINO_API OutputVector as_output_vector (const NodeVector &args)
OPENVINO_API NodeVector as_node_vector (const OutputVector &values)
OPENVINO_API ResultVector as_result_vector (const OutputVector &values)

Returns a ResultVector referencing values.

OPENVINO_API PartialShape operator+ (const PartialShape &s1, const PartialShape &s2)

Elementwise addition of two PartialShape objects.

  • If s1 or s2 has dynamic rank, returns PartialShape::dynamic().

  • If s1 and s2 both have static rank, and their ranks are unequal, throws std::invalid_argument.

  • If s1 and s2 both have static rank, and their ranks are equal, returns a new shape whose ith dimension is s1[i] + s2[i].

Parameters
  • s1 – Left operand for addition.

  • s2 – Right operand for addition.

Throws

std::invalid_argument – If s1 and s2 have inconsistent ranks.

Returns

The result of elementwise adding s1 to s2 (see description).

OPENVINO_API std::ostream & operator<< (std::ostream &str, const PartialShape &shape)

Inserts a human-readable representation of a PartialShape into an output stream.

The output to the stream is in “informal” notation. In other words:

  • If shape has dynamic rank, inserts the string ?.

  • If shape has static rank, inserts the string {, then inserts each dimension of shape into the output stream separated by commas, then inserts }.

Example:

PartialShape s1{PartialShape::dynamic()};
PartialShape s2{};
PartialShape s3{1,Dimension::dynamic(),2,3};
PartialShape s4{2,3,4};
std::cout << s1 << std::endl
          << s2 << std::endl
          << s3 << std::endl
          << s4 << std::endl;

Output:

?
{}
{1,?,2,3}
{2,3,4}
Parameters
  • str – The output stream targeted for insertion.

  • shape – The shape to be inserted into str.

Returns

A reference to str after insertion.

OPENVINO_API void copy_runtime_info (const std::shared_ptr< ov::Node > &from, const std::shared_ptr< ov::Node > &to)
OPENVINO_API void copy_runtime_info (const std::shared_ptr< ov::Node > &from, ov::NodeVector to)
OPENVINO_API void copy_runtime_info (const ov::NodeVector &from, const std::shared_ptr< ov::Node > &to)
OPENVINO_API void copy_runtime_info (const ov::NodeVector &from, ov::NodeVector to)
OPENVINO_API void copy_output_runtime_info (const ov::OutputVector &from, ov::OutputVector to)
OPENVINO_API std::ostream & operator<< (std::ostream &os, const RuntimeAttribute &attrubute)
template<typename ForwardIt>
size_t shape_size(ForwardIt start_dim, const ForwardIt end_dim)

Number of elements in a subset of dimensions of a shape. Returns a product of dimensions in a range [start_dim;end_dim)

template<typename SHAPE_TYPE>
size_t shape_size(const SHAPE_TYPE &shape)

Number of elements spanned by a shape.

template<typename SHAPE_TYPE>
std::vector<size_t> row_major_strides(const SHAPE_TYPE &shape)

Row-major strides for a shape.

template<typename SHAPE_TYPE>
size_t row_major_stride(const SHAPE_TYPE &shape, size_t axis)
template<typename SHAPE_TYPE>
inline bool is_scalar(const SHAPE_TYPE &shape)
template<typename SHAPE_TYPE>
inline bool is_vector(const SHAPE_TYPE &shape)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const Shape &shape)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const Strides &strides)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const DiscreteTypeInfo &info)
template<typename Type, typename Value>
bool is_type(Value value)

Tests if value is a pointer or shared_ptr that can be statically cast to Type.

template<typename Type, typename Value>
Type *as_type(Value value)

Casts a Value* to a Type* if it is of type Type, nullptr otherwise.
template<typename T, typename U>
auto as_type_ptr(const U &value) -> decltype(::ov::util::AsTypePtr<U>::template call<T>(value))

Casts a std::shared_ptr<Value> to a std::shared_ptr<Type> if it is of type Type, nullptr otherwise

OPENVINO_API PartialShape infer_convolution_forward (const Node *node, const PartialShape &data_batch_shape, const Strides &data_dilation, const CoordinateDiff &data_padding_below, const CoordinateDiff &data_padding_above, const PartialShape &filters_shape, const Strides &filter_strides, const Strides &filter_dilation)
OPENVINO_API void infer_auto_padding (const Shape &image_shape, const Shape &filter_shape, const Strides &filter_strides, const Strides &filter_dilations, const op::PadType pad_type, CoordinateDiff &padding_above, CoordinateDiff &padding_below)
OPENVINO_API int64_t normalize_axis (const Node *node, std::int64_t axis, const Rank &tensor_rank)

Handle out of range axis.

Parameters
  • node[in] The node with requested axis.

  • axis[in] The requested axis value.

  • tensor_rank[in] The corresponding tensor rank.

Returns

The normalized axis value. The axis must be in the range [-tensor_rank, tensor_rank-1], otherwise an error is raised. A negative axis counts from the last axis: tensor_rank is added to it.

OPENVINO_API std::vector< size_t > normalize_axes (const std::string &node_description, const std::vector< int64_t > &axes, const Rank &tensor_rank)

Handle out of range axes in vector.

Parameters
  • node_description[in] The name of node with requested axes.

  • axes[in] The requested vector of axes.

  • tensor_rank[in] The corresponding tensor rank.

Returns

The normalized vector of axes. Any negative axis counts from the last axis: tensor_rank is added to it.

OPENVINO_API int64_t normalize_axis (const std::string &node_description, std::int64_t axis, const Rank &tensor_rank)

Handle out of range axis.

Parameters
  • node_description[in] The name of the node with the requested axis.

  • axis[in] The requested axis value.

  • tensor_rank[in] The corresponding tensor rank.

Returns

The normalized axis value. The axis must be in the range [-tensor_rank, tensor_rank-1], otherwise an error is raised. A negative axis counts from the last axis: tensor_rank is added to it.

OPENVINO_API int64_t normalize_axis (const Node *node, std::int64_t axis, std::uint64_t tensor_rank, std::int64_t axis_range_min, std::int64_t axis_range_max)

Handle out of range axis.

Parameters
  • node[in] The node with requested axis.

  • axis[in] The requested axis value.

  • tensor_rank[in] The corresponding tensor rank.

  • axis_range_min[in] The min value of accepted range for axis.

  • axis_range_max[in] The max value of accepted range for axis.

Returns

The normalized axis value. The axis must be in the range [axis_range_min, axis_range_max], otherwise an error is raised. A negative axis counts from the last axis: tensor_rank is added to it.

OPENVINO_API int64_t normalize_axis (const std::string &node_description, std::int64_t axis, std::uint64_t tensor_rank, std::int64_t axis_range_min, std::int64_t axis_range_max)

Handle out of range axis.

Parameters
  • node_description[in] The name of node with requested axis.

  • axis[in] The requested axis value.

  • tensor_rank[in] The corresponding tensor rank.

  • axis_range_min[in] The min value of accepted range for axis.

  • axis_range_max[in] The max value of accepted range for axis.

Returns

The normalized axis value. The axis must be in the range [axis_range_min, axis_range_max], otherwise an error is raised. A negative axis counts from the last axis: tensor_rank is added to it.

OPENVINO_API void normalize_axes (const Node *node, const int64_t &tensor_rank, std::vector< int64_t > &axes)

Handle out of range axes in a vector. If any axis in the vector is negative, it counts from the last axis: tensor_rank is added to it. Changes the axes vector in place.

Parameters
  • node[in] The node with requested axes.

  • tensor_rank[in] The corresponding tensor rank.

  • axes[inout] The requested vector of axes.

OPENVINO_API bool evaluate_as_partial_shape (const Output< Node > &output, PartialShape &pshape)

Evaluates lower and upper value estimations for the output tensor. The estimation is represented as a partial shape object using Dimension(min, max) for each element.

Parameters
  • outputNode output pointing to the tensor for estimation.

  • pshape – Resulting estimation would be stored in this PartialShape.

Returns

Boolean status indicating whether value evaluation was successful.

OPENVINO_API std::shared_ptr< op::v0::Constant > get_constant_from_source (const Output< Node > &source)

Runs an estimation of the source tensor. If both bounds are successfully calculated and are equal, returns a Constant operation created from the resulting bound; otherwise returns nullptr.

OPENVINO_API bool default_label_evaluator (const Node *node, TensorLabelVector &output_labels)

Propagates the value label from input 0 to the only output through an operation. Not applicable for operations which require value interaction (example: mathematical operations). Can be used for movement operations (example: gathering, shape change).

Parameters
  • node – Operation to be performed

  • output_labels – Vector of TensorLabel objects representing resulting value labels

Returns

Boolean status indicating whether label evaluation was successful.

OPENVINO_API void generate_transpose_default_order (std::vector< int64_t > &axes_order, const size_t length)

Generates transpose default axes order at end of input vector.

The default axes order is a decreasing sequence of numbers starting from length - 1.

Parameters
  • axes_order – Vector where default order will be generated.

  • length – Sequence length of axes order.

OPENVINO_API bool is_valid_axes_order (const std::vector< int64_t > &axes_order, const size_t size)

Check whether a vector of axes order has valid values.

An axes order has to contain unique numbers in the range [0, size).

Parameters
  • axes_order – Vector with axes order to check.

  • size – Rank size of the transpose input.

Returns

true if axes order is valid otherwise false.

OPENVINO_API bool has_no_labels (const TensorLabel &labels)

Checks whether a label tensor contains no labels.

Parameters

labels – Label tensor for check.

Returns

True if there are no labels, otherwise false.

OPENVINO_API std::ostream & operator<< (std::ostream &s, const Version &version)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const std::map< std::string, Version > &versions)
OPENVINO_API_C (const Version) get_openvino_version() noexcept

Gets the current OpenVINO version.

Returns

The current OpenVINO version

OPENVINO_API std::ostream & operator<< (std::ostream &s, const op::v1::BinaryConvolution::BinaryConvolutionMode &type)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const op::v0::DepthToSpace::DepthToSpaceMode &type)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const op::v9::GridSample::InterpolationMode &type)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const op::v9::GridSample::PaddingMode &type)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const op::v0::Interpolate::InterpolateMode &type)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const op::LSTMWeightsFormat &type)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const op::v8::MatrixNms::DecayFunction &type)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const op::v8::MatrixNms::SortResultType &type)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const op::v1::NonMaxSuppression::BoxEncodingType &type)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const op::v3::NonMaxSuppression::BoxEncodingType &type)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const op::v5::NonMaxSuppression::BoxEncodingType &type)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const op::v9::NonMaxSuppression::BoxEncodingType &type)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const op::v1::Reverse::Mode &type)
std::ostream &operator<<(std::ostream &s, const op::v3::ROIAlign::PoolingMode &mode)
std::ostream &operator<<(std::ostream &s, const op::v9::ROIAlign::PoolingMode &mode)
std::ostream &operator<<(std::ostream &s, const op::v9::ROIAlign::AlignedMode &mode)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const op::v5::Round::RoundMode &type)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const op::v0::SpaceToDepth::SpaceToDepthMode &type)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const op::util::InterpolateBase::InterpolateMode &type)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const op::util::InterpolateBase::CoordinateTransformMode &type)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const op::util::InterpolateBase::NearestMode &type)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const op::util::InterpolateBase::ShapeCalcMode &type)
OPENVINO_API std::ostream & operator<< (std::ostream &s, const op::util::MulticlassNmsBase::SortResultType &type)
void OPENVINO_API mark_as_precision_sensitive (ov::Input< ov::Node > node_input)
void OPENVINO_API unmark_as_precision_sensitive (ov::Input< ov::Node > node_input)
bool OPENVINO_API is_precision_sensitive (const ov::Input< ov::Node > &node_input)
OPENVINO_API void set_up_symbolic_info (const std::shared_ptr< ov::Model > &model, const std::shared_ptr< ov::TableOfEquivalence > &table)
OPENVINO_API void set_up_symbolic_info (const ov::Output< ov::Node > &output, const std::shared_ptr< ov::TableOfEquivalence > &table)
OPENVINO_API void populate_tensor_with_missing_labels (ov::descriptor::Tensor &tensor)
OPENVINO_API bool skip_invalidation (const ov::descriptor::Tensor &tensor)
OPENVINO_API std::shared_ptr< ov::TableOfEquivalence > table_of_equivalence (const std::shared_ptr< ov::Model > &model)
OPENVINO_API std::shared_ptr< ov::TableOfEquivalence > table_of_equivalence (const ov::descriptor::Tensor &tensor)
OPENVINO_API void remove_symbolic_info (const std::shared_ptr< ov::Model > &model, bool outermost_model=true)
const OPENVINO_API OpSet & get_opset1 ()

Returns opset1.

const OPENVINO_API OpSet & get_opset2 ()

Returns opset2.

const OPENVINO_API OpSet & get_opset3 ()

Returns opset3.

const OPENVINO_API OpSet & get_opset4 ()

Returns opset4.

const OPENVINO_API OpSet & get_opset5 ()

Returns opset5.

const OPENVINO_API OpSet & get_opset6 ()

Returns opset6.

const OPENVINO_API OpSet & get_opset7 ()

Returns opset7.

const OPENVINO_API OpSet & get_opset8 ()

Returns opset8.

const OPENVINO_API OpSet & get_opset9 ()

Returns opset9.

const OPENVINO_API OpSet & get_opset10 ()

Returns opset10.

const OPENVINO_API OpSet & get_opset11 ()

Returns opset11.

const OPENVINO_API OpSet & get_opset12 ()

Returns opset12.

const OPENVINO_API OpSet & get_opset13 ()

Returns opset13.

const OPENVINO_API std::map< std::string, std::function< const ov::OpSet &()> > & get_available_opsets ()

Returns map of available opsets.

std::size_t coordinate_index(const Coordinate &c, const Shape &s)
size_t coordinate_offset(const std::vector<size_t> &coordinate, const std::vector<size_t> &strides)

Calculate offset from begin of buffer based on coordinate and strides.

If the coordinate and stride vectors have different sizes, the result is undefined behavior.

Parameters
  • coordinate – Vector with multi-dimension coordinates.

  • strides – Vector with multi-dimension strides

Returns

Offset of element from start of buffer.

template<class T>
constexpr bool is_floating_point()

Check whether T is an OpenVINO floating-point precision.

Returns

True if T is an OpenVINO floating-point precision, false otherwise.

OV_ITT_DOMAIN(OV_PP_CAT(TYPE_LIST_, ov_eval))
template<class TContainer>
constexpr auto make_tensor_accessor(const TContainer &c) -> TensorAccessor<TContainer>

Makes TensorAccessor for specific tensor container.

See also

TensorAccessor for supported types.

Template Parameters

TContainer – Type of tensor containers

Parameters

c – Container of tensors.

Returns

TensorAccessor for the specific container type.

auto make_tensor_accessor() -> const TensorAccessor<void>&

Makes an empty TensorAccessor which returns an empty tensor for any port number.

Returns

TensorAccessor that returns an empty tensor.

template<class T, class TResult = std::vector<T>, class UnaryOperation>
TResult get_raw_data_as(const element::Type_t et, const void *const ptr, const size_t size, UnaryOperation &&func)

Get the raw data as TResult object.

Template Parameters
  • T – TResult data type.

  • TResult – Type of return object, must support creation of std::inserter. Default std::vector<T>.

  • UnaryOperation – Unary function object applied on data with signature (T f(const U u)).

Parameters
  • et – Element type of input data.

  • ptr – Pointer to data of type et.

  • size – Data size as number of elements.

  • func – Unary operation function object.

Throws

ov::AssertionFailure – for an unsupported element type.

Returns

Object of TResult with data from input pointer and transformed by unary operation.

template<class T, class TResult = std::vector<T>, class UnaryOperation = ov::util::Cast<T>>
TResult get_tensor_data_as(const Tensor &t, UnaryOperation &&func = ov::util::Cast<T>())

Get data from an ov::Tensor as a TResult object.

Template Parameters
  • T – TResult data type.

  • TResult – Type of return object, must support creation of std::inserter. Default std::vector<T>.

  • UnaryOperation – Unary function object applied on data with signature (T f(const U u)).

Parameters
  • t – Input tensor.

  • func – Unary operation function object.

Returns

Object of TResult with data from tensor.

FRONTEND_API void shutdown ()

Shuts down OpenVINO by deleting all static-duration objects allocated by the library and releasing dependent resources.

You might want to use this function if you are developing a dynamically-loaded library which should clean up all resources after itself when the library is unloaded.

Note

This function is intended for advanced users who need explicit control over resource unloading.

std::unordered_set<std::string> get_supported_nodes(const std::shared_ptr<const ov::Model> &model, std::function<void(std::shared_ptr<ov::Model>&)> transform, std::function<bool(const std::shared_ptr<ov::Node>)> is_node_supported)

Returns the set of nodes from the original model that are determined to be supported after the transformation pipeline is applied.

Parameters
  • model – Original model

  • transform – Transformation pipeline function

  • is_node_supported – Function returning whether node is supported or not

Returns

Set of strings which contains supported node names

std::shared_ptr<ITensor> make_tensor(const element::Type type, const Shape &shape, const Allocator &allocator = {})

Constructs a Tensor using element type and shape. Allocates internal host storage using the default allocator.

Parameters
  • type – Tensor element type

  • shape – Tensor shape

  • allocator – allocates memory for internal tensor storage

std::shared_ptr<ITensor> make_tensor(const element::Type type, const Shape &shape, void *host_ptr, const Strides &strides = {})

Constructs a Tensor using element type and shape. Wraps externally allocated host memory.

Note

Does not perform memory allocation internally

Parameters
  • type – Tensor element type

  • shape – Tensor shape

  • host_ptr – Pointer to pre-allocated host memory

  • strides – Optional strides in bytes. If omitted, strides are computed automatically from the shape and element size.

std::shared_ptr<ITensor> make_tensor(const std::shared_ptr<ITensor> &other, const Coordinate &begin, const Coordinate &end)

Constructs a region of interest (ROI) tensor from another tensor.

Note

Does not perform memory allocation internally

Note

The number of dimensions in begin and end must match the number of dimensions in other.get_shape()

Parameters
  • other – original tensor

  • begin – start coordinate of ROI object inside of the original object.

  • end – end coordinate of ROI object inside of the original object.

ov::Tensor make_tensor(const ov::SoPtr<ITensor> &tensor)

Constructs public ov::Tensor class.

Parameters

tensor – Tensor implementation

Returns

OpenVINO Tensor

bool check_open_mp_env_vars(bool include_omp_num_threads = true)

Checks whether OpenMP environment variables are defined.

Parameters

include_omp_num_threads – [in] Indicates whether OMP_NUM_THREADS should be included in the check

Returns

True if any OpenMP environment variable is defined, false otherwise

std::vector<int> get_available_numa_nodes()

Returns the available CPU NUMA nodes (on Linux, and on Windows only with TBB; a single node is assumed on all other OSes)

Returns

NUMA nodes

std::vector<int> get_available_cores_types()

Returns the available CPU core types (on Linux and Windows, and only with TBB; a single core type is assumed otherwise).

Returns

Vector of core types

int get_number_of_cpu_cores(bool big_cores_only = false)

Returns the number of physical CPU cores on Linux/Windows (considered more performance-friendly for servers). On other OSes it relies on the parallel API of choice, which usually uses the logical cores. Call with ‘false’ to get the number of physical cores of all types; call with ‘true’ to get the number of physical ‘Big’ cores. The number of ‘Little’ cores is ‘all’ minus ‘Big’.

Parameters

big_cores_only[in] Additionally limits the number of reported cores to the ‘Big’ cores only.

Returns

Number of physical CPU cores.

int get_number_of_logical_cpu_cores(bool big_cores_only = false)

Returns the number of logical CPU cores on Linux/Windows. On other OSes it relies on the parallel API of choice, which uses all logical cores. Call with ‘false’ to get the number of logical cores of all types; call with ‘true’ to get the number of logical ‘Big’ cores. The number of ‘Little’ cores is ‘all’ minus ‘Big’.

Parameters

big_cores_only[in] Additionally limits the number of reported cores to the ‘Big’ cores only.

Returns

Number of logical CPU cores.

int get_number_of_blocked_cores()

Returns the number of blocked CPU cores. Note that this is a temporary interface for performance optimization on a specific platform and may be removed in a future release.

Returns

Number of blocked CPU cores.

bool with_cpu_x86_sse42()

Checks whether CPU supports SSE 4.2 capability.

Returns

True if SSE 4.2 instructions are available, false otherwise

bool with_cpu_x86_avx()

Checks whether CPU supports AVX capability.

Returns

True if AVX instructions are available, false otherwise

bool with_cpu_x86_avx2()

Checks whether CPU supports AVX2 capability.

Returns

True if AVX2 instructions are available, false otherwise

bool with_cpu_x86_avx2_vnni()

Checks whether CPU supports AVX2_VNNI capability.

Returns

True if AVX2_VNNI instructions are available, false otherwise

bool with_cpu_x86_avx512f()

Checks whether CPU supports AVX 512 capability.

Returns

True if AVX512F (foundation) instructions are available, false otherwise

bool with_cpu_x86_avx512_core()

Checks whether CPU supports AVX 512 capability.

Returns

True if AVX512F, AVX512BW, and AVX512DQ instructions are available, false otherwise

bool with_cpu_x86_avx512_core_vnni()

Checks whether CPU supports AVX 512 VNNI capability.

Returns

True if AVX512F, AVX512BW, AVX512DQ, and AVX512_VNNI instructions are available, false otherwise

bool with_cpu_x86_bfloat16()

Checks whether CPU supports BFloat16 capability.

Returns

True if AVX512_BF16 instructions are available, false otherwise

bool with_cpu_x86_avx512_core_fp16()

Checks whether CPU supports fp16 capability.

Returns

True if AVX512_FP16 instructions are available, false otherwise

bool with_cpu_x86_avx512_core_amx_int8()

Checks whether CPU supports AMX int8 capability.

Returns

True if AMX_INT8 instructions are available, false otherwise

bool with_cpu_x86_avx512_core_amx_bf16()

Checks whether CPU supports AMX bf16 capability.

Returns

True if AMX_BF16 instructions are available, false otherwise

bool with_cpu_x86_avx512_core_amx()

Checks whether CPU supports AMX capability.

Returns

True if AMX_INT8 or AMX_BF16 instructions are available, false otherwise

bool is_cpu_map_available()

Checks whether the CPU mapping is available.

Returns

True if the CPU mapping is available, false otherwise

int get_num_numa_nodes()

Get the number of NUMA nodes.

Returns

Number of NUMA nodes

int get_num_sockets()

Get number of sockets.

Returns

Number of sockets

std::vector<std::vector<int>> get_proc_type_table()

Returns a table of the number of processors of each type on Linux/Windows.

Returns

A table of the number of CPU cores of each type, with columns defined by ColumnOfProcessorTypeTable. Two examples of processor type tables:

  1. Processor table of a one-socket desktop CPU:

     ALL_PROC | MAIN_CORE_PROC | EFFICIENT_CORE_PROC | HYPER_THREADING_PROC
        32            8                 16                     8              // Total for one socket

  2. Processor table of a two-socket XEON server:

     ALL_PROC | MAIN_CORE_PROC | EFFICIENT_CORE_PROC | HYPER_THREADING_PROC
        96           48                  0                    48              // Total for two sockets
        48           24                  0                    24              // Socket one
        48           24                  0                    24              // Socket two

std::vector<std::vector<int>> get_org_proc_type_table()

Returns a table of the original numbers of processor types, without filtering out CPU resources occupied by other plugins. The difference from get_proc_type_table is that this function reports the configuration of the current machine. For example, if the GPU plugin occupies all P-cores, the table from get_proc_type_table() contains only one core type; to get the real configuration of the machine, use get_org_proc_type_table().

Returns

A table about number of CPU cores of different types defined with ColumnOfProcessorTypeTable

void reserve_available_cpus(const std::vector<std::vector<int>> streams_info_table, std::vector<std::vector<int>> &stream_processors, const int cpu_status = NOT_USED)

Get and reserve available CPU IDs.

Parameters
  • streams_info_table – [in] Streams information table.

  • stream_processors – [out] Processors grouped by stream, used for core binding in the CPU streams executor.

  • cpu_status – [in] CPU status to set.

void set_cpu_used(const std::vector<int> &cpu_ids, const int used)

Set CPU_MAP_USED_FLAG of cpu_mapping.

Parameters
  • cpu_ids[in] cpus in cpu_mapping.

  • used[in] update CPU_MAP_USED_FLAG of cpu_mapping with this flag bit

int get_socket_by_numa_node(int numa_node_id)

Get socket id by current numa node id.

Parameters

numa_node_id[in] numa node id

Returns

socket id

int get_org_socket_id(int socket_id)

Get the original socket id for the given socket id. The input socket id is recalculated after filtering (e.g. by numactl), while the original socket id is the id before filtering.

Parameters

socket_id[in] socket id

Returns

socket id

int get_org_numa_id(int numa_node_id)

Get the original NUMA node id for the given NUMA node id. The input NUMA node id is recalculated after filtering (e.g. by numactl), while the original NUMA node id is the id before filtering.

Parameters

numa_node_id[in] numa node id

Returns

numa node id

static MemBandwidthPressure MemBandwidthPressureTolerance(const std::shared_ptr<ov::Model> model, const float cache_size, const float memThresholdAssumeLimited = MemBandwidthPressure::LIMITED)
class Allocator
#include <allocator.hpp>

Wraps an allocator implementation to provide a safe way to store an allocator loaded from a shared library. When constructed without parameters, a default allocator based on C++ new/delete is created. Accepts any std::pmr::memory_resource-like allocator.

Public Functions

~Allocator()

Destructor preserves unloading order of implementation object and reference to library.

Allocator()

Default constructor.

Allocator(const Allocator &other) = default

Default copy constructor.

Parameters

other – other Allocator object

Allocator &operator=(const Allocator &other) = default

Default copy assignment operator.

Parameters

other – other Allocator object

Returns

reference to the current object

Allocator(Allocator &&other) = default

Default move constructor.

Parameters

other – other Allocator object

Allocator &operator=(Allocator &&other) = default

Default move assignment operator.

Parameters

other – other Allocator object

Returns

reference to the current object

OPENVINO_SUPPRESS_DEPRECATED_START Allocator(const AllocatorImpl::Ptr &impl)

Constructs Allocator from the initialized std::shared_ptr.

Parameters

impl – Initialized shared pointer

template<typename A, typename std::enable_if<!std::is_convertible<A, AllocatorImpl::Ptr>::value && !std::is_same<typename std::decay<A>::type, Allocator>::value && !std::is_abstract<typename std::decay<A>::type>::value && !std::is_convertible<typename std::decay<A>::type, std::shared_ptr<Base>>::value, bool>::type = true>
inline Allocator(A &&a)

Initialize allocator using any allocator like object.

Template Parameters

A – Type of allocator

Parameters

a – allocator object

OPENVINO_SUPPRESS_DEPRECATED_END void * allocate (const size_t bytes, const size_t alignment=alignof(max_align_t))

Allocates memory.

Parameters
  • bytes – The size in bytes at least to allocate

  • alignment – The alignment of storage

Throws

Exception – if specified size and alignment is not supported

Returns

Handle to the allocated resource

void deallocate(void *ptr, const size_t bytes = 0, const size_t alignment = alignof(max_align_t))

Releases the handle and all associated memory resources which invalidates the handle.

Parameters
  • ptr – The handle to free

  • bytes – The size in bytes that was passed into allocate() method

  • alignment – The alignment of storage that was passed into allocate() method

bool operator==(const Allocator &other) const

Compares with other Allocator.

Parameters

other – Other instance of allocator

Returns

true if and only if memory allocated from one Allocator can be deallocated from the other and vice versa

bool operator!() const noexcept

Checks if current Allocator object is not initialized.

Returns

true if the current Allocator object is not initialized, false otherwise

explicit operator bool() const noexcept

Checks if current Allocator object is initialized.

Returns

true if the current Allocator object is initialized, false otherwise

interface AllocatorImpl : public std::enable_shared_from_this<AllocatorImpl>
#include <allocator.hpp>

Tries to act like std::pmr::memory_resource

Deprecated:

This class will be removed in 2024.0 release

Public Types

using Ptr = std::shared_ptr<AllocatorImpl>

A smart pointer containing AllocatorImpl object.

Public Functions

virtual void *allocate(const size_t bytes, const size_t alignment = alignof(max_align_t)) = 0

Allocates memory.

Parameters
  • bytes – The size in bytes at least to allocate

  • alignment – The alignment of storage

Throws

Exception – if specified size and alignment is not supported

Returns

Handle to the allocated resource

virtual void deallocate(void *handle, const size_t bytes, size_t alignment = alignof(max_align_t)) = 0

Releases the handle and all associated memory resources which invalidates the handle.

Parameters
  • handle – The handle to free

  • bytes – The size in bytes that was passed into allocate() method

  • alignment – The alignment of storage that was passed into allocate() method

virtual bool is_equal(const AllocatorImpl &other) const = 0

Compares with other AllocatorImpl.

Parameters

other – Other instance of allocator

Returns

true if and only if memory allocated from one AllocatorImpl can be deallocated from the other and vice versa

class Any
#include <any.hpp>

This class represents a type-erased object that can hold and work with values of different types.

Public Functions

Any() = default

Default constructor.

Any(const Any &other)

Copy constructor.

Parameters

other – other Any object

Any &operator=(const Any &other)

Copy assignment operator.

Parameters

other – other Any object

Returns

reference to the current object

Any(Any &&other) = default

Default move constructor.

Parameters

other – other Any object

Any &operator=(Any &&other) = default

Default move assignment operator.

Parameters

other – other