Class ov::Tensor#

class Tensor#

Tensor API holding host memory. It can throw exceptions safely for the application, where they are properly handled.

Subclassed by ov::RemoteTensor

Public Functions

Tensor() = default#

Default constructor.

Tensor(const Tensor &other, const std::shared_ptr<void> &so)#

Copy constructor that additionally holds a reference to a new shared object.

Parameters:
  • other – Original tensor

  • so – Shared object

Tensor(const Tensor &other) = default#

Default copy constructor.

Parameters:

other – Another Tensor object

Tensor &operator=(const Tensor &other) = default#

Default copy assignment operator.

Parameters:

other – Another Tensor object

Returns:

reference to the current object

Tensor(Tensor &&other) = default#

Default move constructor.

Parameters:

other – Another Tensor object

Tensor &operator=(Tensor &&other) = default#

Default move assignment operator.

Parameters:

other – Another Tensor object

Returns:

reference to the current object

~Tensor()#

Destructor that preserves the unloading order of the implementation object and the reference to the library.

Tensor(const element::Type &type, const Shape &shape, const Allocator &allocator = {})#

Constructs a Tensor using the element type and shape. Allocates internal host storage using the default allocator.

Parameters:
  • type – Tensor element type

  • shape – Tensor shape

  • allocator – allocates memory for internal tensor storage
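
A minimal usage sketch of the allocating constructor (assuming the umbrella header <openvino/openvino.hpp>):

    #include <algorithm>
    #include <openvino/openvino.hpp>

    // Allocate a 1x3x224x224 f32 tensor with the default allocator.
    ov::Tensor tensor(ov::element::f32, ov::Shape{1, 3, 224, 224});
    float* ptr = tensor.data<float>();          // host pointer to the internal storage
    std::fill_n(ptr, tensor.get_size(), 0.0f);  // e.g. zero-initialize the buffer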

Tensor(const element::Type &type, const Shape &shape, void *host_ptr, const Strides &strides = {})#

Constructs a Tensor using the element type and shape. Wraps pre-allocated host memory.

Note

Does not perform memory allocation internally

Parameters:
  • type – Tensor element type

  • shape – Tensor shape

  • host_ptr – Pointer to pre-allocated host memory with initialized objects

  • strides – Optional strides in bytes. If not specified, strides are computed automatically based on the shape and element size
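
A hedged fragment of the wrapping constructor; the caller owns the buffer and must keep it alive while the tensor is in use:

    #include <vector>
    #include <openvino/openvino.hpp>

    std::vector<float> buffer(2 * 3, 0.0f);
    // Wrap the existing buffer; no allocation or copy is performed.
    ov::Tensor wrapped(ov::element::f32, ov::Shape{2, 3}, buffer.data());
    // Strides default to a dense layout ({12, 4} bytes here); pass ov::Strides to override.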

Tensor(const ov::Output<const ov::Node> &port, const Allocator &allocator = {})#

Constructs a Tensor using a port from a node. Allocates internal host storage using the default allocator.

Parameters:
  • port – port from node

  • allocator – allocates memory for internal tensor storage
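
A sketch of allocating an input tensor from a compiled model's port; the "model.xml" path and "CPU" device name are placeholders:

    #include <openvino/openvino.hpp>

    ov::Core core;
    ov::CompiledModel compiled = core.compile_model("model.xml", "CPU");  // placeholder model and device
    // Element type and shape are taken from the first input port; storage uses the default allocator.
    ov::Tensor input(compiled.input());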

Tensor(const ov::Output<const ov::Node> &port, void *host_ptr, const Strides &strides = {})#

Constructs a Tensor using a port from a node. Wraps pre-allocated host memory.

Note

Does not perform memory allocation internally

Parameters:
  • port – port from node

  • host_ptr – Pointer to pre-allocated host memory with initialized objects

  • strides – Optional strides in bytes. If not specified, strides are computed automatically based on the shape and element size
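
A brief hedged fragment of the wrapping variant, reusing `compiled` from the sketch above and assuming an f32 input; the buffer must be at least as large as the port's tensor:

    std::vector<float> host(ov::shape_size(compiled.input().get_shape()));
    ov::Tensor wrapped_input(compiled.input(), host.data());  // no internal allocation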

Tensor(const Tensor &other, const Coordinate &begin, const Coordinate &end)#

Constructs a region of interest (ROI) tensor from another tensor.

Note

Does not perform memory allocation internally

Note

The number of dimensions in begin and end must match the number of dimensions in other.get_shape()

Parameters:
  • other – original tensor

  • begin – Start coordinate of the ROI inside the original tensor.

  • end – End coordinate of the ROI inside the original tensor.
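
A hedged ROI sketch; the coordinates are illustrative:

    ov::Tensor image(ov::element::u8, ov::Shape{1, 3, 480, 640});
    // A 100x200 region starting at row 10, column 20; shares memory with `image`.
    ov::Tensor roi(image, ov::Coordinate{0, 0, 10, 20}, ov::Coordinate{1, 3, 110, 220});
    bool dense = roi.is_continuous();  // typically false for a cropped ROI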

void set_shape(const ov::Shape &shape)#

Sets a new shape for the tensor; memory is deallocated and reallocated if the new total size is bigger than the previous one.

Note

Memory allocation may happen

Parameters:

shape – A new shape
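
A short fragment illustrating the note above; whether memory is actually reallocated depends on the new total size:

    ov::Tensor t(ov::element::f32, ov::Shape{1, 3, 224, 224});
    t.set_shape(ov::Shape{1, 3, 64, 64});    // smaller total size, existing storage can be reused
    t.set_shape(ov::Shape{1, 3, 448, 448});  // larger total size, memory may be reallocated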

const element::Type &get_element_type() const#
Returns:

A tensor element type

const Shape &get_shape() const#
Returns:

A tensor shape

void copy_to(ov::Tensor dst) const#

Copies the tensor to a destination tensor, which must have the same element type and shape.

Parameters:

dst – destination tensor
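
A minimal copy_to fragment; both tensors have the same element type and shape:

    ov::Tensor src(ov::element::f32, ov::Shape{2, 2});
    ov::Tensor dst(ov::element::f32, ov::Shape{2, 2});
    src.copy_to(dst);  // element-wise copy into the destination tensor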

bool is_continuous() const#

Reports whether the tensor is continuous or not.

Returns:

true if tensor is continuous

size_t get_size() const#

Returns the total number of elements (the product of all dimensions, or 1 for a scalar).

Returns:

The total number of elements

size_t get_byte_size() const#

Returns the size of the current Tensor in bytes.

Returns:

Tensor’s size in bytes

Strides get_strides() const#
Returns:

Tensor’s strides in bytes
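
A fragment illustrating the size and stride queries; the values in the comments assume a dense f32 layout:

    ov::Tensor t(ov::element::f32, ov::Shape{2, 3});
    size_t elements = t.get_size();       // 6
    size_t bytes    = t.get_byte_size();  // 6 * sizeof(float) = 24
    ov::Strides st  = t.get_strides();    // {12, 4} bytes for this dense layout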

void *data(const element::Type &type = {}) const#

Provides access to the underlying host memory.

Note

If the type parameter is specified, the method throws an exception when the fundamental type of the specified type does not match the fundamental type of the tensor element type.

Parameters:

type – Optional type parameter.

Returns:

A host pointer to tensor memory
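
A hedged fragment of untyped access; passing an element type enables the fundamental-type check described above:

    ov::Tensor t(ov::element::f32, ov::Shape{4});
    void* raw     = t.data();                  // no type check
    void* checked = t.data(ov::element::f32);  // throws if the fundamental types do not match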

template<typename T, typename datatype = typename std::decay<T>::type>
inline T *data() const#

Provides access to the underlying host memory cast to type T.

Note

Throws an exception if the specified type does not match the tensor element type

Returns:

A host pointer to tensor memory cast to the specified type T.
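
A minimal typed-access fragment:

    ov::Tensor t(ov::element::f32, ov::Shape{3});
    float* p = t.data<float>();  // throws if float does not match the tensor element type
    p[0] = 1.0f;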

bool operator!() const noexcept#

Checks if current Tensor object is not initialized.

Returns:

true if the current Tensor object is not initialized, false otherwise

explicit operator bool() const noexcept#

Checks if current Tensor object is initialized.

Returns:

true if the current Tensor object is initialized, false otherwise
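
A small fragment illustrating both checks:

    ov::Tensor empty;                                  // default-constructed, not initialized
    ov::Tensor ready(ov::element::f32, ov::Shape{1});
    bool not_initialized = !empty;                     // true
    bool initialized     = static_cast<bool>(ready);   // true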

template<typename T>
inline std::enable_if<std::is_base_of<Tensor, T>::value, bool>::type is() const noexcept#

Checks if the Tensor object can be cast to the type T.

Template Parameters:

T – Type to be checked. Must represent a class derived from the Tensor

Returns:

true if this object can be dynamically cast to the type const T*. Otherwise, false

template<typename T>
inline const std::enable_if<std::is_base_of<Tensor, T>::value, T>::type as() const#

Casts this Tensor object to the type T.

Template Parameters:

T – Type to cast to. Must represent a class derived from the Tensor

Returns:

T object
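
A hedged sketch of the is/as pattern with ov::RemoteTensor; where `tensor` comes from (for example, an inference request or a remote context) is left open:

    if (tensor.is<ov::RemoteTensor>()) {
        ov::RemoteTensor remote = tensor.as<ov::RemoteTensor>();
        // work with device memory through the RemoteTensor interface
    }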

Public Static Functions

static void type_check(const Tensor &tensor)#

Checks the OpenVINO tensor type.

Parameters:

tensor – A tensor whose type will be checked

Throws:

Exception – if the type check for the specified tensor does not pass
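
A hedged fragment of a defensive check; exactly what the base-class check verifies is not specified here, so the catch branch is illustrative:

    ov::Tensor t(ov::element::f32, ov::Shape{1});
    try {
        ov::Tensor::type_check(t);  // throws ov::Exception if the check does not pass
    } catch (const ov::Exception&) {
        // handle a tensor that fails the type check
    }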