Group Inference Request common classes#
- group ov_dev_api_infer_request_api
A set of base and helper classes for implementing common inference request functionality.
-
class IInferRequest#
- #include <iinfer_request.hpp>
An internal inference request API to be implemented by a plugin.
Subclassed by ov::IAsyncInferRequest, ov::ISyncInferRequest
Public Functions
-
virtual void infer() = 0#
Infers the specified input(s) in synchronous mode.
Note
Blocks all methods of InferRequest while the request is ongoing (running or waiting in a queue).
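As a rough illustration of where this method sits in a plugin, here is a minimal sketch of a synchronous request class. It assumes the OpenVINO developer-API header for ov::ISyncInferRequest (one of the subclasses listed above) and that the base class provides the tensor bookkeeping; MyInferRequest and the device calls are hypothetical.

```cpp
#include <openvino/runtime/isync_infer_request.hpp>

// Hypothetical plugin-side request; only the pure virtual methods
// documented on this page are overridden.
class MyInferRequest : public ov::ISyncInferRequest {
public:
    explicit MyInferRequest(const std::shared_ptr<const ov::ICompiledModel>& compiled_model)
        : ov::ISyncInferRequest(compiled_model) {}

    void infer() override {
        // Synchronous contract: the caller is blocked until this returns.
        for (const auto& input : get_inputs()) {
            ov::SoPtr<ov::ITensor> tensor = get_tensor(input);
            // ... copy tensor->data() to the device and execute ...
        }
        // ... write results into the tensors bound to the output ports ...
    }

    std::vector<ov::ProfilingInfo> get_profiling_info() const override {
        return {};  // this sketch collects no per-layer measurements
    }

    std::vector<ov::SoPtr<ov::IVariableState>> query_state() const override {
        return {};  // no stateful (recurrent) operations in this sketch
    }
};
```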
-
virtual std::vector<ov::ProfilingInfo> get_profiling_info() const = 0#
Queries performance measures per layer to identify the most time-consuming operation.
Note
Not all plugins provide meaningful data.
- Returns:
Vector of profiling information for operations in a model.
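For example, a caller might summarize the returned vector as in the sketch below; the ov::ProfilingInfo fields node_name, status, and real_time are assumed from the public runtime headers, and dump_profiling is an illustrative helper.

```cpp
#include <iinfer_request.hpp>
#include <iostream>

// Print only operations that actually ran, with their wall-clock time.
void dump_profiling(const ov::IInferRequest& request) {
    for (const ov::ProfilingInfo& info : request.get_profiling_info()) {
        if (info.status == ov::ProfilingInfo::Status::EXECUTED) {
            std::cout << info.node_name << ": "
                      << info.real_time.count() << " us\n";
        }
    }
}
```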
-
virtual ov::SoPtr<ov::ITensor> get_tensor(const ov::Output<const ov::Node> &port) const = 0#
Gets an input/output tensor for inference.
Note
If the tensor with the specified port is not found, an exception is thrown.
- Parameters:
port – Port of the tensor to get.
- Returns:
Tensor for the specified port.
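A sketch of reading an output back through this method; the f32 element type and the helper name read_output are illustrative.

```cpp
#include <iinfer_request.hpp>
#include <iostream>

void read_output(const ov::IInferRequest& request,
                 const ov::Output<const ov::Node>& output_port) {
    // Throws if output_port does not belong to the request's model.
    ov::SoPtr<ov::ITensor> tensor = request.get_tensor(output_port);
    const float* data = static_cast<const float*>(tensor->data());  // assuming an f32 output
    std::cout << "first value: " << data[0]
              << ", elements: " << tensor->get_size() << "\n";
}
```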
-
virtual void set_tensor(const ov::Output<const ov::Node> &port, const ov::SoPtr<ov::ITensor> &tensor) = 0#
Sets an input/output tensor to infer.
- Parameters:
port – Port of the input or output tensor.
tensor – Reference to a tensor. The element_type and shape of the tensor must match the model’s input/output element_type and size.
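A sketch of binding a freshly allocated host tensor to an input port; it assumes the developer-API allocation helper ov::make_tensor from openvino/runtime/make_tensor.hpp, and the shape and element type are illustrative.

```cpp
#include <algorithm>
#include <iinfer_request.hpp>
#include <openvino/runtime/make_tensor.hpp>

void bind_input(ov::IInferRequest& request,
                const ov::Output<const ov::Node>& input_port) {
    // Element type and shape must match the port (see the parameter note above).
    ov::SoPtr<ov::ITensor> tensor =
        ov::make_tensor(ov::element::f32, ov::Shape{1, 3, 224, 224});
    std::fill_n(static_cast<float*>(tensor->data()), tensor->get_size(), 0.0f);
    request.set_tensor(input_port, tensor);
}
```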
-
virtual std::vector<ov::SoPtr<ov::ITensor>> get_tensors(const ov::Output<const ov::Node> &port) const = 0#
Gets a batch of tensors for input data to infer by input port. The model input must have a batch dimension, and the number of tensors must match the batch size. The current version supports batched tensors for model inputs only. If port is associated with an output (or any other non-input node), an exception is thrown.
- Parameters:
port – Port of the input tensor.
- Returns:
Vector of input tensors for the port.
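A sketch of inspecting the batch bound to an input; streaming an ov::Shape to std::cout is assumed from the public headers, and check_batch is an illustrative helper.

```cpp
#include <iinfer_request.hpp>
#include <iostream>

void check_batch(const ov::IInferRequest& request,
                 const ov::Output<const ov::Node>& input_port) {
    std::vector<ov::SoPtr<ov::ITensor>> tensors = request.get_tensors(input_port);
    std::cout << "tensors in batch: " << tensors.size() << "\n";
    for (const auto& tensor : tensors) {
        std::cout << "  shape: " << tensor->get_shape() << "\n";
    }
}
```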
-
virtual void set_tensors(const ov::Output<const ov::Node> &port, const std::vector<ov::SoPtr<ov::ITensor>> &tensors) = 0#
Sets a batch of tensors for input data to infer by input port. The model input must have a batch dimension, and the number of tensors must match the batch size. The current version supports setting tensors to model inputs only. If port is associated with an output (or any other non-input node), an exception is thrown.
- Parameters:
port – Port of the input tensor.
tensors – Input tensors for the batched infer request. The type of each tensor must match the model input element type and shape (except the batch dimension). The total size of the tensors must match the input size.
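A sketch of feeding a batch of four samples as separate per-sample tensors; the batch size, shape, and the ov::make_tensor helper are the same assumptions as in the set_tensor sketch above.

```cpp
#include <iinfer_request.hpp>
#include <openvino/runtime/make_tensor.hpp>

void bind_batch(ov::IInferRequest& request,
                const ov::Output<const ov::Node>& input_port) {
    std::vector<ov::SoPtr<ov::ITensor>> batch;
    for (size_t i = 0; i < 4; ++i) {  // four per-sample tensors for a model batch size of 4
        batch.push_back(ov::make_tensor(ov::element::f32, ov::Shape{1, 3, 224, 224}));
    }
    request.set_tensors(input_port, batch);  // throws if input_port is not an input
}
```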
-
virtual std::vector<ov::SoPtr<ov::IVariableState>> query_state() const = 0#
Gets the state control interface for the given infer request.
State control is essential for recurrent models.
- Returns:
Vector of Variable State objects.
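A sketch of resetting the variable states between independent sequences of a recurrent model; IVariableState::get_name() and reset() are assumed from the developer API.

```cpp
#include <iinfer_request.hpp>
#include <iostream>

void reset_states(const ov::IInferRequest& request) {
    for (const auto& state : request.query_state()) {
        std::cout << "resetting state: " << state->get_name() << "\n";
        state->reset();  // back to the initial value for the next sequence
    }
}
```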
-
virtual const std::shared_ptr<const ov::ICompiledModel> &get_compiled_model() const = 0#
Gets a pointer to the compiled model (usually, the synchronous request holds the compiled model).
- Returns:
Pointer to the compiled model
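A sketch of walking the model ports through the owning compiled model; ICompiledModel::inputs()/outputs() and Output::get_any_name() are assumed from the developer and public APIs.

```cpp
#include <iinfer_request.hpp>
#include <iostream>

void list_ports(const ov::IInferRequest& request) {
    const std::shared_ptr<const ov::ICompiledModel>& compiled = request.get_compiled_model();
    for (const auto& input : compiled->inputs()) {
        std::cout << "input: " << input.get_any_name() << "\n";
    }
    for (const auto& output : compiled->outputs()) {
        std::cout << "output: " << output.get_any_name() << "\n";
    }
}
```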