Inference Request base classes
- group ov_dev_api_sync_infer_request_api
A set of base and helper classes to implement a synchronous inference request class.
-
class ISyncInferRequest : public ov::IInferRequest
- #include <isync_infer_request.hpp>
Interface for synchronous infer request.
Public Functions
Constructs a synchronous inference request.
- Parameters
compiled_model – Pointer to the compiled model.
-
virtual ov::SoPtr<ov::ITensor> get_tensor(const ov::Output<const ov::Node> &port) const override
Gets an input/output tensor for inference.
Note
If the tensor with the specified port is not found, an exception is thrown.
- Parameters
port – Port of the tensor to get.
- Returns
Tensor for the specified port.
-
virtual void set_tensor(const ov::Output<const ov::Node> &port, const ov::SoPtr<ov::ITensor> &tensor) override
Sets an input/output tensor to infer.
- Parameters
port – Port of the input or output tensor.
tensor – Reference to a tensor. The element_type and shape of a tensor must match the model’s input/output element_type and size.
-
virtual std::vector<ov::SoPtr<ov::ITensor>> get_tensors(const ov::Output<const ov::Node> &port) const override
Gets a batch of tensors for input data to infer by input port. Model input must have batch dimension, and the number of tensors must match the batch size. The current version supports getting tensors for model inputs only. If port is associated with an output (or any other non-input node), an exception is thrown.
- Parameters
port – Port of the input tensor.
- Returns
Vector of tensors.
-
virtual void set_tensors(const ov::Output<const ov::Node> &port, const std::vector<ov::SoPtr<ov::ITensor>> &tensors) override
Sets a batch of tensors for input data to infer by input port. Model input must have batch dimension, and the number of tensors must match the batch size. The current version supports setting tensors to model inputs only. If port is associated with an output (or any other non-input node), an exception is thrown.
- Parameters
port – Port of the input tensor.
tensors – Input tensors for batched infer request. The type of each tensor must match the model input element type and shape (except batch dimension). Total size of tensors must match the input size.
-
virtual const std::vector<ov::Output<const ov::Node>> &get_inputs() const override
Gets inputs for infer request.
- Returns
vector of input ports
-
virtual const std::vector<ov::Output<const ov::Node>> &get_outputs() const override
Gets outputs for infer request.
- Returns
vector of output ports
-
virtual const std::shared_ptr<const ov::ICompiledModel> &get_compiled_model() const override
Gets a pointer to the compiled model (usually the synchronous request holds the compiled model).
- Returns
Pointer to the compiled model
-