OpenVINO™ Inference Request

OpenVINO™ Runtime uses the Infer Request mechanism, which allows running models on different devices in asynchronous or synchronous manner. The ov::InferRequest class is used for this purpose inside OpenVINO™ Runtime. This class allows you to set and get data for model inputs and outputs, and to run inference for the model.

Creating Infer Request

ov::InferRequest can be created from ov::CompiledModel:

auto infer_request = compiled_model.create_infer_request();
infer_request = compiled_model.create_infer_request()
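
For reference, the ov::CompiledModel instance itself is obtained by compiling a model with ov::Core. A minimal sketch, where "model.xml" and "CPU" are placeholder values for your model path and target device:

ov::Core core;
// Compile the model for a target device; the file name and device are example values.
ov::CompiledModel compiled_model = core.compile_model("model.xml", "CPU");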

Run inference

ov::InferRequest supports synchronous and asynchronous modes for inference.

Synchronous mode

You can use ov::InferRequest::infer, which blocks application execution, to infer the model in synchronous mode:

infer_request.infer();
infer_request.infer()

Asynchronous mode

Asynchronous mode can improve an application’s overall frame rate: rather than waiting for inference to complete, the app can keep working on the host while the accelerator is busy. You can use ov::InferRequest::start_async to infer the model in asynchronous mode:

infer_request.start_async();
infer_request.start_async()

In asynchronous mode, the application can wait for inference results in two ways:

  • ov::InferRequest::wait_for - specifies the maximum duration in milliseconds to block the method. The method is blocked until the specified time has passed, or the result becomes available, whichever comes first.

    infer_request.wait_for(std::chrono::milliseconds(10));
    infer_request.wait_for(10)
  • ov::InferRequest::wait - waits until the inference result becomes available

    infer_request.wait();
    infer_request.wait()

    Both methods are thread-safe.
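
Putting the pieces together, one asynchronous iteration might look like the following sketch (assuming a single-input, single-output model; filling the input tensor with real data is omitted):

ov::Tensor input = infer_request.get_input_tensor();    // fill this tensor with input data
infer_request.start_async();                             // launch inference without blocking the host
infer_request.wait();                                    // block until the result is ready
ov::Tensor output = infer_request.get_output_tensor();   // read the inference result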

When you run several inference requests in parallel, a device can process them simultaneously, with no guarantees on the completion order. This may complicate logic based on ov::InferRequest::wait (unless your code needs to wait for all requests). For multi-request scenarios, consider using the ov::InferRequest::set_callback method to set a callback which is called upon completion of the request:

infer_request.set_callback([&](std::exception_ptr ex_ptr) {
    if (!ex_ptr) {
        // all done. Output data can be processed.
        // You can fill the input data and run inference one more time:
        infer_request.start_async();
    } else {
        // Something wrong, you can analyze exception_ptr
    }
});
def callback(request, userdata):
    # All done. Output data can be processed.
    # You can fill the input data and run inference one more time:
    request.start_async()

infer_request.set_callback(callback)

Note

Use a weak reference to the infer_request (ov::InferRequest*, ov::InferRequest&, std::weak_ptr<ov::InferRequest>, etc.) in the callback. It is necessary to avoid cyclic references.

For more details, check Classification Sample Async.

You can use the ov::InferRequest::cancel method if you want to abort execution of the current inference request:

infer_request.cancel();
infer_request.cancel()

Working with Input and Output tensors

ov::InferRequest allows you to get input/output tensors by tensor name, index, or port, or without any arguments if the model has only one input or output.
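
A minimal sketch of these options, where "input_name" is a placeholder for an actual tensor name in your model:

// Several equivalent ways to get an input tensor; "input_name" is a placeholder name.
ov::Tensor by_name  = infer_request.get_tensor("input_name");            // by tensor name
ov::Tensor by_index = infer_request.get_input_tensor(0);                 // by input index
ov::Tensor by_port  = infer_request.get_tensor(compiled_model.input(0)); // by port
ov::Tensor single   = infer_request.get_input_tensor();                  // no arguments; single-input model only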

Cascade of models

ov::InferRequest can be used to organize a cascade of models. You need a separate infer request for each model. In this case, you can get the output tensor from the first request using ov::InferRequest::get_tensor and set it as input for the second request using ov::InferRequest::set_tensor. Keep in mind that a tensor shared across compiled models can be rewritten by the first model if the first infer request is run again while the second model has not started yet.

auto output = infer_request1.get_output_tensor(0);
infer_request2.set_input_tensor(0, output);
output = infer_request1.get_output_tensor(0)
infer_request2.set_input_tensor(0, output)

Using ROI tensors

It is possible to re-use a shared input across several models. You do not need to allocate a separate input tensor for a model if it processes a ROI object located inside an already allocated input of a previous model. For instance, the first model may detect objects in a video frame (stored as an input tensor), and the second model may accept detected bounding boxes (a ROI inside the frame) as input. In this case, the second model can re-use the pre-allocated input tensor of the first model and just crop the ROI without allocating new memory, by constructing an ov::Tensor from the original ov::Tensor and ov::Coordinate parameters.

/** input_tensor points to input of a previous network and
    cropROI contains coordinates of output bounding box **/
ov::Tensor input_tensor(ov::element::f32, ov::Shape({1, 3, 20, 20}));
ov::Coordinate begin({0, 0, 0, 0});
ov::Coordinate end({1, 2, 3, 3});
//...

/** roi_tensor uses shared memory of input_tensor and describes cropROI
    according to its coordinates **/
ov::Tensor roi_tensor(input_tensor, begin, end);
infer_request2.set_tensor("input_name", roi_tensor);
# input_tensor points to input of a previous network and
# cropROI contains coordinates of output bounding box
input_tensor = ov.Tensor(type=ov.Type.f32, shape=ov.Shape([1, 3, 20, 20]))
begin = [0, 0, 0, 0]
end = [1, 2, 3, 3]
# ...

# roi_tensor uses shared memory of input_tensor and describes cropROI
# according to its coordinates
roi_tensor = ov.Tensor(input_tensor, begin, end)
infer_request2.set_tensor("input_name", roi_tensor)

Using remote tensors

You can create a remote tensor to work with remote device memory. ov::RemoteContext allows you to create a remote tensor.

ov::RemoteContext context = core.get_default_context("GPU");
auto input_port = compiled_model.input("tensor_name");
ov::RemoteTensor remote_tensor = context.create_tensor(input_port.get_element_type(), input_port.get_shape());
infer_request.set_tensor(input_port, remote_tensor);
# NOT SUPPORTED