OpenVINO™ Inference Request

OpenVINO™ Runtime uses the Infer Request mechanism, which allows running models on different devices in asynchronous or synchronous mode. The ov::InferRequest class is used for this purpose inside OpenVINO™ Runtime. It allows you to set and get data for model inputs and outputs and to run inference for the model.
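
A minimal sketch of the typical flow might look like this (the "model.xml" path, the "CPU" device, the float element type, and the single-input/single-output model are illustrative assumptions, not part of the original example):

#include <openvino/openvino.hpp>

ov::Core core;
// Compile a model for a device (path and device name are illustrative).
ov::CompiledModel compiled_model = core.compile_model("model.xml", "CPU");
ov::InferRequest infer_request = compiled_model.create_infer_request();

// A single-input, single-output model is assumed here.
ov::Tensor input = infer_request.get_input_tensor();
// ... fill input.data<float>() with the input data ...

infer_request.infer();  // run inference synchronously

ov::Tensor output = infer_request.get_output_tensor();
// ... read the results from output.data<float>() ...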

Creating Infer Request

An ov::InferRequest can be created from an ov::CompiledModel:

C++:
auto infer_request = compiled_model.create_infer_request();

Python:
infer_request = compiled_model.create_infer_request()

Run Inference

The ov::InferRequest supports synchronous and asynchronous modes for inference.

Synchronous Mode

You can use ov::InferRequest::infer, which blocks application execution, to infer a model in synchronous mode:

C++:
infer_request.infer();

Python:
infer_request.infer()

Asynchronous Mode

The asynchronous mode can improve an application's overall frame rate: rather than waiting for inference to complete, the application can keep working on the host while the accelerator is busy. To infer a model in asynchronous mode, use ov::InferRequest::start_async:

C++:
infer_request.start_async();

Python:
infer_request.start_async()

The asynchronous mode supports two ways for the application to wait for inference results:

  • ov::InferRequest::wait_for - specifies the maximum duration, in milliseconds, to block for. The method blocks until the specified time has passed or the result becomes available, whichever comes first.

    C++:
    infer_request.wait_for(std::chrono::milliseconds(10));

    Python:
    infer_request.wait_for(10)
  • ov::InferRequest::wait - waits until the inference result becomes available

    C++:
    infer_request.wait();

    Python:
    infer_request.wait()

    Both methods are thread-safe.
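
Putting start_async and wait together, the overlap described above might look like the following sketch; do_other_host_work() is a hypothetical application function, not part of the OpenVINO API:

infer_request.start_async();   // inference runs on the device
do_other_host_work();          // hypothetical: e.g. capture and preprocess the next frame
infer_request.wait();          // block only when the result is actually needed
// ... process the output tensor ...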

When you run several inference requests in parallel, a device can process them simultaneously, with no guarantees on the completion order. This may complicate logic based on ov::InferRequest::wait (unless your code needs to wait for all the requests anyway). For multi-request scenarios, consider using the ov::InferRequest::set_callback method to set a callback that is called upon completion of a request:

C++:
infer_request.set_callback([&](std::exception_ptr ex_ptr) {
    if (!ex_ptr) {
        // All done. The output data can be processed.
        // You can fill the input data and run inference one more time:
        infer_request.start_async();
    } else {
        // Something went wrong; you can analyze ex_ptr
    }
});

Python:
def callback(request, userdata):
    request.start_async()

infer_request.set_callback(callback)

Note

Use a weak reference to infer_request (ov::InferRequest*, ov::InferRequest&, std::weak_ptr<ov::InferRequest>, etc.) in the callback. This is necessary to avoid cyclic references.

For more details, see the Classification Async Sample.
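
As a sketch of the multi-request scenario described above, the following example creates several requests and counts completions in the callbacks; the request count of 4 and the std::atomic counter are assumptions made for illustration, and compiled_model is the same object used earlier:

std::vector<ov::InferRequest> requests;
std::atomic<size_t> finished{0};

for (size_t i = 0; i < 4; ++i) {
    requests.push_back(compiled_model.create_infer_request());
}

for (auto& request : requests) {
    // Capture only the counter, not the request itself, to avoid cyclic references.
    request.set_callback([&finished](std::exception_ptr ex_ptr) {
        if (!ex_ptr) {
            ++finished;  // the output of this particular request is ready
        }
    });
    request.start_async();
}

for (auto& request : requests) {
    request.wait();  // ensure every request has completed before reading the outputs
}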

You can use the ov::InferRequest::cancel method if you want to abort execution of the current inference request:

C++:
infer_request.cancel();

Python:
infer_request.cancel()

Working with Input and Output Tensors

ov::InferRequest allows you to get input/output tensors by tensor name, by index, by port, or, if the model has only one input or output, without any arguments.
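
For example, a sketch of the different ways to access tensors (the tensor name "input_name" and the single-output assumption are illustrative):

ov::Tensor by_name  = infer_request.get_tensor("input_name");            // by tensor name
ov::Tensor by_index = infer_request.get_input_tensor(0);                 // by input index
ov::Tensor by_port  = infer_request.get_tensor(compiled_model.input());  // by port
ov::Tensor out      = infer_request.get_output_tensor();                 // no arguments: the model has a single output

// Setting tensors works in the same way:
infer_request.set_tensor("input_name", by_name);
infer_request.set_input_tensor(0, by_index);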

Examples of Infer Request Usage

Presented below are examples of what the Infer Request can be used for.

Cascade of Models

ov::InferRequest can be used to organize a cascade of models. A separate infer request is required for each model. In this case, you can get the output tensor from the first request with ov::InferRequest::get_tensor and set it as the input for the second request with ov::InferRequest::set_tensor. Keep in mind that a tensor shared across compiled models can be overwritten by the first model if the first infer request is run again before the second one has started.

C++:
auto output = infer_request1.get_output_tensor(0);
infer_request2.set_input_tensor(0, output);

Python:
output = infer_request1.get_output_tensor(0)
infer_request2.set_input_tensor(0, output)
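
To illustrate the caveat above, one safe ordering is sketched below: infer_request2 must finish (or the shared data must be copied) before infer_request1 runs again, otherwise the shared tensor may be overwritten:

infer_request1.infer();                                                   // the first model writes the shared tensor
infer_request2.set_input_tensor(0, infer_request1.get_output_tensor(0));
infer_request2.infer();                                                   // the second model consumes it
infer_request1.infer();                                                   // only now is it safe to run the first model again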

Using ROI Tensors

It is possible to re-use a shared input across several models. You do not need to allocate a separate input tensor for a model if it processes a ROI located inside an already allocated input of a previous model. For instance, the first model may detect objects in a video frame (stored as an input tensor), while the second model accepts the detected bounding boxes (ROIs inside the frame) as input. In this case, the second model can re-use the pre-allocated input tensor of the first model and just crop the ROI without allocating new memory, by constructing an ov::Tensor from that tensor and a pair of ov::Coordinate objects.

C++:
/** input_tensor points to input of a previous network and
    cropROI contains coordinates of output bounding box **/
ov::Tensor input_tensor(ov::element::f32, ov::Shape({1, 3, 20, 20}));
ov::Coordinate begin({0, 0, 0, 0});
ov::Coordinate end({1, 2, 3, 3});
//...

/** roi_tensor uses shared memory of input_tensor and describes cropROI
    according to its coordinates **/
ov::Tensor roi_tensor(input_tensor, begin, end);
infer_request2.set_tensor("input_name", roi_tensor);

Python:
# input_tensor points to input of a previous network and
# cropROI contains coordinates of output bounding box
input_tensor = ov.Tensor(type=ov.Type.f32, shape=ov.Shape([1, 3, 20, 20]))
begin = [0, 0, 0, 0]
end = [1, 2, 3, 3]
# ...

# roi_tensor uses shared memory of input_tensor and describes cropROI
# according to its coordinates
roi_tensor = ov.Tensor(input_tensor, begin, end)
infer_request2.set_tensor("input_name", roi_tensor)

Using Remote Tensors

By using ov::RemoteContext you can create a remote tensor to work with remote device memory.

C++:
ov::RemoteContext context = core.get_default_context("GPU");
auto input_port = compiled_model.input("tensor_name");
ov::RemoteTensor remote_tensor = context.create_tensor(input_port.get_element_type(), input_port.get_shape());
infer_request.set_tensor(input_port, remote_tensor);

Python:
# NOT SUPPORTED