OpenVINO™ Inference Request

OpenVINO™ Runtime uses the Infer Request mechanism, which allows running models on different devices in asynchronous or synchronous mode. The ov::InferRequest class is used for this purpose inside the OpenVINO™ Runtime. This class allows you to set and get data for model inputs and outputs and to run inference for the model.

Creating Infer Request

The ov::InferRequest can be created from the ov::CompiledModel:

infer_request = compiled_model.create_infer_request()
auto infer_request = compiled_model.create_infer_request();
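
The compiled model itself is typically obtained from ov::Core. Below is a minimal sketch of the full setup, assuming a hypothetical model file model.xml and the CPU device:

ov::Core core;
auto model = core.read_model("model.xml");                    // "model.xml" is an assumed path
auto compiled_model = core.compile_model(model, "CPU");       // compile for the chosen device
auto infer_request = compiled_model.create_infer_request();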

Run Inference

The ov::InferRequest supports synchronous and asynchronous modes for inference.

Synchronous Mode

You can use ov::InferRequest::infer, which blocks the application execution, to infer a model in the synchronous mode:

infer_request.infer()
infer_request.infer();
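
For example, a minimal synchronous flow could look as follows (a hedged sketch; the tensor accessors are described in the sections below, and a single-input, single-output f32 model is assumed):

auto input_tensor = infer_request.get_input_tensor();    // assumes a single input
// ... fill input_tensor with data, e.g. via input_tensor.data<float>() ...
infer_request.infer();                                    // blocks until inference completes
auto output_tensor = infer_request.get_output_tensor();  // assumes a single output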

Asynchronous Mode

The asynchronous mode can improve an application's overall frame rate: rather than waiting for inference to complete, the application keeps working on the host while the accelerator is busy. To infer a model in the asynchronous mode, use ov::InferRequest::start_async:

infer_request.start_async()
infer_request.start_async();

The asynchronous mode supports two ways for the application to wait for inference results:

  • ov::InferRequest::wait_for - specifies the maximum duration in milliseconds to block the method for. The method is blocked until the specified time has passed or the result becomes available, whichever comes first.

    infer_request.wait_for(10)
    
    infer_request.wait_for(std::chrono::milliseconds(10));
    
  • ov::InferRequest::wait - waits until the inference result becomes available.

    infer_request.wait()
    
    infer_request.wait();
    

Both methods are thread-safe.
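
Putting it together, a hedged sketch of a typical asynchronous flow:

infer_request.start_async();                 // returns immediately
// ... do other work on the host while the device is busy ...
infer_request.wait();                        // block until the result is available
// or: infer_request.wait_for(std::chrono::milliseconds(10)); to wait with a timeout
auto output_tensor = infer_request.get_output_tensor();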

When you are running several inference requests in parallel, a device can process them simultaneously, with no guarantees on the completion order. This may complicate logic based on ov::InferRequest::wait (unless your code needs to wait for all of the requests). For multi-request scenarios, consider using the ov::InferRequest::set_callback method to set a callback which is called upon completion of the request:

def callback(request, _):
    # All done. The output data can be processed here.
    # You can fill the input data and run inference one more time:
    request.start_async()

callbacks_info = {}
callbacks_info["finished"] = 0
# Register the callback and pass callbacks_info as userdata:
infer_request.set_callback(callback, callbacks_info)
infer_request.set_callback([&](std::exception_ptr ex_ptr) { 
    if (!ex_ptr) {
        // all done. Output data can be processed.
        // You can fill the input data and run inference one more time:
        infer_request.start_async();
    } else {
        // Something went wrong; you can analyze ex_ptr here
    }
});

Note

Use a weak reference to infer_request (ov::InferRequest*, ov::InferRequest&, std::weak_ptr<ov::InferRequest>, etc.) in the callback. This is necessary to avoid a cyclic reference between the request and its callback.
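
For example, a hedged sketch of capturing the request by pointer rather than by value (it is assumed that infer_request outlives the callback):

ov::InferRequest* request_ptr = &infer_request;   // weak reference: raw pointer, not a copy
infer_request.set_callback([request_ptr](std::exception_ptr ex_ptr) {
    if (!ex_ptr) {
        request_ptr->start_async();
    }
});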

For more details, see the Classification Async Sample.

You can use the ov::InferRequest::cancel method if you want to abort execution of the current inference request:

infer_request.cancel()
infer_request.cancel();

Working with Input and Output Tensors

ov::InferRequest allows you to get and set input/output tensors by tensor name, by index, by port, or without any arguments if the model has only one input or output. A short sketch of accessing tensor data follows the list below.

  • ov::InferRequest::get_input_tensor, ov::InferRequest::set_input_tensor, ov::InferRequest::get_output_tensor, ov::InferRequest::set_output_tensor methods without an index argument can be used to get or set an input/output tensor for a model with only one input/output:

    input_tensor = infer_request.get_input_tensor()
    output_tensor = infer_request.get_output_tensor()
    
    auto input_tensor = infer_request.get_input_tensor();
    auto output_tensor = infer_request.get_output_tensor();
    
  • ov::InferRequest::get_input_tensor, ov::InferRequest::set_input_tensor, ov::InferRequest::get_output_tensor, ov::InferRequest::set_output_tensor methods with an index argument can be used to get or set an input/output tensor by the input/output index:

    input_tensor = infer_request.get_input_tensor(0)
    output_tensor = infer_request.get_output_tensor(0)
    
    auto input_tensor = infer_request.get_input_tensor(0);
    auto output_tensor = infer_request.get_output_tensor(1);
    
  • ov::InferRequest::get_tensor, ov::InferRequest::set_tensor methods can be used to get or set an input/output tensor by tensor name:

    tensor1 = infer_request.get_tensor("result")
    tensor2 = ov.Tensor(ov.Type.f32, [1, 3, 32, 32])
    infer_request.set_tensor(input_tensor_name, tensor2)
    
    auto tensor1 = infer_request.get_tensor("tensor_name1");
    ov::Tensor tensor2;
    infer_request.set_tensor("tensor_name2", tensor2);
    
  • ov::InferRequest::get_tensor, ov::InferRequest::set_tensor methods can be used to get or set an input/output tensor by port:

    input_port = model.input(0)
    output_port = model.output(0)
    input_tensor = ov.Tensor(ov.Type.f32, [1, 3, 32, 32])
    infer_request.set_tensor(input_port, input_tensor)
    output_tensor = infer_request.get_tensor(output_port)
    
    auto input_port = model->input(0);
    auto output_port = model->output("tensor_name");
    ov::Tensor input_tensor;
    infer_request.set_tensor(input_port, input_tensor);
    auto output_tensor = infer_request.get_tensor(output_port);
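
Once a tensor is obtained, its data can be read or written through the raw buffer. A hedged sketch, assuming a single-input, single-output model with f32 tensors:

auto input_tensor = infer_request.get_input_tensor();
float* input_data = input_tensor.data<float>();          // assumes an f32 input
for (size_t i = 0; i < input_tensor.get_size(); ++i)
    input_data[i] = 0.0f;                                // fill with real data instead
infer_request.infer();
auto output_tensor = infer_request.get_output_tensor();
const float* output_data = output_tensor.data<float>();  // assumes an f32 output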
    

Examples of Infer Request Usages

Presented below are examples of what the Infer Request can be used for.

Cascade of Models

ov::InferRequest can be used to organize a cascade of models. A separate infer request is required for each model. In this case, you can get the output tensor from the first request using ov::InferRequest::get_tensor and set it as input for the second request using ov::InferRequest::set_tensor. Keep in mind that a tensor shared across compiled models can be overwritten by the first model if the first infer request is run again before the second model has started.

output = infer_request1.get_output_tensor(0)
infer_request2.set_input_tensor(0, output)
auto output = infer_request1.get_output_tensor(0);
infer_request2.set_input_tensor(0, output);
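
A hedged sketch of the ordering this implies (using the requests from the snippet above):

infer_request1.infer();                        // run the first model to completion
auto output = infer_request1.get_output_tensor(0);
infer_request2.set_input_tensor(0, output);    // share the tensor with the second model
infer_request2.infer();                        // run the second model before restarting the first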

Using ROI Tensors

It is possible to re-use a shared input across several models. You do not need to allocate a separate input tensor for a model if it processes an ROI object located inside an already allocated input of a previous model. For instance, the first model may detect objects in a video frame (stored as an input tensor) and the second model may accept the detected bounding boxes (ROIs inside the frame) as input. In this case, the second model can re-use the pre-allocated input tensor of the first model and just crop the ROI without allocating new memory, using the ov::Tensor constructor that takes an existing ov::Tensor and two ov::Coordinate objects (the ROI boundaries) as parameters.

# input_tensor points to the input of a previous network and
# cropROI contains coordinates of the output bounding box
input_tensor = ov.Tensor(type=ov.Type.f32, shape=ov.Shape([1, 3, 100, 100]))
begin = [0, 0, 0, 0]
end = [1, 3, 32, 32]
# ...

/** input_tensor points to input of a previous network and
    cropROI contains coordinates of output bounding box **/
ov::Tensor input_tensor(ov::element::f32, ov::Shape({1, 3, 20, 20}));
ov::Coordinate begin({0, 0, 0, 0});
ov::Coordinate end({1, 2, 3, 3});
//...
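
The ROI tensor itself is then built from the pre-allocated tensor and the two coordinates. A hedged sketch (tensor and coordinates from the C++ snippet above; infer_request2 is an assumed request of the second model):

ov::Tensor roi_tensor(input_tensor, begin, end);   // shares memory with input_tensor, no copy
infer_request2.set_input_tensor(0, roi_tensor);    // infer_request2 is assumed, not defined above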

Using Remote Tensors

By using ov::RemoteContext you can create a remote tensor to work with remote device memory.

# Not supported in the Python API
ov::RemoteContext context = core.get_default_context("GPU");
auto input_port = compiled_model.input("tensor_name");
ov::RemoteTensor remote_tensor = context.create_tensor(input_port.get_element_type(), input_port.get_shape());
infer_request.set_tensor(input_port, remote_tensor);