Integrate OpenVINO™ with Your Application

Following these steps, you can implement a typical OpenVINO™ Runtime inference pipeline in your application. Before proceeding, make sure you have installed OpenVINO Runtime and set the environment variables (run <INSTALL_DIR>/setupvars.sh on Linux or setupvars.bat on Windows); otherwise, the OpenVINO_DIR variable will not be configured properly and find_package calls will fail.

[Image: OpenVINO™ Runtime inference pipeline (IMPLEMENT_PIPELINE_with_API_C.svg)]

Step 1. Create OpenVINO Runtime Core

Import the module (Python) or include the header file (C++/C) to work with OpenVINO™ Runtime:

Python:
import openvino as ov

C++:
#include <openvino/openvino.hpp>

C:
#include <openvino/c/openvino.h>

Use the following code to create an OpenVINO™ Core object, which manages the available devices and reads model objects:

Python:
core = ov.Core()

C++:
ov::Core core;

C:
ov_core_t* core = NULL;
ov_core_create(&core);
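
In addition to compiling models, the Core object can list the inference devices visible on the machine and read a model from disk without compiling it. A minimal Python sketch (the model path is illustrative):

# List the devices OpenVINO Runtime can use on this machine, e.g. ['CPU', 'GPU']
print(core.available_devices)

# Read a model from disk without compiling it yet (the file name is an example)
model = core.read_model("model.xml")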

Step 2. Compile the Model

The ov::CompiledModel class represents a model compiled for a specific device. ov::CompiledModel allows you to get information about input and output ports by a tensor name or index (see the short sketch at the end of this step). This approach is aligned with the majority of frameworks.

Compile the model for a specific device using ov::Core::compile_model():

Python:
compiled_model = core.compile_model("model.xml", "AUTO")
compiled_model = core.compile_model("model.onnx", "AUTO")
compiled_model = core.compile_model("model.pdmodel", "AUTO")
compiled_model = core.compile_model("model.pb", "AUTO")
compiled_model = core.compile_model("model.tflite", "AUTO")
def create_model():
    # This example shows how to create an ov.Model
    #
    # To construct a model, please follow 
    # https://docs.openvino.ai/latest/openvino_docs_OV_UG_Model_Representation.html
    data = ov.opset8.parameter([3, 1, 2], ov.Type.f32)
    res = ov.opset8.result(data)
    return ov.Model([res], [data], "model")

model = create_model()
compiled_model = core.compile_model(model, "AUTO")

C++:
ov::CompiledModel compiled_model = core.compile_model("model.xml", "AUTO");
ov::CompiledModel compiled_model = core.compile_model("model.onnx", "AUTO");
ov::CompiledModel compiled_model = core.compile_model("model.pdmodel", "AUTO");
ov::CompiledModel compiled_model = core.compile_model("model.pb", "AUTO");
ov::CompiledModel compiled_model = core.compile_model("model.tflite", "AUTO");
auto create_model = []() {
    std::shared_ptr<ov::Model> model;
    // To construct a model, please follow 
    // https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Model_Representation.html
    return model;
};
std::shared_ptr<ov::Model> model = create_model();
compiled_model = core.compile_model(model, "AUTO");

C:
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "model.xml", "AUTO", 0, &compiled_model);
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "model.onnx", "AUTO", 0, &compiled_model);
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "model.pdmodel", "AUTO", 0, &compiled_model);
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "model.pb", "AUTO", 0, &compiled_model);
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "model.tflite", "AUTO", 0, &compiled_model);
// Construct a model
ov_model_t* model = NULL;
ov_core_read_model(core, "model.xml", NULL, &model);
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model(core, model, "AUTO", 0, &compiled_model);

The ov::Model object represents any model inside OpenVINO™ Runtime. For more details, read the article about OpenVINO™ Model Representation.

The code above creates a compiled model associated with a single hardware device from the model object. You can create as many compiled models as needed and use them simultaneously (up to the limits of the hardware resources). To learn how to change the device configuration, read the Query device properties article.
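
As noted above, the compiled model exposes its input and output ports, and several compiled models can coexist. A short Python sketch, assuming the model and compiled_model objects created earlier (the extra device name is only an example and depends on what is installed):

# Inspect the ports of the compiled model (access by index; access by tensor name is also possible)
input_port = compiled_model.input(0)
output_port = compiled_model.output(0)
print(list(input_port.get_shape()), input_port.get_element_type())

# The same model can be compiled for several devices and used side by side
cpu_compiled = core.compile_model(model, "CPU")
auto_compiled = core.compile_model(model, "AUTO")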

Step 3. Create an Inference Request

The ov::InferRequest class provides methods for model inference in OpenVINO™ Runtime. Create an infer request using the following code (see the InferRequest documentation for more details):

Python:
infer_request = compiled_model.create_infer_request()

C++:
ov::InferRequest infer_request = compiled_model.create_infer_request();

C:
ov_infer_request_t* infer_request = NULL;
ov_compiled_model_create_infer_request(compiled_model, &infer_request);

Step 4. Set Inputs

You can use external memory to create an ov::Tensor and use the ov::InferRequest::set_input_tensor method to put this tensor on the device:

Python:
# Create tensor from external memory
input_tensor = ov.Tensor(array=memory, shared_memory=True)
# Set input tensor for model with one input
infer_request.set_input_tensor(input_tensor)

C++:
// Get input port for model with one input
auto input_port = compiled_model.input();
// Create tensor from external memory
ov::Tensor input_tensor(input_port.get_element_type(), input_port.get_shape(), memory_ptr);
// Set input tensor for model with one input
infer_request.set_input_tensor(input_tensor);

C:
// Get input port for model with one input
ov_output_const_port_t* input_port = NULL;
ov_model_const_input(model, &input_port);
// Get the input shape from input port
ov_shape_t input_shape;
ov_const_port_get_shape(input_port, &input_shape);
// Get the element type of the input
ov_element_type_e input_type;
ov_port_get_element_type(input_port, &input_type);
// Create tensor from external memory
ov_tensor_t* tensor = NULL;
ov_tensor_create_from_host_ptr(input_type, input_shape, memory_ptr, &tensor);
// Set input tensor for model with one input
ov_infer_request_set_input_tensor(infer_request, tensor);
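
For reference, here is a self-contained Python sketch of the same step, assuming a single static-shaped input and a NumPy array as the application-owned host buffer (float32 is an assumption; in real code use the element type reported by the port):

import numpy as np

# Query the single input port and allocate matching host memory
input_port = compiled_model.input(0)
memory = np.zeros(list(input_port.get_shape()), dtype=np.float32)

# Wrap the NumPy array without copying and hand it to the infer request
input_tensor = ov.Tensor(array=memory, shared_memory=True)
infer_request.set_input_tensor(input_tensor)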

See additional materials to learn how to handle textual data as a model input.

Step 5. Start Inference

OpenVINO™ Runtime supports inference in either synchronous or asynchronous mode. Using the Async API can improve the application’s overall frame rate: instead of waiting for inference to complete, the app can keep working on the host while the accelerator is busy. You can use ov::InferRequest::start_async to start model inference in asynchronous mode and call ov::InferRequest::wait to wait for the inference results:

Python:
infer_request.start_async()
infer_request.wait()

C++:
infer_request.start_async();
infer_request.wait();

C:
ov_infer_request_start_async(infer_request);
ov_infer_request_wait(infer_request);

This section demonstrates a simple pipeline. To get more information about other ways to perform inference, read the dedicated “Run inference” section.
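
For completeness, the synchronous mode mentioned above can be sketched in Python as a single blocking call (using the infer request created in Step 3):

# Synchronous inference: blocks the calling thread until the results are ready
infer_request.infer()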

Step 6. Process the Inference Results

Go over the output tensors and process the inference results.

Python:
# Get output tensor for model with one output
output = infer_request.get_output_tensor()
output_buffer = output.data
# output_buffer[] - accessing output tensor data

C++:
// Get output tensor by tensor name
auto output = infer_request.get_tensor("tensor_name");
const float *output_buffer = output.data<const float>();
// output_buffer[] - accessing output tensor data

C:
// Get output tensor by tensor index
ov_tensor_t* output_tensor = NULL;
ov_infer_request_get_output_tensor_by_index(infer_request, 0, &output_tensor);
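
As an illustration only, if the model happened to be a classifier (an assumption, not implied by the pipeline above), the Python output buffer could be post-processed like this:

import numpy as np

# output.data is a NumPy view over the output tensor
probabilities = output.data[0]  # first element of the batch (assumed layout)
top_class = int(np.argmax(probabilities))
print("Predicted class index:", top_class)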

See additional materials to learn how to handle textual data as a model output.

Step 7. Release the allocated objects (only for C)

To avoid memory leaks, applications developed with the C API need to release the allocated objects, in this order:

ov_shape_free(&input_shape);
ov_tensor_free(output_tensor);
ov_output_const_port_free(input_port);
ov_tensor_free(tensor);
ov_infer_request_free(infer_request);
ov_compiled_model_free(compiled_model);
ov_model_free(model);
ov_core_free(core);

Additional Resources