High-level Performance Hints

Each of OpenVINO’s supported devices offers low-level performance settings. Tweaking this detailed configuration requires a deep understanding of the device architecture. Also, while the performance may be optimal for a specific combination of device and inferred model, the resulting configuration is not necessarily optimal for another device or model. The OpenVINO performance hints are a new way of configuring performance with portability in mind.

The hints also “reverse” the direction of the configuration in the right fashion: rather than mapping the application needs to the low-level performance settings and keeping the associated application logic to configure each possible device separately, the idea is to express a target scenario with a single config key and let the device configure itself in response. As the hints are supported by every OpenVINO device, this is a completely portable and future-proof solution that is fully compatible with automatic device selection.

Previously, a certain level of automatic configuration came from the default values of the parameters. For example, the number of CPU streams was deduced from the number of CPU cores when ov::streams::AUTO (CPU_THROUGHPUT_AUTO in the pre-OpenVINO 2.0 parlance) was set. However, the resulting number of streams did not account for the actual compute requirements of the model to be inferred. The hints, in contrast, respect the actual model, so the parameters for optimal throughput are calculated for each model individually (based on its compute versus memory bandwidth requirements and the capabilities of the device).

Performance Hints: Latency and Throughput

As discussed in the Optimization Guide, there are a few different metrics associated with inference speed. Throughput and latency are among the most widely used metrics that measure the overall performance of an application.

This is why, to ease the configuration of the device, OpenVINO offers two dedicated hints, namely ov::hint::PerformanceMode::THROUGHPUT and ov::hint::PerformanceMode::LATENCY. A special ov::hint::PerformanceMode::UNDEFINED acts the same as specifying no hint.

See also the last section of this document on conducting performance measurements with the benchmark_app.

Note that a typical model may take significantly more time to load with ov::hint::PerformanceMode::THROUGHPUT and consume much more memory, compared with ov::hint::PerformanceMode::LATENCY.

Performance Hints: How They Work

Internally, every device “translates” the value of the hint to the actual performance settings. For example, ov::hint::PerformanceMode::THROUGHPUT selects the number of CPU or GPU streams. For the GPU, the optimal batch size is additionally selected and automatic batching is applied whenever possible (provided the device supports it; refer to the devices/features support matrix).

The resulting (device-specific) settings can be queried back from the instance of the ov::CompiledModel.
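
For example, a minimal C++ sketch of such a query (using the standard ov::num_streams and ov::optimal_number_of_infer_requests properties; the core and model objects are the same placeholders as in the snippets below):

// Compile with the THROUGHPUT hint, then query the settings
// the device actually chose in response to it.
auto compiled_model = core.compile_model(model, "CPU",
    ov::hint::performance_mode(ov::hint::PerformanceMode::THROUGHPUT));
auto num_streams = compiled_model.get_property(ov::num_streams);
auto optimal_nireq = compiled_model.get_property(ov::optimal_number_of_infer_requests);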

Notice that the benchmark_app outputs the actual settings for the THROUGHPUT hint; see the bottom of the output example below:

$ benchmark_app -hint tput -d CPU -m 'path to your favorite model'
...
[Step 8/11] Setting optimal runtime parameters
[ INFO ] Device: CPU
[ INFO ]   { PERFORMANCE_HINT , THROUGHPUT }
...
[ INFO ]   { OPTIMAL_NUMBER_OF_INFER_REQUESTS , 4 }
[ INFO ]   { NUM_STREAMS , 4 }
...

Using the Performance Hints: Basic API

In the example code snippets below, ov::hint::PerformanceMode::THROUGHPUT is specified for the ov::hint::performance_mode property of compile_model, first in C++ and then in Python:

auto compiled_model = core.compile_model(model, "GPU",
    ov::hint::performance_mode(ov::hint::PerformanceMode::THROUGHPUT));
compiled_model = core.compile_model(model, "GPU", {"PERFORMANCE_HINT": "THROUGHPUT"})

Additional (Optional) Hints from the App

Let’s take the example of an application that processes 4 video streams. The most future-proof way to communicate the limitation of the parallel slack is to equip the performance hint with the optional ov::hint::num_requests configuration key set to 4. As discussed previously, this will limit the batch size for the GPU and the number of inference streams for the CPU, as each device uses ov::hint::num_requests while converting the hint to the actual device configuration options:

// C++: limiting the available parallel slack for the 'throughput' hint via ov::hint::num_requests,
// so that certain parameters (like the selected batch size) are automatically accommodated accordingly
auto compiled_model = core.compile_model(model, "GPU",
    ov::hint::performance_mode(ov::hint::PerformanceMode::THROUGHPUT),
    ov::hint::num_requests(4));

# Python: limiting the available parallel slack for the 'throughput' hint,
# so that certain parameters (like the selected batch size) are automatically accommodated accordingly
config = {"PERFORMANCE_HINT": "THROUGHPUT",
          "PERFORMANCE_HINT_NUM_REQUESTS": "4"}
compiled_model = core.compile_model(model, "GPU", config)

Optimal Number of Inference Requests

Using the hints assumes that the application queries ov::optimal_number_of_infer_requests to create and run the returned number of requests simultaneously:

// C++: when the batch size is automatically selected by the implementation,
// it is important to query, create, and run a sufficient number of requests
auto compiled_model = core.compile_model(model, "GPU",
    ov::hint::performance_mode(ov::hint::PerformanceMode::THROUGHPUT));
auto num_requests = compiled_model.get_property(ov::optimal_number_of_infer_requests);

# Python: when the batch size is automatically selected by the implementation,
# it is important to query, create, and run a sufficient number of requests
compiled_model = core.compile_model(model, "GPU", {"PERFORMANCE_HINT": "THROUGHPUT"})
num_requests = compiled_model.get_property("OPTIMAL_NUMBER_OF_INFER_REQUESTS")

While an application is free to create more requests if needed (for example, to support asynchronous inputs population), it is very important to run at least ov::optimal_number_of_infer_requests inference requests in parallel, for efficiency (device utilization) reasons.

Also, notice that ov::hint::PerformanceMode::LATENCY does not necessarily imply using a single inference request. For example, multi-socket CPUs can deliver as many requests (at the same minimal latency) as the machine has NUMA nodes. To make your application fully scalable, prefer to query ov::optimal_number_of_infer_requests directly.
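
As a rough sketch (C++, reusing the compiled_model from the snippet above), the application would create exactly that many requests:

// Create the number of requests the device reports as optimal for the chosen hint.
uint32_t nireq = compiled_model.get_property(ov::optimal_number_of_infer_requests);
std::vector<ov::InferRequest> requests;
for (uint32_t i = 0; i < nireq; ++i)
    requests.push_back(compiled_model.create_infer_request());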

Prefer Async API

The API of the inference requests offers Sync and Async execution. While ov::InferRequest::infer() is inherently synchronous and simple to operate (as it serializes the execution flow in the current application thread), the Async approach “splits” infer() into ov::InferRequest::start_async() and ov::InferRequest::wait() (or callbacks). Please refer to the API examples. Although the Synchronous API can be somewhat easier to start with, prefer the Asynchronous (callback-based) API in production code, as it is the most general and scalable way to implement flow control for any number of requests (and hence for both latency and throughput scenarios).
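
A sketch of the callback-based flow (C++, continuing with the requests vector created above; input population, result processing, and the stop condition are omitted):

// Each request re-submits itself from its completion callback, so all requests
// stay in flight without explicit application-side synchronization.
for (auto& request : requests) {
    request.set_callback([&request](std::exception_ptr ex) {
        if (!ex) {
            // ... process the results of the finished request, set new inputs ...
            request.start_async();  // re-submit (a real application would eventually stop)
        }
    });
    request.start_async();  // initial submission
}
// ... on shutdown, wait for the requests that are still running:
for (auto& request : requests)
    request.wait();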

Combining the Hints and Individual Low-Level Settings

While sacrificing the portability to some extent, it is possible to combine the hints with individual device-specific settings. For example, you can let the device prepare a configuration with ov::hint::PerformanceMode::THROUGHPUT while overriding any specific value:

// C++: high-level performance hints are compatible with low-level device-specific settings
auto compiled_model = core.compile_model(model, "CPU",
    ov::hint::performance_mode(ov::hint::PerformanceMode::THROUGHPUT),
    ov::inference_num_threads(4));

# Python: high-level performance hints are compatible with low-level device-specific settings
config = {"PERFORMANCE_HINT": "THROUGHPUT",
          "INFERENCE_NUM_THREADS": "4"}
compiled_model = core.compile_model(model, "CPU", config)

Testing the Performance of the Hints with the benchmark_app

The benchmark_app, which exists in both C++ and Python versions, is the best way to evaluate the performance of the performance hints for a particular device:

  • benchmark_app -hint tput -d 'device' -m 'path to your model'

  • benchmark_app -hint latency -d 'device' -m 'path to your model'

Disabling the hints emulates the pre-hints era (this is highly recommended before trying individual low-level settings, such as the number of streams as shown below, threads, etc.):

  • benchmark_app -hint none -nstreams 1 -d 'device' -m 'path to your model'
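
For reference, a rough C++ equivalent of that last command line (assuming the core and model objects from the earlier snippets) compiles the model without any hint and sets the low-level streams property explicitly:

// No performance hint; an explicit low-level setting (a single stream) is used instead.
auto compiled_model = core.compile_model(model, "CPU", ov::num_streams(1));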