Model Caching Overview

Introduction

As described in the Integrate OpenVINO™ with Your Application article, a common application flow consists of the following steps:

  1. Create a Core object: the first step, used to manage available devices and read model objects.

  2. Read the Intermediate Representation: read an Intermediate Representation file into an ov::Model object.

  3. Prepare inputs and outputs: if needed, manipulate precision, memory layout, size, or color format.

  4. Set configuration: pass device-specific loading configurations to the device.

  5. Compile and load the network to the device: use the ov::Core::compile_model() method with a specific device.

  6. Set input data: specify an input tensor.

  7. Execute: carry out inference and process results.

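The following is a minimal Python sketch of this flow. The model path ("model.xml"), device name ("CPU"), and the zero-filled input tensor are placeholder assumptions used only for illustration:

import numpy as np
from openvino.runtime import Core

core = Core()                                              # Step 1: create a Core object
model = core.read_model("model.xml")                       # Step 2: read the model
# Steps 3-4: adjust inputs/outputs and device configuration here if needed
compiled_model = core.compile_model(model, "CPU")          # Step 5: compile the model for a device
infer_request = compiled_model.create_infer_request()      # Step 6: prepare an inference request
input_data = np.zeros(list(compiled_model.input().shape), dtype=np.float32)  # placeholder input tensor
results = infer_request.infer({0: input_data})             # Step 7: run inference and process results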
Step 5 can potentially perform several time-consuming device-specific optimizations and network compilations, and such delays can lead to a poor user experience at application startup. To avoid this, some devices offer an import/export network capability, and it is possible to either use the Compile tool or enable model caching to export the compiled model automatically. Reusing a cached model can significantly reduce the model compile time.

Set “cache_dir” config option to enable model caching

To enable model caching, the application must specify a folder to store cached blobs, which is done like this:

ov::Core core;                                              // Step 1: create ov::Core object
core.set_property(ov::cache_dir("/path/to/cache/dir"));     // Step 1b: Enable caching
auto model = core.read_model(modelPath);                    // Step 2: Read Model
//...                                                       // Step 3: Prepare inputs/outputs
//...                                                       // Step 4: Set device configuration
auto compiled = core.compile_model(model, device, config);  // Step 5: LoadNetwork

from openvino.runtime import Core

core = Core()                                               # Step 1: create Core object
core.set_property({'CACHE_DIR': '/path/to/cache/dir'})      # Step 1b: Enable caching
model = core.read_model(model=xml_path)                     # Step 2: Read Model
compiled_model = core.compile_model(model=model, device_name=device_name)  # Step 5: Compile model

With this code, if the device specified by device_name supports the import/export model capability, a cached blob is automatically created inside the /path/to/cache/dir folder. If the device does not support the import/export capability, the cache is not created and no error is thrown.

Depending on your device, the total time for compiling a model at application startup can be significantly reduced. Also note that the very first compile_model call (when the cache is not yet created) takes slightly longer, since it also “exports” the compiled blob into a cache file:

[Figure: caching_enabled.png]
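As a rough illustration, on a device that supports import/export the first and second compile_model calls can be timed to observe the effect of the cache. This is a sketch only; the model path, device name, and cache folder are placeholder assumptions:

import time
from openvino.runtime import Core

core = Core()
core.set_property({'CACHE_DIR': '/path/to/cache/dir'})

start = time.perf_counter()
core.compile_model("model.xml", "CPU")   # first call: compiles the model and exports the blob to the cache
first_time = time.perf_counter() - start

start = time.perf_counter()
core.compile_model("model.xml", "CPU")   # second call: imports the cached blob instead of recompiling
second_time = time.perf_counter() - start

print(f"first compile: {first_time:.2f}s, cached compile: {second_time:.2f}s")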

Even faster: use compile_model(modelPath)

In some cases, applications do not need to customize inputs and outputs every time. Such applications always call model = core.read_model(...) followed by core.compile_model(model, ..), and this flow can be further optimized. For these cases, there is a more convenient API that compiles the model in a single call, skipping the read step:

ov::Core core;                                                  // Step 1: create ov::Core object
auto compiled = core.compile_model(modelPath, device, config);  // Step 2: Compile model by file path

from openvino.runtime import Core

core = Core()                                               # Step 1: create Core object
compiled_model = core.compile_model(model_path=xml_path, device_name=device_name)  # Step 2: Compile model by file path

With model caching enabled, the total load time is even shorter, as the read_model step is optimized as well:

ov::Core core;                                                  // Step 1: create ov::Core object
core.set_property(ov::cache_dir("/path/to/cache/dir"));         // Step 1b: Enable caching
auto compiled = core.compile_model(modelPath, device, config);  // Step 2: Compile model by file path

from openvino.runtime import Core

core = Core()                                               # Step 1: create Core object
core.set_property({'CACHE_DIR': '/path/to/cache/dir'})      # Step 1b: Enable caching
compiled_model = core.compile_model(model_path=xml_path, device_name=device_name)  # Step 2: Compile model by file path

[Figure: caching_times.png]

Advanced Examples

Not every device supports the network import/export capability, and for those that do not, enabling caching has no effect. To check in advance whether a particular device supports model caching, your application can use the following code:

// Get list of supported device capabilities
std::vector<std::string> caps = core.get_property(deviceName, ov::device::capabilities);

// Find 'EXPORT_IMPORT' capability in supported capabilities
bool cachingSupported = std::find(caps.begin(), caps.end(), ov::device::capability::EXPORT_IMPORT) != caps.end();

# Find 'EXPORT_IMPORT' capability in supported capabilities
caching_supported = 'EXPORT_IMPORT' in core.get_property(device_name, 'OPTIMIZATION_CAPABILITIES')

Note

The GPU plugin does not have the EXPORT_IMPORT capability, and does not support model caching yet. However, the GPU plugin supports caching kernels (see the GPU plugin documentation). Kernel caching for the GPU plugin is enabled the same way as model caching: by setting the CACHE_DIR configuration key to a folder where the cache should be stored.
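
For example, a minimal sketch of enabling kernel caching for the GPU plugin; the cache folder and model path below are placeholder assumptions:

from openvino.runtime import Core

core = Core()
core.set_property({'CACHE_DIR': '/path/to/kernel/cache/dir'})   # compiled GPU kernels are stored in this folder
model = core.read_model("model.xml")
compiled_model = core.compile_model(model, "GPU")               # subsequent runs reuse the cached kernels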