Model Caching Overview#
As described in Integrate OpenVINO™ with Your Application, a common application flow consists of the following steps:
- Create a Core object: First step to manage available devices and read model objects
- Read the Intermediate Representation: Read an Intermediate Representation file into an object of the ov::Model class
- Prepare inputs and outputs: If needed, manipulate precision, memory layout, size, or color format
- Set configuration: Pass device-specific loading configurations to the device
- Compile and Load Network to device: Use the ov::Core::compile_model() method with a specific device
- Set input data: Specify an input tensor
- Execute: Carry out inference and process results
Step 5 can potentially perform several time-consuming device-specific optimizations and network compilations. To reduce the resulting delays at application startup, you can use Model Caching. It exports the compiled model automatically and reuses it to significantly reduce the model compilation time.
Important
Not all devices support the network import/export feature. Devices that do not will still work normally, but will not benefit from the compilation-stage speed-up.
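As a rough illustration of the speed-up, you can time two consecutive compile_model() calls against the same cache directory. The snippet below is a minimal Python sketch; it assumes model_path, device_name and path_to_cache_dir are defined as in the examples later in this article.
import time
import openvino as ov
import openvino.properties as props
core = ov.Core()
core.set_property({props.cache_dir: path_to_cache_dir})  # enable caching
# First compilation: the device compiles the model and writes the cached blob.
start = time.perf_counter()
core.compile_model(model=model_path, device_name=device_name)
first_time = time.perf_counter() - start
# Second compilation: loaded from the cache if the device supports import/export.
start = time.perf_counter()
core.compile_model(model=model_path, device_name=device_name)
second_time = time.perf_counter() - start
print(f"first compile: {first_time:.3f} s, cached compile: {second_time:.3f} s")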
Set “cache_dir” config option to enable model caching#
To enable model caching, the application must specify a folder to store the cached blobs:
from utils import get_path_to_model, get_temp_dir
import openvino as ov
import openvino.properties as props
# For example: "CPU", "GPU", "NPU".
device_name = 'CPU'
model_path = get_path_to_model()
path_to_cache_dir = get_temp_dir()
core = ov.Core()  # Step 1: create ov.Core object
core.set_property({props.cache_dir: path_to_cache_dir})  # Step 1b: enable caching
model = core.read_model(model=model_path)  # Step 2: read model
compiled_model = core.compile_model(model=model, device_name=device_name)  # Step 5: compile model for the device
void part0() {
std::string modelPath = "/tmp/myModel.xml";
std::string device = "GPU"; // For example: "CPU", "GPU", "NPU".
ov::AnyMap config;
ov::Core core; // Step 1: create ov::Core object
core.set_property(ov::cache_dir("/path/to/cache/dir")); // Step 1b: Enable caching
auto model = core.read_model(modelPath); // Step 2: Read Model
//... // Step 3: Prepare inputs/outputs
//... // Step 4: Set device configuration
auto compiled = core.compile_model(model, device, config); // Step 5: LoadNetwork
}
With this code, if the device specified by device_name supports the import/export model capability, a cached blob (a .cl_cache file for GPU or a .blob file for CPU) is automatically created inside the /path/to/cache/dir folder. If the device does not support the import/export capability, the cache is not created and no error is thrown.
Note that the first compile_model operation takes slightly longer, as the cache needs to be created: the compiled blob is saved into a cache file.
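As a quick sanity check, you can inspect the cache folder after the first compilation. This is just a sketch; it assumes the path_to_cache_dir variable from the Python example above.
import os
# After the first compile_model() call, the cache directory should contain
# the device-specific blob files (only if the device supports import/export).
print(os.listdir(path_to_cache_dir))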
Make it even faster: use compile_model(modelPath)#
In some cases, applications do not need to customize inputs and outputs every time. Such applications always call model = core.read_model(...) followed by core.compile_model(model, ..), and this can be further optimized. For these cases, there is a more convenient API that compiles the model in a single call, skipping the read step:
core = ov.Core()
compiled_model = core.compile_model(model=model_path, device_name=device_name)
ov::Core core; // Step 1: create ov::Core object
auto compiled = core.compile_model(modelPath, device, config); // Step 2: Compile model by file path
With model caching enabled, the total load time is even shorter, as the read_model step is optimized as well:
core = ov.Core()
core.set_property({props.cache_dir: path_to_cache_dir})
compiled_model = core.compile_model(model=model_path, device_name=device_name)
ov::Core core; // Step 1: create ov::Core object
core.set_property(ov::cache_dir("/path/to/cache/dir")); // Step 1b: Enable caching
auto compiled = core.compile_model(modelPath, device, config); // Step 2: Compile model by file path
Advanced Examples#
Not every device supports the network import/export capability. For those that don’t, enabling caching has no effect. To check in advance if a particular device supports model caching, your application can use the following code:
import openvino.properties.device as device
# Find 'EXPORT_IMPORT' capability in supported capabilities
caching_supported = 'EXPORT_IMPORT' in core.get_property(device_name, device.capabilities)
// Get list of supported device capabilities
std::vector<std::string> caps = core.get_property(deviceName, ov::device::capabilities);
// Find 'EXPORT_IMPORT' capability in supported capabilities
bool cachingSupported = std::find(caps.begin(), caps.end(), ov::device::capability::EXPORT_IMPORT) != caps.end();
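Putting the two together, an application can enable caching only when the device reports this capability. The following is a minimal Python sketch that reuses device_name, model_path and path_to_cache_dir from the earlier examples; the check is optional, since enabling caching on an unsupported device is silently ignored.
import openvino as ov
import openvino.properties as props
import openvino.properties.device as device
core = ov.Core()
# Enable caching only if the device advertises the EXPORT_IMPORT capability.
if 'EXPORT_IMPORT' in core.get_property(device_name, device.capabilities):
    core.set_property({props.cache_dir: path_to_cache_dir})
compiled_model = core.compile_model(model=model_path, device_name=device_name)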
Set “cache_encryption_callbacks” config option to enable cache encryption#
If model caching is enabled, the model topology can be encrypted when saving to the cache and decrypted when loading from the cache. This property can currently be set only in compile_model.
import base64
# Example callbacks: base64 is used here only for illustration and is not real encryption.
def encrypt_base64(src):
    return base64.b64encode(bytes(src, "utf-8"))
def decrypt_base64(src):
    return base64.b64decode(bytes(src, "utf-8"))
core = ov.Core()
core.set_property({props.cache_dir: path_to_cache_dir})
config_cache = {}
config_cache["CACHE_ENCRYPTION_CALLBACKS"] = [encrypt_base64, decrypt_base64]
model = core.read_model(model=model_path)
compiled_model = core.compile_model(model=model, device_name=device_name, config=config_cache)
ov::AnyMap config;
ov::EncryptionCallbacks encryption_callbacks;
// Simple symmetric XOR codec used as both the encrypt and decrypt callback
static const char codec_key[] = {0x30, 0x60, 0x70, 0x02, 0x04, 0x08, 0x3F, 0x6F, 0x72, 0x74, 0x78, 0x7F};
auto codec_xor = [&](const std::string& source_str) {
    auto key_size = sizeof(codec_key);
    int key_idx = 0;
    std::string dst_str = source_str;
    for (char& c : dst_str) {
        c ^= codec_key[key_idx % key_size];
        key_idx++;
    }
    return dst_str;
};
encryption_callbacks.encrypt = codec_xor;
encryption_callbacks.decrypt = codec_xor;
config.insert(ov::cache_encryption_callbacks(encryption_callbacks)); // Step 4: Set device configuration
auto compiled = core.compile_model(model, device, config); // Step 5: LoadNetwork
Important
Currently, this property is supported only by the CPU plugin. For other hardware plugins, setting it will not encrypt/decrypt the model topology in the cache and will not affect performance.