openvino.runtime.CompiledModel

class openvino.runtime.CompiledModel

Bases: openvino._pyopenvino.CompiledModel

CompiledModel class.

CompiledModel represents a Model that is compiled for a specific device by applying multiple optimization transformations and then mapping it to compute kernels.
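
A CompiledModel is usually obtained from Core.compile_model rather than constructed directly. A minimal sketch follows; the model file name, the device name, the dtype, and the assumption of a single statically shaped input are all placeholders for illustration:

    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")           # hypothetical model file
    compiled = core.compile_model(model, "CPU")    # example device name

    # Assuming one input with a static shape:
    data = np.zeros(list(compiled.input(0).get_shape()), dtype=np.float32)
    results = compiled([data])                     # see __call__ below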

__init__(self: openvino._pyopenvino.CompiledModel, other: openvino._pyopenvino.CompiledModel) → None

Methods

__call__([inputs])

Callable infer wrapper for CompiledModel.

__delattr__(name, /)

Implement delattr(self, name).

__dir__()

Default dir() implementation.

__eq__(value, /)

Return self==value.

__format__(format_spec, /)

Default object formatter.

__ge__(value, /)

Return self>=value.

__getattribute__(name, /)

Return getattr(self, name).

__gt__(value, /)

Return self>value.

__hash__()

Return hash(self).

__init__(self, other)

__init_subclass__

This method is called when a class is subclassed.

__le__(value, /)

Return self<=value.

__lt__(value, /)

Return self<value.

__ne__(value, /)

Return self!=value.

__new__(**kwargs)

__reduce__()

Helper for pickle.

__reduce_ex__(protocol, /)

Helper for pickle.

__repr__(self)

__setattr__(name, value, /)

Implement setattr(self, name, value).

__sizeof__()

Size of object in memory, in bytes.

__str__()

Return str(self).

__subclasshook__

Abstract classes can override this to customize issubclass().

create_infer_request()

Creates an inference request object used to infer the compiled model.

export_model(*args, **kwargs)

Overloaded function.

get_property(self, property)

Gets properties for current compiled model.

get_runtime_model(self)

Gets runtime model information from a device.

infer_new_request([inputs])

Infers specified input(s) in synchronous mode.

input(*args, **kwargs)

Overloaded function.

output(*args, **kwargs)

Overloaded function.

set_property(*args, **kwargs)

Overloaded function.

Attributes

inputs

Gets all inputs of a compiled model.

outputs

Gets all outputs of a compiled model.

__call__(inputs: Optional[Union[dict, list]] = None) → dict

Callable infer wrapper for CompiledModel.

Take a look at infer_new_request for reference.
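
A sketch of the callable form, reusing compiled and data from the example at the top of this page; it behaves like infer_new_request:

    results = compiled([data])            # inputs as a list, matched by position
    results = compiled({0: data})         # or as a dict keyed by index, name, or port
    out = results[compiled.output(0)]     # results are keyed by output ConstOutput ports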

__class__

alias of pybind11_builtins.pybind11_type

__delattr__(name, /)

Implement delattr(self, name).

__dir__()

Default dir() implementation.

__eq__(value, /)

Return self==value.

__format__(format_spec, /)

Default object formatter.

__ge__(value, /)

Return self>=value.

__getattribute__(name, /)

Return getattr(self, name).

__gt__(value, /)

Return self>value.

__hash__()

Return hash(self).

__init__(self: openvino._pyopenvino.CompiledModel, other: openvino._pyopenvino.CompiledModel) → None

__init_subclass__()

This method is called when a class is subclassed.

The default implementation does nothing. It may be overridden to extend subclasses.

__le__(value, /)

Return self<=value.

__lt__(value, /)

Return self<value.

__ne__(value, /)

Return self!=value.

__new__(**kwargs)

__reduce__()

Helper for pickle.

__reduce_ex__(protocol, /)

Helper for pickle.

__repr__(self: openvino._pyopenvino.CompiledModel) → str

__setattr__(name, value, /)

Implement setattr(self, name, value).

__sizeof__()

Size of object in memory, in bytes.

__str__()

Return str(self).

__subclasshook__()

Abstract classes can override this to customize issubclass().

This is invoked early on by abc.ABCMeta.__subclasscheck__(). It should return True, False or NotImplemented. If it returns NotImplemented, the normal algorithm is used. Otherwise, it overrides the normal algorithm (and the outcome is cached).

create_infer_request() → openvino.runtime.ie_api.InferRequest

Creates an inference request object used to infer the compiled model.

The created request has allocated input and output tensors.

Returns

New InferRequest object.

Return type

openvino.runtime.InferRequest
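
A minimal sketch of the dedicated-request workflow, with compiled and data reused from the earlier example:

    request = compiled.create_infer_request()
    request.infer({0: data})                  # synchronous inference on this request
    out = request.get_output_tensor(0).data   # numpy view over the first output tensor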

export_model(*args, **kwargs)

Overloaded function.

  1. export_model(self: openvino._pyopenvino.CompiledModel) -> bytes

    Exports the compiled model to bytes/output stream.

    GIL is released while running this function.

    return

    Bytes object that contains this compiled model.

    rtype

    bytes

    user_stream = compiled.export_model()
    
    with open('./my_model', 'wb') as f:
        f.write(user_stream)
    
    # ...
    
    new_compiled = core.import_model(user_stream, "CPU")
    
  2. export_model(self: openvino._pyopenvino.CompiledModel, model_stream: object) -> None

    Exports the compiled model to bytes/output stream.

    Advanced version of export_model. It utilizes streams from the standard Python library io.

    The function flushes the stream, writes to it, and then rewinds it to the beginning (using seek(0)).

    GIL is released while running this function.

    param model_stream

    A stream object to which the model will be serialized.

    type model_stream

    io.BytesIO

    rtype

    None

    import io

    user_stream = io.BytesIO()
    compiled.export_model(user_stream)
    
    with open('./my_model', 'wb') as f:
        f.write(user_stream.getvalue()) # or read() if seek(0) was applied before
    
    # ...
    
    new_compiled = core.import_model(user_stream, "CPU")
    
get_property(self: openvino._pyopenvino.CompiledModel, property: str) → object

Gets properties for current compiled model.

Parameters

property (str) – Property name.

Return type

Any
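
A short example. The key below is a standard OpenVINO property name, but the set of supported keys depends on the device, so treat it as an illustration:

    # Query how many parallel infer requests the plugin recommends.
    n_requests = compiled.get_property("OPTIMAL_NUMBER_OF_INFER_REQUESTS")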

get_runtime_model(self: openvino._pyopenvino.CompiledModel) → openvino._pyopenvino.Model

Gets runtime model information from a device.

This object (returned model) represents the internal device-specific model which is optimized for the particular accelerator. It contains device-specific nodes, runtime information, and can be used only to understand how the source model is optimized and which kernels, element types, and layouts are selected.

Returns

Model, containing Executable Graph information.

Return type

openvino.runtime.Model
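
A sketch that walks the executable graph; the exact runtime-info keys (for example, the chosen kernel or primitive type) are device-specific:

    runtime_model = compiled.get_runtime_model()
    for node in runtime_model.get_ordered_ops():
        # Runtime info contents vary per device plugin.
        print(node.get_friendly_name(), node.get_rt_info())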

infer_new_request(inputs: Optional[Union[dict, list, tuple, openvino._pyopenvino.Tensor, numpy.ndarray]] = None) → dict

Infers specified input(s) in synchronous mode.

Blocks all methods of CompiledModel while the request is running.

The method creates a new temporary InferRequest and runs inference on it. For performance, optimized workflows, and advanced pipelines, it is advised to use a dedicated InferRequest instead.

The allowed types of keys in the inputs dictionary are:

  1. int

  2. str

  3. openvino.runtime.ConstOutput

The allowed types of values in the inputs are:

  1. numpy.array

  2. openvino.runtime.Tensor

The method can also be called with a single openvino.runtime.Tensor or numpy.array; this works only with single-input models. If the model has more than one input, the function throws an error.

Parameters

inputs (Union[Dict[keys, values], List[values], Tuple[values], Tensor, numpy.array], optional) – Data to be set on input tensors.

Returns

Dictionary of results from output tensors with ports as keys.

Return type

Dict[openvino.runtime.ConstOutput, numpy.array]
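
A sketch of the accepted key types, reusing compiled and data from the first example:

    results = compiled.infer_new_request({0: data})                  # key by input index
    results = compiled.infer_new_request({compiled.input(0): data})  # key by ConstOutput port
    results = compiled.infer_new_request(data)                       # single-input models only
    out = results[compiled.output(0)]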

input(*args, **kwargs)

Overloaded function.

  1. input(self: openvino._pyopenvino.CompiledModel) -> openvino._pyopenvino.ConstOutput

    Gets a single input of a compiled model. If the model has more than one input, this method throws an exception.

    return

    A compiled model input.

    rtype

    openvino.runtime.ConstOutput

  2. input(self: openvino._pyopenvino.CompiledModel, index: int) -> openvino._pyopenvino.ConstOutput

    Gets input of a compiled model identified by an index. If the input with given index is not found, this method throws an exception.

    param index

    An input index.

    type index

    int

    return

    A compiled model input.

    rtype

    openvino.runtime.ConstOutput

  3. input(self: openvino._pyopenvino.CompiledModel, tensor_name: str) -> openvino._pyopenvino.ConstOutput

    Gets input of a compiled model identified by a tensor_name. If the input with given tensor name is not found, this method throws an exception.

    param tensor_name

    An input tensor name.

    type tensor_name

    str

    return

    A compiled model input.

    rtype

    openvino.runtime.ConstOutput
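
Short examples of the three overloads above; the tensor name is a placeholder:

    port = compiled.input()        # valid only for single-input models
    port = compiled.input(0)       # by index
    port = compiled.input("data")  # by tensor name ("data" is a placeholder)
    print(port.get_any_name())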

property inputs

Gets all inputs of a compiled model.

Returns

Inputs of a compiled model.

Return type

List[openvino.runtime.ConstOutput]
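
For example, to list every input port of the compiled model:

    for port in compiled.inputs:
        print(port.get_any_name(), port.get_element_type())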

output(*args, **kwargs)

Overloaded function.

  1. output(self: openvino._pyopenvino.CompiledModel) -> openvino._pyopenvino.ConstOutput

    Gets a single output of a compiled model. If the model has more than one output, this method throws an exception.

    return

    A compiled model output.

    rtype

    openvino.runtime.ConstOutput

  2. output(self: openvino._pyopenvino.CompiledModel, index: int) -> openvino._pyopenvino.ConstOutput

    Gets output of a compiled model identified by an index. If the output with given index is not found, this method throws an exception.

    param index

    An output index.

    type index

    int

    return

    A compiled model output.

    rtype

    openvino.runtime.ConstOutput

  3. output(self: openvino._pyopenvino.CompiledModel, tensor_name: str) -> openvino._pyopenvino.ConstOutput

    Gets output of a compiled model identified by a tensor_name. If the output with given tensor name is not found, this method throws an exception.

    param tensor_name

    An output tensor name.

    type tensor_name

    str

    return

    A compiled model output.

    rtype

    openvino.runtime.ConstOutput
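
Short examples of the three overloads above; the tensor name is a placeholder:

    port = compiled.output()        # valid only for single-output models
    port = compiled.output(0)       # by index
    port = compiled.output("prob")  # by tensor name ("prob" is a placeholder)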

property outputs

Gets all outputs of a compiled model.

Returns

Outputs of a compiled model.

Return type

List[openvino.runtime.ConstOutput]

set_property(*args, **kwargs)

Overloaded function.

  1. set_property(self: openvino._pyopenvino.CompiledModel, properties: Dict[str, object]) -> None

    Sets properties for current compiled model.

    param properties

    Dict of pairs: (property name, property value)

    type properties

    dict

    rtype

    None

  2. set_property(self: openvino._pyopenvino.CompiledModel, property: Tuple[str, object]) -> None

    Sets properties for current compiled model.

    param property

    Tuple of (property name, matching property value).

    type property

    tuple
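
A sketch of both overloads. Which properties can still be changed on an already compiled model is device-dependent, and the key below is only an example of a standard OpenVINO property name:

    # Dict overload: several (name, value) pairs at once (support is device-dependent).
    compiled.set_property({"PERF_COUNT": True})
    # Tuple overload: a single (name, value) pair.
    compiled.set_property(("PERF_COUNT", True))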