openvino.inference_engine.ExecutableNetwork

class openvino.inference_engine.ExecutableNetwork

This class represents a network instance loaded to a plugin and ready for inference.

__init__()

There is no explicit class constructor. To make a valid instance of ExecutableNetwork, use the IECore.load_network() method of the IECore class.
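
A minimal sketch of the typical workflow (path_to_xml_file and path_to_bin_file are placeholders for the model IR files, matching the examples below):

from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model=path_to_xml_file, weights=path_to_bin_file)
# load_network() returns a ready-to-use ExecutableNetwork instance
exec_net = ie.load_network(network=net, device_name="CPU", num_requests=2)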

Methods

__init__

There is no explicit class constructor.

export(self, unicode model_file)

Exports the current executable network.

get_config(self, unicode config_name)

Gets configuration for the current executable network.

get_exec_graph_info(self)

Gets executable graph information from a device.

get_idle_request_id(self)

Gets an idle infer request ID.

get_metric(self, unicode metric_name)

Gets a general runtime metric for an executable network.

infer(self[, inputs])

Starts synchronous inference for the first infer request of the executable network and returns output data.

set_config(self, dict config)

Sets configuration for the current executable network.

start_async(self, request_id[, inputs])

Starts asynchronous inference for the specified infer request.

wait(self[, num_requests, timeout])

Waits until the result from any request becomes available.

Attributes

input_info

A dictionary that maps input layer names to InputInfoCPtr objects

inputs

A dictionary that maps input layer names to DataPtr objects

outputs

A dictionary that maps output layer names to CDataPtr objects

requests

A tuple of InferRequest instances

export(self, unicode model_file: str)

Exports the current executable network.

Parameters

model_file – Full path to the target exported file location

Returns

None

Usage example:

ie = IECore()
net = ie.read_network(model=path_to_xml_file, weights=path_to_bin_file)
exec_net = ie.load_network(network=net, device_name="MYRIAD", num_requests=2)
exec_net.export(path_to_file_to_save)

get_config(self, unicode config_name: str)

Gets configuration for the current executable network. The method extracts information that affects executable network execution.

Parameters

config_name – A configuration parameter name to request.

Returns

A configuration value corresponding to a configuration key.

Usage example:

ie = IECore()
net = ie.read_network(model=path_to_xml_file, weights=path_to_bin_file)
exec_net = ie.load_network(net, "CPU")
config = exec_net.get_config("CPU_BIND_THREAD")

get_exec_graph_info(self)

Gets executable graph information from a device.

Returns

An instance of IENetwork

Usage example:

ie_core = IECore()
net = ie_core.read_network(model=path_to_xml_file, weights=path_to_bin_file)
exec_net = ie_core.load_network(net, device, num_requests=2)
exec_graph = exec_net.get_exec_graph_info()

get_idle_request_id(self)

Gets an idle infer request ID.

Returns

Request index
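
Usage example (a minimal sketch; exec_net, input_blob and frames are assumed to be defined elsewhere, and the method is assumed to return a negative value when no request is currently idle):

for frame in frames:
    request_id = exec_net.get_idle_request_id()
    if request_id < 0:
        # No idle request yet: block until at least one request completes
        exec_net.wait(num_requests=1)
        request_id = exec_net.get_idle_request_id()
    exec_net.start_async(request_id=request_id, inputs={input_blob: frame})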

get_metric(self, unicode metric_name: str)

Gets a general runtime metric for an executable network. It can be the network name, the actual device ID on which the executable network is running, or other properties that cannot be changed dynamically.

Parameters

metric_name – A metric name to request.

Returns

A metric value corresponding to a metric key.

Usage example:

ie = IECore()
net = ie.read_network(model=path_to_xml_file, weights=path_to_bin_file)
exec_net = ie.load_network(net, "CPU")
exec_net.get_metric("NETWORK_NAME")

infer(self, inputs=None)

Starts synchronous inference for the first infer request of the executable network and returns output data. Wraps the InferRequest.infer() method of the InferRequest class.

Parameters

inputs – A dictionary that maps input layer names to numpy.ndarray objects of proper shape with input data for the layer

Returns

A dictionary that maps output layer names to numpy.ndarray objects with output data of the layer

Usage example:

ie_core = IECore()
net = ie_core.read_network(model=path_to_xml_file, weights=path_to_bin_file)
exec_net = ie_core.load_network(network=net, device_name="CPU", num_requests=2)
res = exec_net.infer({'data': img})
res
{'prob': array([[[[2.83426580e-08]],
              [[2.40166020e-08]],
              [[1.29469613e-09]],
              [[2.95946148e-08]]
              ......
             ]])}

input_info

A dictionary that maps input layer names to InputInfoCPtr objects

inputs

A dictionary that maps input layer names to DataPtr objects

Note

This property is deprecated. Please use the input_info property to get the map of inputs instead.

outputs

A dictionary that maps output layer names to CDataPtr objects

requests

A tuple of InferRequest instances
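
Usage example (a brief sketch of reading these attributes; exec_net is an ExecutableNetwork created as in the examples above, and the layer names depend on the loaded model):

input_name = next(iter(exec_net.input_info))   # name of the first input layer
output_name = next(iter(exec_net.outputs))     # name of the first output layer
print(exec_net.input_info[input_name].input_data.shape)
print(exec_net.outputs[output_name].shape)
print(len(exec_net.requests))                  # number of created infer requests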

set_config(self, dict config: dict)

Sets configuration for the current executable network.

Parameters

config – A dictionary of configuration parameters as keys and their values

Returns

None
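
Usage example (a minimal sketch; changing MULTI_DEVICE_PRIORITIES for a network loaded to the MULTI device is assumed here as a typical use of this method):

ie = IECore()
net = ie.read_network(model=path_to_xml_file, weights=path_to_bin_file)
exec_net = ie.load_network(network=net, device_name="MULTI:CPU,GPU")
# Assumed: the MULTI plugin accepts updated device priorities at run time
exec_net.set_config({"MULTI_DEVICE_PRIORITIES": "GPU,CPU"})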

start_async(self, request_id, inputs=None)

Starts asynchronous inference for the specified infer request. Wraps the InferRequest.async_infer() method of the InferRequest class.

Parameters
  • request_id – Index of the infer request to start inference with

  • inputs – A dictionary that maps input layer names to numpy.ndarray objects of proper shape with input data for the layer

Returns

A handler of specified infer request, which is an instance of the InferRequest class.

Usage example:

infer_request_handle = exec_net.start_async(request_id=0, inputs={input_blob: image})
infer_status = infer_request_handle.wait()
res = infer_request_handle.output_blobs[out_blob_name]

wait(self, num_requests=None, timeout=None)

Waits until the result from any request becomes available. Blocks until the specified timeout elapses or the result becomes available, whichever comes first.

Parameters
  • num_requests – Number of idle requests to wait for. If not specified, num_requests is set to the total number of requests by default.

  • timeout – Time to wait in milliseconds, or one of the special values: 0 to return the status immediately, -1 to block until the result becomes available. If not specified, timeout is set to -1 by default.

Returns

Request status code: OK or RESULT_NOT_READY
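
Usage example (a minimal sketch; exec_net, input_blob and image are assumed to be defined as in the previous examples):

exec_net.start_async(request_id=0, inputs={input_blob: image})
# Block until at least one request completes (-1 means wait indefinitely);
# the call returns OK or RESULT_NOT_READY
status = exec_net.wait(num_requests=1, timeout=-1)
res = exec_net.requests[0].output_blobs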