Overview of Inference Engine Python* API

NOTE: This is a preview version of the Inference Engine Python* API, provided for evaluation purposes only. The module structure and the API itself may change in future releases.

This API provides a simplified interface to Inference Engine functionality.

Supported OSes

The Inference Engine Python* API is currently supported on Ubuntu* 16.04 and 18.04, Windows* 10, CentOS* 7.3, and macOS* 10.x. Supported Python* versions are listed below:

Operating System    Supported Python* Versions
Ubuntu* 16.04       2.7, 3.5, 3.6, 3.7
Ubuntu* 18.04       2.7, 3.5, 3.6, 3.7
Windows* 10         3.5, 3.6, 3.7
CentOS* 7.3         3.4, 3.5, 3.6, 3.7
macOS* 10.x         3.5, 3.6, 3.7

Setting Up the Environment

To configure the environment for the Inference Engine Python* API, run:

The script automatically detects the latest installed Python* version and configures the required environment if that version is supported. To use a specific Python* version, set the environment variable PYTHONPATH=<INSTALL_DIR>/deployment_tools/inference_engine/python_api/<desired_python_version> after running the environment configuration script.
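As a sketch, the configuration script in a standard Linux install is setupvars.sh; the /opt/intel/openvino install location and the python3.6 subdirectory below are assumptions for illustration — substitute your actual <INSTALL_DIR> and Python* version:

```shell
# Assumed default Linux install location; adjust for your <INSTALL_DIR> and OS.
source /opt/intel/openvino/bin/setupvars.sh

# Optional: pin a specific supported Python* version afterwards.
export PYTHONPATH=/opt/intel/openvino/deployment_tools/inference_engine/python_api/python3.6
```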

IECore

This class represents an Inference Engine entity and allows you to work with plugins through unified interfaces.

Class Constructor

__init__(xml_config_file: str = "")
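A minimal usage sketch of the default constructor; the import path and the available_devices attribute are the standard ones for this API, and the block is guarded so it degrades gracefully when the bindings are not installed:

```python
# Guarded sketch: requires the OpenVINO Inference Engine Python bindings.
try:
    from openvino.inference_engine import IECore
except ImportError:
    IECore = None  # bindings not installed in this environment

if IECore is not None:
    ie = IECore()               # default-constructed, no XML config file
    print(ie.available_devices)  # list of devices visible to the core, e.g. ['CPU', ...]
```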

Class Attributes

Instance Methods

IENetLayer

This class stores the main information about a layer and allows you to modify some layer parameters.

Class Attributes

IENetwork

This class contains information about the network model read from an IR and allows you to modify some model parameters, such as layer affinity and output layers.

Class Constructor

__init__(model: [bytes, str], weights: [bytes, str], init_from_buffer: bool=False, ngraph_compatibility: bool=False)
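A sketch of both construction paths — from file paths and from in-memory buffers via init_from_buffer; the model.xml/model.bin paths are placeholders, and the block is guarded so it degrades gracefully when the bindings are not installed:

```python
# Guarded sketch: requires the OpenVINO Inference Engine Python bindings.
try:
    from openvino.inference_engine import IENetwork
except ImportError:
    IENetwork = None  # bindings not installed in this environment

if IENetwork is not None:
    # Construct from IR file paths (placeholders):
    net = IENetwork(model="model.xml", weights="model.bin")

    # Or construct from in-memory buffers:
    with open("model.xml", "rb") as f:
        xml_buf = f.read()
    with open("model.bin", "rb") as f:
        bin_buf = f.read()
    net = IENetwork(model=xml_buf, weights=bin_buf, init_from_buffer=True)
```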

Class Attributes

Class Methods

Instance Methods

LayerStats

Layer calibration statistic container.

Class Constructor

InputInfo

This class contains the information about the network input layers

Class Attributes

OutputInfo

This class contains the information about the network output layers

Class Attributes

IEPlugin Class

This class is the main plugin interface and serves to initialize and configure the plugin.

Class Constructor

Properties

Instance Methods

Return value: None

set_config(config: dict)

get_supported_layers(net: IENetwork)

ExecutableNetwork Class

This class represents a network instance loaded to plugin and ready for inference.

Class Constructor

There is no explicit class constructor. To make a valid instance of ExecutableNetwork, use the load() method of the IEPlugin class.

Class Attributes

Instance Methods

For more details about infer requests processing, see classification_sample_async.py (simplified case) and object_detection_demo_ssd_async.py (real asynchronous use case) samples.

net = IENetwork(model=path_to_xml_file, weights=path_to_bin_file)
plugin = IEPlugin(device="CPU")
exec_net = plugin.load(network=net, num_requests=2)
exec_graph = exec_net.get_exec_graph_info()

InferRequest Class

This class provides an interface to infer requests of ExecutableNetwork and serves to handle infer requests execution and to set and get output data.

Class Constructor

There is no explicit class constructor. To make a valid InferRequest instance, use the load() method of the IEPlugin class with a specified number of requests to get an ExecutableNetwork instance, which stores the infer requests.

Class Attributes

Instance Methods

Running inference directly on an InferRequest instance is not recommended. To run inference, use the simplified infer() and start_async() methods of ExecutableNetwork.

callback = lambda status, py_data: print("Request with id {} finished with status {}".format(py_data, status))
net = IENetwork("./model.xml", "./model.bin")
ie = IECore()
exec_net = ie.load_network(net, "CPU", num_requests=4)
for id, req in enumerate(exec_net.requests):
    req.set_completion_callback(py_callback=callback, py_data=id)
for req in exec_net.requests:
    req.async_infer({"data": img})
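The completion-callback flow above can be illustrated without the library. The FakeRequest class below is a hypothetical stand-in for illustration only, not part of the Inference Engine API; it "completes" synchronously with status 0 (success) so the callback wiring is easy to follow:

```python
# Hypothetical stand-in mimicking how a completion callback is invoked
# once an asynchronous request finishes. Not part of the real API.
class FakeRequest:
    def __init__(self):
        self._callback = None
        self._data = None

    def set_completion_callback(self, py_callback, py_data=None):
        self._callback = py_callback
        self._data = py_data

    def async_infer(self, inputs):
        # A real request would run inference asynchronously; here we
        # complete immediately with status 0 and fire the callback.
        if self._callback is not None:
            self._callback(0, self._data)

results = []
callback = lambda status, py_data: results.append((py_data, status))
requests = [FakeRequest() for _ in range(4)]
for i, req in enumerate(requests):
    req.set_completion_callback(py_callback=callback, py_data=i)
for req in requests:
    req.async_infer({"data": None})
print(results)  # [(0, 0), (1, 0), (2, 0), (3, 0)]
```

Each request reports its own id back through py_data, which is how the original snippet distinguishes which request finished.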