Overview of Inference Engine Python* API

NOTE: This is a preview version of the Inference Engine Python* API, intended for evaluation purposes only. The module structure and the API itself may change in future releases.

This API provides a simplified interface to Inference Engine functionality, allowing you to:

Supported OSes

Currently, the Inference Engine Python* API is supported on Ubuntu* 16.04, Microsoft Windows* 10, and CentOS* 7.3. Supported Python* versions:

Setting Up the Environment

To configure the environment for the Inference Engine Python* API, run:

The script automatically detects the latest installed Python* version and configures the required environment if that version is supported. If you want to use a certain version of Python*, set the environment variable PYTHONPATH=<INSTALL_DIR>/deployment_tools/inference_engine/python_api/<desired_python_version> after running the environment configuration script.
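As an alternative to exporting PYTHONPATH in the shell, the same path can be appended from within a script. A minimal sketch, assuming a hypothetical install location and the directory layout described above (the INTEL_CVSDK_DIR variable and default path are assumptions, not guaranteed by this document):

```python
import os
import sys

# Hypothetical install location; substitute your actual <INSTALL_DIR>.
install_dir = os.environ.get("INTEL_CVSDK_DIR", "/opt/intel/computer_vision_sdk")

# Build the path to the API modules for one specific Python version
# (assumption: the layout matches
#  <INSTALL_DIR>/deployment_tools/inference_engine/python_api/<version>).
api_path = os.path.join(install_dir, "deployment_tools",
                        "inference_engine", "python_api", "python3.5")
if api_path not in sys.path:
    sys.path.append(api_path)
```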

IENetLayer

This class stores the main information about a layer and allows you to modify some layer parameters.

Class attributes:

To set the affinity for the network correctly, you must first initialize and properly configure the HETERO plugin. The set_config({"TARGET_FALLBACK": "HETERO:FPGA,GPU"}) function configures the plugin fallback devices and their order. The plugin.set_initial_affinity(net) function sets the affinity parameter of the model layers according to their support on the specified devices.
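This configuration can be sketched as follows (a hedged sketch: it assumes the FPGA and GPU plugins are installed and that `net` is an already created `IENetwork` instance):

```python
def configure_hetero_affinity(net):
    """Configure the HETERO plugin and let it assign default layer
    affinities. Sketch only: assumes FPGA and GPU plugins are available."""
    from openvino.inference_engine import IEPlugin

    plugin = IEPlugin(device="HETERO:FPGA,GPU")
    # Define the fallback order of devices for layers a device cannot run.
    plugin.set_config({"TARGET_FALLBACK": "HETERO:FPGA,GPU"})
    # Annotate each layer of the network with a default affinity.
    plugin.set_initial_affinity(net)
    return plugin
```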

After the plugin sets the default affinity, you can override the default values by setting the affinity manually, as described in the example above.

To understand how default and non-default affinities are set:

  1. Call the net.layers function right after loading the model and check that the affinity parameter of each layer is empty.
  2. Call plugin.set_initial_affinity(net).
  3. Call net.layers and check the layer affinity parameters to see how the plugin set the default affinity.
  4. Set the layer affinity manually, as described above.
  5. Call net.layers again and check the layer affinity parameters to see how they changed after the manual affinity setting.
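The steps above can be sketched in one function (a hedged sketch; the model paths are placeholders, the layer chosen in step 4 is purely illustrative, and in some preview drops the network is created with the IENetwork.from_ir() class method instead of the constructor shown here):

```python
def inspect_affinities(model_xml, model_bin):
    """Walk through the default/manual affinity steps listed above."""
    from openvino.inference_engine import IENetwork, IEPlugin

    net = IENetwork(model=model_xml, weights=model_bin)

    # Step 1: right after loading, every layer's affinity is empty.
    print({name: layer.affinity for name, layer in net.layers.items()})

    # Step 2: let the HETERO plugin assign default affinities.
    plugin = IEPlugin(device="HETERO:FPGA,GPU")
    plugin.set_config({"TARGET_FALLBACK": "HETERO:FPGA,GPU"})
    plugin.set_initial_affinity(net)

    # Step 3: affinities are now filled in by the plugin.
    print({name: layer.affinity for name, layer in net.layers.items()})

    # Step 4: override the affinity of one layer manually
    # (which layer to pick depends entirely on your model).
    some_layer = next(iter(net.layers))
    net.layers[some_layer].affinity = "GPU"

    # Step 5: check that the manual setting took effect.
    print(net.layers[some_layer].affinity)
```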

Please refer to affinity_setting_demo.py to see the full usage pipeline.

IENetwork

This class contains the information about a network model read from the IR and allows you to manipulate certain model parameters, such as layer affinity and output layers.
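A minimal sketch of reading a model from IR files and inspecting it (the file paths and the extra output layer name are placeholders; in some preview drops the network is created with the IENetwork.from_ir() class method instead of the constructor):

```python
def load_network(model_xml, model_bin):
    """Read a model from its IR .xml/.bin pair; sketch only."""
    from openvino.inference_engine import IENetwork

    net = IENetwork(model=model_xml, weights=model_bin)

    print(net.inputs)       # input layer names and their info
    print(net.outputs)      # output layer names
    print(net.batch_size)   # current batch size

    # Mark an extra layer as an output (the layer name is illustrative).
    net.add_outputs(["conv5_1"])
    return net
```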

Class Constructor

Class attributes:

Class Methods

Instance Methods

LayerStats

Layer calibration statistics container.

Class Constructor
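A small sketch of building a statistics entry (the per-channel values here are purely illustrative, collected during a hypothetical calibration run):

```python
def make_layer_stats():
    """Build a calibration statistics entry; values are illustrative."""
    from openvino.inference_engine import LayerStats

    # Per-channel minimum and maximum activation values gathered during
    # calibration (three channels in this illustrative example).
    stats = LayerStats(min=(-1.0, -2.5, -0.7), max=(1.0, 2.5, 0.7))
    return stats
```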

InputInfo

This class contains information about the network input layers.

Class attributes:

OutputInfo

This class contains information about the network output layers.

Class attributes:

IEPlugin Class

This class is the main plugin interface and serves to initialize and configure the plugin.
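A sketch of creating and configuring a plugin (the extension library path is purely illustrative):

```python
def create_plugin(device="CPU"):
    """Initialize a plugin for the target device; sketch only."""
    from openvino.inference_engine import IEPlugin

    plugin = IEPlugin(device=device)
    print(plugin.device)   # target device name
    print(plugin.version)  # plugin version string

    # On CPU, custom layers may require an extension library
    # (the library path here is a placeholder).
    if device == "CPU":
        plugin.add_cpu_extension("/path/to/libcpu_extension.so")
    return plugin
```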

Class Constructor

Properties

Instance Methods

Return value: None

set_config(config: dict)

get_supported_layers(net: IENetwork)
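A typical use of this method is to find the layers a plugin cannot run before loading the network. A minimal sketch, assuming `plugin` and `net` were created as shown elsewhere in this document:

```python
def check_layer_support(plugin, net):
    """Return the list of layers the plugin does not support; sketch only."""
    supported = plugin.get_supported_layers(net)
    not_supported = [name for name in net.layers if name not in supported]
    if not_supported:
        print("Unsupported layers:", not_supported)
    return not_supported
```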

ExecutableNetwork Class

This class represents a network instance loaded to the plugin and ready for inference.

Class Constructor

There is no explicit class constructor. To make a valid instance of ExecutableNetwork, use the load() method of the IEPlugin class.
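A sketch of creating an ExecutableNetwork and running one synchronous request (assumes `plugin` and `net` were created as shown earlier; `image` must already match the model's input shape):

```python
def run_sync_inference(plugin, net, image):
    """Load the network to the plugin and run one blocking request;
    sketch only."""
    # num_requests controls how many infer requests are pre-allocated.
    exec_net = plugin.load(network=net, num_requests=1)
    input_blob = next(iter(net.inputs))
    # infer() blocks until the result is ready and returns a dict of outputs.
    result = exec_net.infer(inputs={input_blob: image})
    return result
```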

Class attributes

Instance Methods

For more details about infer request processing, see the classification_sample_async.py (simplified case) and object_detection_demo_ssd_async.py (real asynchronous use case) samples.
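The asynchronous pattern used in those samples can be sketched as follows (a hedged sketch; `exec_net` is assumed to have been created with num_requests > 1, and `frames` is any iterable of preprocessed inputs):

```python
def run_async_inference(exec_net, input_blob, frames):
    """Pipeline inputs through asynchronous infer requests; sketch only."""
    results = []
    for i, frame in enumerate(frames):
        request_id = i % len(exec_net.requests)
        # start_async() returns immediately; the request runs in the
        # background on the device.
        exec_net.start_async(request_id=request_id, inputs={input_blob: frame})
        # wait(-1) blocks until this request completes; 0 means OK.
        if exec_net.requests[request_id].wait(-1) == 0:
            results.append(exec_net.requests[request_id].outputs)
    return results
```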

InferRequest Class

This class provides an interface to infer requests of ExecutableNetwork and serves to handle infer requests execution and to set and get output data.

Class Constructor

There is no explicit class constructor. To make a valid InferRequest instance, use the load() method of the IEPlugin class with a specified number of requests to get an ExecutableNetwork instance, which stores the infer requests.

Class attributes

Instance Methods

It is not recommended to run inference directly on an InferRequest instance. To run inference, use the simplified infer() and start_async() methods of ExecutableNetwork.
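Reading results from a finished request, on the other hand, is done on the InferRequest instance itself. A minimal sketch, assuming `exec_net` was created as shown earlier and its request has completed:

```python
def read_request_outputs(exec_net, request_id=0):
    """Access the output blobs of a finished infer request; sketch only."""
    request = exec_net.requests[request_id]
    # outputs is a dict mapping output blob names to numpy arrays.
    for name, blob in request.outputs.items():
        print(name, blob.shape)
    return request.outputs
```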