This class provides an interface to the infer requests of an ExecutableNetwork and serves to handle infer request execution and to set and get output data.
Public Member Functions

def __init__(self)
    There is no explicit class constructor.
def set_completion_callback(self, py_callback, py_data=None)
    Sets a callback function that is called on success or failure of an asynchronous request.
def input_blobs(self)
    Dictionary that maps input layer names to corresponding Blobs.
def output_blobs(self)
    Dictionary that maps output layer names to corresponding Blobs.
def preprocess_info(self)
    Dictionary that maps input layer names to corresponding preprocessing information.
def query_state(self)
    Gets the state control interface for the given infer request; state control is essential for recurrent networks.
def set_blob(self, blob_name, blob, preprocess_info)
    Sets a user-defined Blob for the infer request.
def infer(self, inputs=None)
    Starts synchronous inference of the infer request and fills the outputs array.
def async_infer(self, inputs=None)
    Starts asynchronous inference of the infer request and fills the outputs array.
def wait(self, timeout=None)
    Waits for the result to become available.
def get_perf_counts(self)
    Queries per-layer performance measures to identify the most time-consuming layers.
def inputs(self)
    A dictionary that maps input layer names to numpy.ndarray objects of proper shape with input data for the layer.
def outputs(self)
    A dictionary that maps output layer names to numpy.ndarray objects with output data of the layer.
def latency(self)
    Current infer request inference time in milliseconds.
def set_batch(self, size)
    Sets a new batch size for the infer request when dynamic batching is enabled in the executable network that created this request.
Data Fields

input_blobs
    Dictionary that maps input layer names to corresponding Blobs.
output_blobs
    Dictionary that maps output layer names to corresponding Blobs.
preprocess_info
    Dictionary that maps input layer names to corresponding preprocessing information.
inputs
    A dictionary that maps input layer names to numpy.ndarray objects of proper shape with input data for the layer.
outputs
    A dictionary that maps output layer names to numpy.ndarray objects with output data of the layer.
latency
    Current infer request inference time in milliseconds.
Member Function Documentation
def ie_api.InferRequest.__init__(self)
There is no explicit class constructor.
To create a valid InferRequest instance, use the load_network() method of the IECore class with a specified number of requests to get an ExecutableNetwork instance, which stores the infer requests.
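Usage example (a minimal sketch; the model file names and device are placeholders):

from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
# Infer requests are created by load_network(); InferRequest is not instantiated directly
exec_net = ie.load_network(network=net, device_name="CPU", num_requests=2)
request = exec_net.requests[0]  # an InferRequest instance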
def ie_api.InferRequest.async_infer(self, inputs=None)

Starts asynchronous inference of the infer request and fills the outputs array.

Parameters:
    inputs - A dictionary that maps input layer names to numpy.ndarray objects of proper shape with input data for the layer
Usage example:
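The following is a hedged sketch rather than the original snippet; it assumes an exec_net obtained from IECore.load_network() and uses the placeholder layer names "data" and "out" plus an assumed NCHW input shape:

import numpy as np

request = exec_net.requests[0]
input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)  # shape is an assumption

request.async_infer({"data": input_data})  # returns immediately
status = request.wait()                    # blocks until the result is ready (timeout=-1 by default)
if status == 0:                            # 0 corresponds to the OK status code
    result = request.output_blobs["out"].buffer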
def ie_api.InferRequest.get_perf_counts(self)

Queries per-layer performance measures to identify the most time-consuming layers.
Usage example:
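A sketch, assuming an exec_net obtained from IECore.load_network(); the input name "data" and the shape are placeholders, and statistics are available only after the request has executed:

import numpy as np

request = exec_net.requests[0]
request.infer({"data": np.random.rand(1, 3, 224, 224).astype(np.float32)})

perf_counts = request.get_perf_counts()
for layer_name, stats in perf_counts.items():
    # Each entry reports per-layer details such as execution status, layer type and timings
    print(layer_name, stats)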
def ie_api.InferRequest.infer(self, inputs=None)

Starts synchronous inference of the infer request and fills the outputs array.

Parameters:
    inputs - A dictionary that maps input layer names to numpy.ndarray objects of proper shape with input data for the layer
Usage example:
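A minimal synchronous sketch; the layer names "data" and "out" and the input shape are assumptions about the model, and exec_net is assumed to come from IECore.load_network():

import numpy as np

request = exec_net.requests[0]
input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)  # shape is an assumption

request.infer({"data": input_data})          # blocks until inference completes
output = request.output_blobs["out"].buffer  # read the result through the output Blob
print("Latency: {} ms".format(request.latency))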
def ie_api.InferRequest.query_state(self)

Gets the state control interface for the given infer request; state control is essential for recurrent networks.
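Usage example (a hedged sketch, assuming a stateful/recurrent network loaded into exec_net; the input name "data" and the shape are placeholders):

import numpy as np

request = exec_net.requests[0]
request.infer({"data": np.zeros((1, 30), dtype=np.float32)})  # shape is an assumption

# Reset the accumulated memory state before feeding a new sequence
for state in request.query_state():
    state.reset()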
def ie_api.InferRequest.set_batch(self, size)

Sets a new batch size for the infer request when dynamic batching is enabled in the executable network that created this request.

Parameters:
    size - New batch size to be used by all subsequent inference calls for this request
Usage example:
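A sketch, assuming dynamic batching is enabled through the "DYN_BATCH_ENABLED" configuration key and the network batch size defines the upper bound; the model paths and device are placeholders:

from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
net.batch_size = 10  # upper bound for dynamic batching
exec_net = ie.load_network(network=net, device_name="CPU",
                           config={"DYN_BATCH_ENABLED": "YES"}, num_requests=1)

request = exec_net.requests[0]
request.set_batch(4)  # subsequent inference calls for this request use batch size 4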
def ie_api.InferRequest.set_blob(self, blob_name, blob, preprocess_info)

Sets a user-defined Blob for the infer request.

Parameters:
    blob_name - The name of the input blob
    blob - The Blob object to set for the infer request
    preprocess_info - A PreProcessInfo object to set for the infer request
Usage example:
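A sketch that builds a Blob from a TensorDesc; the blob name "data", the FP32 precision and the NCHW shape are assumptions about the model, and exec_net is assumed to come from IECore.load_network():

import numpy as np
from openvino.inference_engine import TensorDesc, Blob

tensor_desc = TensorDesc("FP32", [1, 3, 224, 224], "NCHW")
data = np.ones((1, 3, 224, 224), dtype=np.float32)
blob = Blob(tensor_desc, data)

request = exec_net.requests[0]
request.set_blob(blob_name="data", blob=blob)  # a PreProcessInfo object may also be passed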
def ie_api.InferRequest.set_completion_callback(self, py_callback, py_data=None)

Sets a callback function that is called on success or failure of an asynchronous request.

Parameters:
    py_callback - Any defined or lambda function
    py_data - Data that is passed to the callback function
Usage example:
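A sketch of pairing a completion callback with async_infer(); it assumes the callback receives the request status code and the object passed as py_data, and the input name "data" and shape are placeholders:

import numpy as np

def callback(status_code, user_data):
    # Invoked by the runtime when the asynchronous request finishes
    print("Request {} finished with status {}".format(user_data, status_code))

for request_id, request in enumerate(exec_net.requests):
    request.set_completion_callback(py_callback=callback, py_data=request_id)

exec_net.requests[0].async_infer({"data": np.random.rand(1, 3, 224, 224).astype(np.float32)})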
def ie_api.InferRequest.wait(self, timeout=None)

Waits for the result to become available.

Blocks until the specified timeout elapses or the result becomes available, whichever comes first.

Parameters:
    timeout - Time to wait in milliseconds, or one of the special values: -1 to block until the result becomes available (the default), 0 to return the request status immediately without blocking
Usage example: See the async_infer() method of the InferRequest class.