Class InferenceEngine::IInferRequest¶
-
class IInferRequest : public std::enable_shared_from_this<IInferRequest>¶
This is an interface of an asynchronous infer request.
- Deprecated:
Use the InferenceEngine::InferRequest C++ wrapper instead
Public Types
-
enum WaitMode¶
Enumeration to hold wait mode for IInferRequest.
Values:
-
enumerator RESULT_READY¶
Wait until the inference result becomes available
-
enumerator STATUS_ONLY¶
IInferRequest does not block or interrupt the current thread and immediately returns the inference status
-
using Ptr = std::shared_ptr<IInferRequest>¶
A shared pointer to the IInferRequest object.
-
using WeakPtr = std::weak_ptr<IInferRequest>¶
A weak pointer to the IInferRequest object.
-
typedef void (*CompletionCallback)(InferenceEngine::IInferRequest::Ptr context, InferenceEngine::StatusCode code)¶
Completion callback definition as pointer to a function.
- Param context
Pointer to the request that provides context inside the callback
- Param code
Completion result status: InferenceEngine::OK (0) for success
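The following is a minimal sketch of a function that matches this typedef. The name OnInferDone is illustrative; the runtime passes the completed request and its status code as arguments.
#include <iostream>
#include <inference_engine.hpp>

// Matches InferenceEngine::IInferRequest::CompletionCallback.
void OnInferDone(InferenceEngine::IInferRequest::Ptr context,
                 InferenceEngine::StatusCode code) {
    if (code == InferenceEngine::OK) {
        std::cout << "Inference finished successfully" << std::endl;
    } else {
        std::cout << "Inference failed with status " << code << std::endl;
    }
    // 'context' is the request that completed; it can be used, for example,
    // to fetch output blobs via GetBlob or to restart it via StartAsync.
}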
Public Functions
-
virtual StatusCode SetBlob(const char *name, const Blob::Ptr &data, ResponseDesc *resp) noexcept = 0¶
Sets input/output data to infer.
Note
Memory allocation does not happen
- Parameters
name – Name of input or output blob.
data – Reference to input or output blob. The type of a blob must match the network input precision and size.
resp – Optional: pointer to an already allocated object to contain information in case of failure
- Returns
Status code of the operation: InferenceEngine::OK (0) for success
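A minimal sketch of attaching a caller-allocated blob, assuming the request was obtained elsewhere (for example, from an executable network) and that the network has an FP32 input named "input" with shape 1x3x224x224; both the name and the shape are placeholders. Because SetBlob does not allocate memory, the blob is allocated by the caller.
#include <iostream>
#include <inference_engine.hpp>

using namespace InferenceEngine;

StatusCode SetInputBlob(IInferRequest::Ptr request) {
    // Describe and allocate the input blob; SetBlob itself does not allocate.
    TensorDesc desc(Precision::FP32, {1, 3, 224, 224}, Layout::NCHW);
    Blob::Ptr input = make_shared_blob<float>(desc);
    input->allocate();

    ResponseDesc resp;
    StatusCode status = request->SetBlob("input", input, &resp);
    if (status != OK) {
        std::cerr << "SetBlob failed: " << resp.msg << std::endl;
    }
    return status;
}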
-
virtual StatusCode GetBlob(const char *name, Blob::Ptr &data, ResponseDesc *resp) noexcept = 0¶
Gets input/output data for inference.
Note
Memory allocation does not happen
- Parameters
name – Name of input or output blob.
data – Reference to input or output blob. The type of Blob must match the network input precision and size.
resp – Optional: pointer to an already allocated object to contain information in case of failure
- Returns
Status code of the operation: InferenceEngine::OK (0) for success
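A minimal sketch of reading results after inference has completed; "output" is a placeholder blob name and FP32 output data is assumed.
#include <algorithm>
#include <iostream>
#include <inference_engine.hpp>

using namespace InferenceEngine;

void PrintOutput(IInferRequest::Ptr request) {
    Blob::Ptr output;
    ResponseDesc resp;
    if (request->GetBlob("output", output, &resp) != OK) {
        std::cerr << "GetBlob failed: " << resp.msg << std::endl;
        return;
    }
    // Interpret the blob contents as FP32 and print the first few values.
    const float* data = output->buffer().as<float*>();
    for (size_t i = 0; i < std::min<size_t>(5, output->size()); ++i) {
        std::cout << "output[" << i << "] = " << data[i] << std::endl;
    }
}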
-
virtual StatusCode GetPreProcess(const char *name, const PreProcessInfo **info, ResponseDesc *resp) const noexcept = 0¶
Gets pre-processing information for input data.
- Parameters
name – Name of input blob.
info – pointer to a pointer to PreProcessInfo structure
resp – Optional: pointer to an already allocated object to contain information in case of failure
- Returns
Status code of the operation: InferenceEngine::OK (0) for success
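A minimal sketch of querying pre-processing settings; "input" is a placeholder name, and getResizeAlgorithm() is shown as one example of the accessors PreProcessInfo provides.
#include <iostream>
#include <inference_engine.hpp>

using namespace InferenceEngine;

void QueryPreProcess(IInferRequest::Ptr request) {
    const PreProcessInfo* info = nullptr;
    ResponseDesc resp;
    if (request->GetPreProcess("input", &info, &resp) != OK) {
        std::cerr << "GetPreProcess failed: " << resp.msg << std::endl;
        return;
    }
    if (info != nullptr) {
        // The returned structure describes the resize / color conversion settings
        // configured for this input; the pointer is owned by the request.
        std::cout << "Resize algorithm: " << info->getResizeAlgorithm() << std::endl;
    }
}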
-
virtual StatusCode Infer(ResponseDesc *resp) noexcept = 0¶
Infers specified input(s) in synchronous mode.
Note
Blocks all methods of IInferRequest while the request is ongoing (running or waiting in a queue)
- Parameters
resp – Optional: pointer to an already allocated object to contain information in case of failure
- Returns
Status code of the operation: InferenceEngine::OK (0) for success
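A minimal sketch of a blocking inference call, assuming input blobs have already been set with SetBlob; the ResponseDesc carries the error message on failure.
#include <iostream>
#include <inference_engine.hpp>

using namespace InferenceEngine;

StatusCode RunSync(IInferRequest::Ptr request) {
    ResponseDesc resp;
    StatusCode status = request->Infer(&resp);  // blocks until inference completes
    if (status != OK) {
        std::cerr << "Infer failed: " << resp.msg << std::endl;
    }
    return status;
}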
-
virtual StatusCode Cancel(ResponseDesc *resp) noexcept = 0¶
Cancels the current asynchronous inference request.
- Parameters
resp – Optional: pointer to an already allocated object to contain information in case of failure
- Returns
Status code of the operation: InferenceEngine::OK (0) for success
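A minimal sketch of aborting an in-flight asynchronous request; the subsequent Wait is there so the request has actually left the running state before it is reused.
#include <inference_engine.hpp>

using namespace InferenceEngine;

void AbortRequest(IInferRequest::Ptr request) {
    ResponseDesc resp;
    request->Cancel(&resp);
    // Block until the request finishes or is aborted; the returned status
    // reflects whether the run completed or was cancelled.
    request->Wait(IInferRequest::WaitMode::RESULT_READY, &resp);
}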
-
virtual StatusCode GetPerformanceCounts(std::map<std::string, InferenceEngineProfileInfo> &perfMap, ResponseDesc *resp) const noexcept = 0¶
Queries performance measures per layer to identify the most time-consuming layers.
Note
Not all plugins provide meaningful data
- Parameters
perfMap – Map of layer names to profiling information for that layer
resp – Optional: pointer to an already allocated object to contain information in case of failure
- Returns
Status code of the operation: InferenceEngine::OK (0) for success
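A minimal sketch of dumping per-layer timings after an inference has completed; as the note says, not every plugin fills the map with meaningful data.
#include <iostream>
#include <map>
#include <string>
#include <inference_engine.hpp>

using namespace InferenceEngine;

void PrintPerfCounts(IInferRequest::Ptr request) {
    std::map<std::string, InferenceEngineProfileInfo> perfMap;
    ResponseDesc resp;
    if (request->GetPerformanceCounts(perfMap, &resp) != OK) {
        std::cerr << "GetPerformanceCounts failed: " << resp.msg << std::endl;
        return;
    }
    for (const auto& entry : perfMap) {
        // realTime_uSec is the wall-clock time spent in the layer, in microseconds.
        std::cout << entry.first << ": " << entry.second.realTime_uSec << " us" << std::endl;
    }
}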
-
virtual InferenceEngine::StatusCode Wait(int64_t millis_timeout, ResponseDesc *resp) noexcept = 0¶
Waits for the result to become available. Blocks until the specified millis_timeout has elapsed or the result becomes available, whichever comes first.
Note
There are special cases when millis_timeout is equal to some value of the WaitMode enum:
STATUS_ONLY - immediately returns the inference status. It does not block or interrupt the current thread
RESULT_READY - waits until the inference result becomes available
- Parameters
millis_timeout – Maximum duration in milliseconds to block for
resp – Optional: a pointer to an already allocated object to contain extra information of a failure (if occurred)
- Returns
Status code of the operation: InferenceEngine::OK (0) for success
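A minimal sketch of the three ways to call Wait on a request started with StartAsync: a non-blocking status poll, a blocking wait, and a plain timeout in milliseconds.
#include <iostream>
#include <inference_engine.hpp>

using namespace InferenceEngine;

void WaitExamples(IInferRequest::Ptr request) {
    ResponseDesc resp;

    // Non-blocking poll: returns the current inference status immediately.
    StatusCode status = request->Wait(IInferRequest::WaitMode::STATUS_ONLY, &resp);
    if (status == OK) {
        std::cout << "Result is already available" << std::endl;
    }

    // Blocking wait: returns only when the result is ready or the request fails.
    status = request->Wait(IInferRequest::WaitMode::RESULT_READY, &resp);

    // Plain timeout: wait for at most 100 ms.
    status = request->Wait(100, &resp);
    if (status != OK) {
        std::cerr << "Wait returned status " << status << std::endl;
    }
}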
-
virtual StatusCode StartAsync(ResponseDesc *resp) noexcept = 0¶
Starts inference of specified input(s) in asynchronous mode.
Note
It returns immediately. Inference also starts immediately
- Parameters
resp – Optional: a pointer to an already allocated object to contain extra information of a failure (if occurred)
- Returns
Status code of the operation: InferenceEngine::OK (0) for success
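A minimal sketch of the asynchronous flow: start the request, overlap it with other work, then collect the result with Wait; inputs are assumed to have been set via SetBlob beforehand.
#include <inference_engine.hpp>

using namespace InferenceEngine;

StatusCode RunAsync(IInferRequest::Ptr request) {
    ResponseDesc resp;
    StatusCode status = request->StartAsync(&resp);  // returns immediately
    if (status != OK) {
        return status;
    }
    // ... do unrelated work here while the device is busy ...
    return request->Wait(IInferRequest::WaitMode::RESULT_READY, &resp);
}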
-
virtual StatusCode SetCompletionCallback(CompletionCallback callback) noexcept = 0¶
Sets a callback function that will be called on success or failure of an asynchronous request.
- Parameters
callback – A function to be called
- Returns
Status code of the operation: InferenceEngine::OK (0) for success
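A minimal sketch of registering a callback and launching the request; OnDone matches the CompletionCallback typedef documented above and its name is illustrative.
#include <iostream>
#include <inference_engine.hpp>

using namespace InferenceEngine;

static void OnDone(IInferRequest::Ptr context, StatusCode code) {
    // Invoked by the runtime when the asynchronous inference succeeds or fails.
    std::cout << "Async request finished, status = " << code << std::endl;
}

void RunWithCallback(IInferRequest::Ptr request) {
    request->SetCompletionCallback(OnDone);
    ResponseDesc resp;
    request->StartAsync(&resp);
}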
-
virtual StatusCode GetUserData(void **data, ResponseDesc *resp) noexcept = 0¶
Gets arbitrary data previously associated with the request and stores a pointer to it in the provided location.
- Parameters
data – Pointer to a location that receives the previously set arbitrary data
resp – Optional: a pointer to an already allocated object to contain extra information of a failure (if occurred)
- Returns
Status code of the operation: InferenceEngine::OK (0) for success
-
virtual StatusCode SetUserData(void *data, ResponseDesc *resp) noexcept = 0¶
Sets arbitrary data for the request.
- Parameters
data – Pointer to the arbitrary data to associate with the request
resp – Optional: a pointer to an already allocated object to contain extra information of a failure (if occurred)
- Returns
Status code of the operation: InferenceEngine::OK (0) for success
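A minimal sketch that covers both SetUserData and GetUserData: caller-owned context is attached to the request and read back later, for example from inside a completion callback. MyContext is an illustrative type; the request does not take ownership of the pointer.
#include <iostream>
#include <inference_engine.hpp>

using namespace InferenceEngine;

struct MyContext {
    int frame_id = 0;
};

void UserDataRoundTrip(IInferRequest::Ptr request, MyContext* ctx) {
    ResponseDesc resp;
    request->SetUserData(ctx, &resp);      // store the raw pointer with the request

    void* stored = nullptr;
    request->GetUserData(&stored, &resp);  // retrieve it later
    auto* restored = static_cast<MyContext*>(stored);
    std::cout << "frame_id = " << restored->frame_id << std::endl;
}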