InferenceEngine::InferRequest Class Reference

#include <ie_infer_request.hpp>

Public Types

using Ptr = std::shared_ptr< InferRequest >
 A smart pointer to the InferRequest object.
 

Public Member Functions

 InferRequest ()=default
 Default constructor.
 
 ~InferRequest ()
 Destructor.
 
void SetBlob (const std::string &name, const Blob::Ptr &data)
 Sets input/output data to infer. More...
 
Blob::Ptr GetBlob (const std::string &name)
 
void SetBlob (const std::string &name, const Blob::Ptr &data, const PreProcessInfo &info)
 Sets blob with a pre-process information. More...
 
const PreProcessInfo & GetPreProcess (const std::string &name) const
 Gets pre-process for input data. More...
 
void Infer ()
 
std::map< std::string, InferenceEngineProfileInfo > GetPerformanceCounts () const
 
void SetInput (const BlobMap &inputs)
 Sets input data to infer. More...
 
void SetOutput (const BlobMap &results)
 Sets data that will contain result of the inference. More...
 
void SetBatch (const int batch)
 Sets new batch size when dynamic batching is enabled in executable network that created this request. More...
 
 InferRequest (IInferRequest::Ptr request, InferenceEnginePluginPtr plg={})
 
void StartAsync ()
 Start inference of specified input(s) in asynchronous mode. More...
 
StatusCode Wait (int64_t millis_timeout)
 
template<class T >
void SetCompletionCallback (const T &callbackToSet)
 
 operator IInferRequest::Ptr & ()
 IInferRequest pointer to be used directly in CreateInferRequest functions. More...
 
bool operator! () const noexcept
 Checks if current InferRequest object is not initialized. More...
 
 operator bool () const noexcept
 Checks if current InferRequest object is initialized. More...
 

Detailed Description

This is an interface of asynchronous infer request.

Wraps IInferRequest. It can throw exceptions safely for the application, where they are properly handled.

Constructor & Destructor Documentation

§ InferRequest()

InferenceEngine::InferRequest::InferRequest ( IInferRequest::Ptr  request,
InferenceEnginePluginPtr  plg = {} 
)
inline explicit

Constructs InferRequest from an initialized shared pointer.

Parameters
request: Initialized shared pointer to IInferRequest interface
plg: Plugin to use. This is required to ensure that InferRequest can work properly even if the plugin object is destroyed.

Member Function Documentation

§ GetBlob()

Blob::Ptr InferenceEngine::InferRequest::GetBlob ( const std::string &  name)
inline

Gets input/output data for inference.

Wraps IInferRequest::GetBlob

Parameters
name: A name of Blob to get
Returns
A shared pointer to a Blob with the given name. If the blob is not found, an exception is thrown.

§ GetPerformanceCounts()

std::map< std::string, InferenceEngineProfileInfo > InferenceEngine::InferRequest::GetPerformanceCounts ( ) const
inline

Queries performance measures per layer to get feedback of what is the most time consuming layer.

Wraps IInferRequest::GetPerformanceCounts

Returns
Map of layer names to profiling information for that layer
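A minimal sketch of how the returned map can be consumed. It assumes a `request` that has already completed an inference, and that `InferenceEngineProfileInfo` exposes `realTime_uSec` and `exec_type` fields (these come from the Inference Engine headers, not from this page); building it requires the Inference Engine SDK.

```cpp
#include <ie_infer_request.hpp>
#include <iostream>

// Print per-layer timings to find the most time-consuming layers.
void DumpProfile(InferenceEngine::InferRequest &request) {
    auto counts = request.GetPerformanceCounts();
    for (const auto &entry : counts) {
        const auto &info = entry.second;
        std::cout << entry.first << ": "
                  << info.realTime_uSec << " us ("
                  << info.exec_type << ")\n";
    }
}
```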

§ GetPreProcess()

const PreProcessInfo& InferenceEngine::InferRequest::GetPreProcess ( const std::string &  name) const
inline

Gets pre-process for input data.

Parameters
name: Name of input blob.
Returns
Constant reference to pre-process info for the blob with the given name

§ Infer()

void InferenceEngine::InferRequest::Infer ( )
inline

Infers specified input(s) in synchronous mode.

Note
Blocks all methods of InferRequest while the request is ongoing (running or waiting in a queue)

Wraps IInferRequest::Infer
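A sketch of the synchronous flow described above, assuming `request` was created by an ExecutableNetwork; the blob names "input" and "output" are hypothetical placeholders for the network's actual input/output names. Compiling requires the Inference Engine SDK.

```cpp
#include <ie_infer_request.hpp>

void RunSync(InferenceEngine::InferRequest &request,
             const InferenceEngine::Blob::Ptr &inputData) {
    request.SetBlob("input", inputData);  // no memory allocation happens here
    request.Infer();                      // blocks until the result is ready
    InferenceEngine::Blob::Ptr out = request.GetBlob("output");
    // ... read results from `out`
}
```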

§ operator bool()

InferenceEngine::InferRequest::operator bool ( ) const
inline explicit noexcept

Checks if current InferRequest object is initialized.

Returns
true if the current InferRequest object is initialized, false otherwise

§ operator IInferRequest::Ptr &()

InferenceEngine::InferRequest::operator IInferRequest::Ptr & ( )
inline

IInferRequest pointer to be used directly in CreateInferRequest functions.

Returns
A shared pointer to underlying IInferRequest interface

§ operator!()

bool InferenceEngine::InferRequest::operator! ( ) const
inline noexcept

Checks if current InferRequest object is not initialized.

Returns
true if the current InferRequest object is not initialized, false otherwise
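The two conversion operators together support a simple validity check; a minimal sketch (the default-constructed request is not backed by an IInferRequest):

```cpp
#include <ie_infer_request.hpp>

InferenceEngine::InferRequest request;  // default constructor: not initialized
if (!request) {
    // operator! returned true: the request wraps no IInferRequest yet
}
if (request) {
    // explicit operator bool in a condition: safe to call SetBlob / Infer / ...
}
```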

§ SetBatch()

void InferenceEngine::InferRequest::SetBatch ( const int  batch)
inline

Sets new batch size when dynamic batching is enabled in executable network that created this request.

Parameters
batch: New batch size to be used by all the following inference calls for this request.

§ SetBlob() [1/2]

void InferenceEngine::InferRequest::SetBlob ( const std::string &  name,
const Blob::Ptr &  data 
)
inline

Sets input/output data to infer.

Note
Memory allocation does not happen
Parameters
name: Name of input or output blob.
data: Reference to input or output blob. The type of the blob must match the network input precision and size.

§ SetBlob() [2/2]

void InferenceEngine::InferRequest::SetBlob ( const std::string &  name,
const Blob::Ptr &  data,
const PreProcessInfo &  info 
)
inline

Sets blob with a pre-process information.

Note
An error is returned if the data blob is an output blob
Parameters
name: Name of input blob.
data: A reference to input. The type of Blob must correspond to the network input precision and size.
info: Preprocess info for the blob.
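A sketch of supplying pre-processing alongside the blob. `setResizeAlgorithm` and `RESIZE_BILINEAR` are assumed from the PreProcessInfo API in the Inference Engine headers; "input" is a hypothetical blob name.

```cpp
#include <ie_infer_request.hpp>

void SetInputWithResize(InferenceEngine::InferRequest &request,
                        const InferenceEngine::Blob::Ptr &inputData) {
    InferenceEngine::PreProcessInfo info;
    info.setResizeAlgorithm(InferenceEngine::RESIZE_BILINEAR);
    // Fails with an error if "input" names an output blob.
    request.SetBlob("input", inputData, info);
}
```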

§ SetCompletionCallback()

template<class T >
void InferenceEngine::InferRequest::SetCompletionCallback ( const T &  callbackToSet)
inline

Sets a callback function that will be called on success or failure of asynchronous request.

Wraps IInferRequest::SetCompletionCallback

Parameters
callbackToSet: Lambda callback object that will be called when processing finishes.
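A sketch of registering a callback before starting an asynchronous request. The parameterless lambda form shown here is one of the callback signatures accepted by the template (assumed from the Inference Engine API); it runs on success or failure of the request.

```cpp
#include <ie_infer_request.hpp>
#include <iostream>

void RunWithCallback(InferenceEngine::InferRequest &request) {
    request.SetCompletionCallback([] {
        // invoked when the asynchronous request finishes (success or failure)
        std::cout << "inference finished\n";
    });
    request.StartAsync();  // returns immediately
}
```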

§ SetInput()

void InferenceEngine::InferRequest::SetInput ( const BlobMap &  inputs)
inline

Sets input data to infer.

Note
Memory allocation doesn't happen
Parameters
inputs: A reference to a map of input blobs accessed by input names. The type of Blob must correspond to the network input precision and size.

§ SetOutput()

void InferenceEngine::InferRequest::SetOutput ( const BlobMap &  results)
inline

Sets data that will contain result of the inference.

Note
Memory allocation doesn't happen
Parameters
results: A reference to a map of result blobs accessed by output names. The type of Blob must correspond to the network output precision and size.

§ StartAsync()

void InferenceEngine::InferRequest::StartAsync ( )
inline

Start inference of specified input(s) in asynchronous mode.

Note
It returns immediately; inference also starts immediately.

§ Wait()

StatusCode InferenceEngine::InferRequest::Wait ( int64_t  millis_timeout)
inline

Waits for the result to become available. Blocks until specified millis_timeout has elapsed or the result becomes available, whichever comes first.

Wraps IInferRequest::Wait

Parameters
millis_timeout: Maximum duration in milliseconds to block for
Note
There are special cases when millis_timeout equals a value of the WaitMode enum:
  • STATUS_ONLY - immediately returns inference status (IInferRequest::RequestStatus). It does not block or interrupt current thread
  • RESULT_READY - waits until inference result becomes available
Returns
A status code of operation
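The asynchronous pattern built from StartAsync and Wait can be sketched as follows, assuming the WaitMode values live in IInferRequest as documented above; "output" is a hypothetical blob name.

```cpp
#include <ie_infer_request.hpp>

void RunAsync(InferenceEngine::InferRequest &request) {
    request.StartAsync();  // returns immediately, inference starts immediately

    // Non-blocking status query:
    InferenceEngine::StatusCode st =
        request.Wait(InferenceEngine::IInferRequest::WaitMode::STATUS_ONLY);

    // Block until the result becomes available:
    st = request.Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY);
    if (st == InferenceEngine::StatusCode::OK) {
        InferenceEngine::Blob::Ptr out = request.GetBlob("output");
        // ... read results from `out`
    }
}
```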

The documentation for this class was generated from the following file: ie_infer_request.hpp