InferenceEngine::InferRequest Class Reference

This class is a wrapper of IInferRequest that provides setters/getters for input/output data operating on BlobMaps. Unlike the underlying IInferRequest, it reports errors by throwing exceptions, which the application can handle safely. More...

#include <ie_infer_request.hpp>

Public Types

using  Ptr = std::shared_ptr< InferRequest >
 

Public Member Functions

void  SetBlob (const std::string &name, const Blob::Ptr &data)
  Sets input/output data to infer. More...
 
Blob::Ptr  GetBlob (const std::string &name)
  Wraps original method IInferRequest::GetBlob.
 
void  Infer ()
  Wraps original method IInferRequest::Infer.
 
std::map< std::string, InferenceEngineProfileInfo >  GetPerformanceCounts () const
  Wraps original method IInferRequest::GetPerformanceCounts.
 
void  SetInput (const BlobMap &inputs)
  Sets input data to infer. More...
 
void  SetOutput (const BlobMap &results)
  Sets data that will contain result of the inference. More...
 
void  SetBatch (const int batch)
  Sets new batch size when dynamic batching is enabled in executable network that created this request. More...
 
  InferRequest (IInferRequest::Ptr request)
 
void  StartAsync ()
  Start inference of specified input(s) in asynchronous mode. More...
 
StatusCode  Wait (int64_t millis_timeout)
  Wraps original method IInferRequest::Wait.
 
template<class T >
void  SetCompletionCallback (const T &callbackToSet)
  Wraps original method IInferRequest::SetCompletionCallback. More...
 
  operator IInferRequest::Ptr & ()
  IInferRequest pointer to be used directly in CreateInferRequest functions.
 
bool  operator! () const noexcept
 
  operator bool () const noexcept
 

Detailed Description

This class is a wrapper of IInferRequest that provides setters/getters for input/output data operating on BlobMaps. Unlike the underlying IInferRequest, it reports errors by throwing exceptions, which the application can handle safely.
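
A sketch of typical synchronous usage, assuming the pre-2022 Inference Engine API; the model path "model.xml" and the device name "CPU" are illustrative placeholders:

```cpp
#include <inference_engine.hpp>

using namespace InferenceEngine;

int main() {
    Core core;
    // "model.xml" and "CPU" are placeholders for a real model and device.
    CNNNetwork network = core.ReadNetwork("model.xml");
    ExecutableNetwork executable = core.LoadNetwork(network, "CPU");

    // The InferRequest wrapper is obtained from the executable network.
    InferRequest request = executable.CreateInferRequest();

    // Fill the input blob in place; the request owns this memory.
    std::string inputName = network.getInputsInfo().begin()->first;
    Blob::Ptr input = request.GetBlob(inputName);
    // ... write data through input->buffer() ...

    request.Infer();  // synchronous inference; throws on failure

    std::string outputName = network.getOutputsInfo().begin()->first;
    Blob::Ptr output = request.GetBlob(outputName);
    // ... read results through output->buffer() ...
}
```

Because the wrapper throws rather than returning status codes, the calls above can be guarded with an ordinary try/catch block.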

Constructor & Destructor Documentation

§ InferRequest()

InferenceEngine::InferRequest::InferRequest ( IInferRequest::Ptr  request )
inline explicit

Constructs InferRequest from an initialised shared pointer.

Parameters
request Initialised shared pointer to the IInferRequest to wrap.

Member Function Documentation

§ SetBatch()

void InferenceEngine::InferRequest::SetBatch ( const int  batch )
inline

Sets new batch size when dynamic batching is enabled in executable network that created this request.

Parameters
batch new batch size to be used by all the following inference calls for this request.
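
Dynamic batching must be enabled when the executable network is created; a minimal sketch, where the config key comes from ie_plugin_config.hpp and the device name and batch value are illustrative:

```cpp
// Enable dynamic batching at load time, then shrink the batch per request.
std::map<std::string, std::string> config = {
    {PluginConfigParams::KEY_DYN_BATCH_ENABLED, PluginConfigParams::YES}};
ExecutableNetwork executable = core.LoadNetwork(network, "CPU", config);

InferRequest request = executable.CreateInferRequest();
request.SetBatch(2);  // following Infer() calls process only 2 items
request.Infer();
```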

§ SetBlob()

void InferenceEngine::InferRequest::SetBlob ( const std::string &  name,
const Blob::Ptr &  data 
)
inline

Sets input/output data to infer.

Note: Memory allocation does not happen.
Parameters
name Name of input or output blob.
data Reference to input or output blob. The type of a blob must match the network input precision and size.
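
Because SetBlob does not allocate memory, a common pattern is to wrap an existing application buffer in a Blob; a sketch, where the input name "data" and the tensor shape are assumptions:

```cpp
// Wrap a caller-owned float buffer in a Blob; no copy or allocation occurs.
std::vector<float> image(1 * 3 * 224 * 224);
TensorDesc desc(Precision::FP32, {1, 3, 224, 224}, Layout::NCHW);
Blob::Ptr blob = make_shared_blob<float>(desc, image.data());
request.SetBlob("data", blob);  // "data" is a hypothetical input name
```

The buffer must stay alive for as long as the request may read from or write to it.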

§ SetCompletionCallback()

template<class T >
void InferenceEngine::InferRequest::SetCompletionCallback ( const T &  callbackToSet )
inline

Wraps original method IInferRequest::SetCompletionCallback.

Parameters
callbackToSet Lambda callback object which will be called on processing finish.
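
A sketch of a no-argument lambda callback combined with asynchronous execution, one of the callable shapes the template accepts:

```cpp
std::atomic<bool> done{false};

// The callback runs on an Inference Engine worker thread when the
// asynchronous request finishes.
request.SetCompletionCallback([&done]() {
    done = true;
});

request.StartAsync();
// ... do other work while inference runs ...
request.Wait(IInferRequest::WaitMode::RESULT_READY);
```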

§ SetInput()

void InferenceEngine::InferRequest::SetInput ( const BlobMap &  inputs )
inline

Sets input data to infer.

Note: Memory allocation does not happen.
Parameters
inputs A reference to a map of input blobs accessed by input names. The type of Blob must correspond to the network input precision and size.

§ SetOutput()

void InferenceEngine::InferRequest::SetOutput ( const BlobMap &  results )
inline

Sets data that will contain result of the inference.

Note: Memory allocation does not happen.
Parameters
results A reference to a map of result blobs accessed by output names. The type of Blob must correspond to the network output precision and size.
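
SetInput and SetOutput take whole BlobMaps keyed by tensor name; a sketch with caller-allocated blobs, where the names "data" and "prob" and the shapes are assumptions:

```cpp
// The caller allocates the blobs; SetInput/SetOutput only register them.
BlobMap inputs, outputs;

Blob::Ptr in = make_shared_blob<float>(
    TensorDesc(Precision::FP32, {1, 3, 224, 224}, Layout::NCHW));
in->allocate();
inputs["data"] = in;    // "data" is a hypothetical input name

Blob::Ptr out = make_shared_blob<float>(
    TensorDesc(Precision::FP32, {1, 1000}, Layout::NC));
out->allocate();
outputs["prob"] = out;  // "prob" is a hypothetical output name

request.SetInput(inputs);
request.SetOutput(outputs);
request.Infer();
```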

§ StartAsync()

void InferenceEngine::InferRequest::StartAsync ( )
inline

Start inference of specified input(s) in asynchronous mode.

Note: This method returns immediately; inference also starts immediately.
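
The asynchronous flow pairs StartAsync with Wait; a sketch, where RESULT_READY blocks until completion and STATUS_ONLY polls without blocking (the output name "prob" is a hypothetical placeholder):

```cpp
request.StartAsync();  // returns immediately; inference runs in background

// Poll without blocking:
StatusCode status = request.Wait(IInferRequest::WaitMode::STATUS_ONLY);
if (status == StatusCode::RESULT_NOT_READY) {
    // ... overlap other work with the running inference ...
}

// Block until the result is available:
request.Wait(IInferRequest::WaitMode::RESULT_READY);
Blob::Ptr output = request.GetBlob("prob");
```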

The documentation for this class was generated from the following file:
ie_infer_request.hpp