This class is a wrapper of IInferRequest that provides setters/getters for inputs and outputs operating on BlobMaps. It can throw exceptions safely to the application, where they can be properly handled.
#include <ie_infer_request.hpp>
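As a rough usage sketch (the model path, device name, and classic InferenceEngine API calls below are assumptions, not part of this page), an InferRequest is typically obtained from an ExecutableNetwork rather than constructed directly:

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;
    // "model.xml" and "CPU" are placeholders for a real model and device
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");
    InferenceEngine::ExecutableNetwork execNetwork =
        core.LoadNetwork(network, "CPU");

    // InferRequest is normally created by the executable network,
    // not via the InferRequest(request, plg) constructor
    InferenceEngine::InferRequest request = execNetwork.CreateInferRequest();

    request.Infer();  // synchronous inference
    return 0;
}
```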
§ InferRequest()
Constructs an InferRequest from an initialized shared pointer.
- Parameters
  - request — Initialized shared pointer
  - plg — Plugin to use
§ operator bool()
InferenceEngine::InferRequest::operator bool() const   [inline, explicit, noexcept]
Checks if the current InferRequest object is initialized.
- Returns
  - true if the current InferRequest object is initialized, false otherwise
§ operator!()
bool InferenceEngine::InferRequest::operator!() const   [inline, noexcept]
Checks if the current InferRequest object is not initialized.
- Returns
  - true if the current InferRequest object is not initialized, false otherwise
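The two operators above can be used to guard against an empty request. A minimal sketch, assuming an existing `execNetwork` of type InferenceEngine::ExecutableNetwork:

```cpp
InferenceEngine::InferRequest request;  // default-constructed, not initialized

if (!request) {
    // operator! returns true: the wrapper holds no underlying request yet
    request = execNetwork.CreateInferRequest();
}

if (request) {
    // explicit operator bool returns true: safe to use the request
    request.Infer();
}
```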
§ SetBatch()
void InferenceEngine::InferRequest::SetBatch(const int batch)   [inline]
Sets a new batch size when dynamic batching is enabled in the executable network that created this request.
- Parameters
  - batch — New batch size to be used by all subsequent inference calls for this request.
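A sketch of how this might be used, assuming dynamic batching was enabled via the plugin configuration when the network was loaded (the config key and the network/device names here are assumptions based on the classic API, not stated on this page):

```cpp
// Enable dynamic batching when compiling the network
std::map<std::string, std::string> config = {
    { InferenceEngine::PluginConfigParams::KEY_DYN_BATCH_ENABLED,
      InferenceEngine::PluginConfigParams::YES } };
auto execNetwork = core.LoadNetwork(network, "CPU", config);
auto request = execNetwork.CreateInferRequest();

// Process only the first 2 items of the batch on subsequent calls
request.SetBatch(2);
request.Infer();
```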
§ SetBlob()
void InferenceEngine::InferRequest::SetBlob(const std::string & name, const Blob::Ptr & data)   [inline]
Sets input/output data to infer.
- Note
  - Memory allocation does not happen.
- Parameters
  - name — Name of the input or output blob.
  - data — Reference to the input or output blob. The type of the blob must match the network input precision and size.
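Because SetBlob does not allocate, the caller provides a preallocated blob. A sketch, where `"input_name"`, `"output_name"`, and the tensor shape are placeholders that must match the actual network:

```cpp
// Describe and allocate an FP32 NCHW input tensor
InferenceEngine::TensorDesc desc(
    InferenceEngine::Precision::FP32,
    {1, 3, 224, 224},
    InferenceEngine::Layout::NCHW);
InferenceEngine::Blob::Ptr input =
    InferenceEngine::make_shared_blob<float>(desc);
input->allocate();  // allocation is the caller's job, not SetBlob's

request.SetBlob("input_name", input);
request.Infer();
InferenceEngine::Blob::Ptr output = request.GetBlob("output_name");
```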
§ SetCompletionCallback()
template<class T>
void InferenceEngine::InferRequest::SetCompletionCallback(const T & callbackToSet)   [inline]
Sets a callback that is invoked when an asynchronous inference request completes.
- Parameters
  - callbackToSet — Callback object to invoke on completion.
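A sketch of registering a simple no-argument callback before an asynchronous run (assuming the classic API's support for `std::function<void()>`-style callables):

```cpp
request.SetCompletionCallback([&]() {
    // runs on the plugin's thread once inference finishes
    std::cout << "inference done" << std::endl;
});

request.StartAsync();
// ... overlap other work here ...
request.Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY);
```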
§ SetInput()
void InferenceEngine::InferRequest::SetInput(const BlobMap & inputs)   [inline]
Sets input data to infer.
- Note
  - Memory allocation does not happen.
- Parameters
  - inputs — Reference to a map of input blobs accessed by input names. The type of each Blob must correspond to the network input precision and size.
§ SetOutput()
void InferenceEngine::InferRequest::SetOutput(const BlobMap & results)   [inline]
Sets data that will contain the result of the inference.
- Note
  - Memory allocation does not happen.
- Parameters
  - results — Reference to a map of result blobs accessed by output names. The type of each Blob must correspond to the network output precision and size.
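SetInput and SetOutput can supply all blobs at once through BlobMaps. A sketch, where the blob names are placeholders and `inputBlob`/`outputBlob` are assumed to be preallocated Blob::Ptr objects:

```cpp
InferenceEngine::BlobMap inputs, outputs;
inputs["input_name"] = inputBlob;     // keyed by the network's input name
outputs["output_name"] = outputBlob;  // keyed by the network's output name

request.SetInput(inputs);    // no memory allocation happens here
request.SetOutput(outputs);  // results will be written into outputBlob
request.Infer();
```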
§ StartAsync()
void InferenceEngine::InferRequest::StartAsync()   [inline]
Starts inference of the specified input(s) in asynchronous mode.
- Note
  - The call returns immediately; inference also starts immediately.
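Since StartAsync returns immediately, completion is typically observed with Wait. A sketch (the output blob name is a placeholder):

```cpp
request.StartAsync();  // returns at once; inference runs in the background
// ... overlap other work here ...

InferenceEngine::StatusCode status =
    request.Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY);
if (status == InferenceEngine::StatusCode::OK) {
    auto output = request.GetBlob("output_name");
}
```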