InferenceEngine::IAsyncInferRequestInternal Interface Reference (abstract)

An internal API of asynchronous inference request to be implemented by plugin, which is used in InferRequestBase forwarding mechanism. More...

#include <ie_iinfer_async_request_internal.hpp>

Inheritance diagram for InferenceEngine::IAsyncInferRequestInternal:
Inherits InferenceEngine::IInferRequestInternal. Inherited by InferenceEngine::AsyncInferRequestInternal and InferenceEngine::AsyncInferRequestThreadSafeDefault.

Public Types

typedef std::shared_ptr< IAsyncInferRequestInternal > Ptr
 A shared pointer to IAsyncInferRequestInternal interface.
 
- Public Types inherited from InferenceEngine::IInferRequestInternal
typedef std::shared_ptr< IInferRequestInternal > Ptr
 A shared pointer to an IInferRequestInternal interface.
 

Public Member Functions

virtual ~IAsyncInferRequestInternal ()=default
 A virtual destructor.
 
virtual void StartAsync ()=0
 Start inference of specified input(s) in asynchronous mode. More...
 
virtual StatusCode Wait (int64_t millis_timeout)=0
 Waits for the result to become available. Blocks until specified millis_timeout has elapsed or the result becomes available, whichever comes first. More...
 
virtual void GetUserData (void **data)=0
 Get arbitrary data for the request. More...
 
virtual void SetUserData (void *data)=0
 Set arbitrary data for the request. More...
 
virtual void SetCompletionCallback (IInferRequest::CompletionCallback callback)=0
 Set callback function which will be called on success or failure of asynchronous request. More...
 
- Public Member Functions inherited from InferenceEngine::IInferRequestInternal
virtual ~IInferRequestInternal ()=default
 Destroys the object.
 
virtual void Infer ()=0
 Infers specified input(s) in synchronous mode. More...
 
virtual void Cancel ()=0
 Cancel current inference request execution.
 
virtual std::map< std::string, InferenceEngineProfileInfo > GetPerformanceCounts () const =0
 Queries performance measures per layer to get feedback on which layer is the most time-consuming. Note: not all plugins provide meaningful data. More...
 
virtual void SetBlob (const std::string &name, const Blob::Ptr &data)=0
 Set input/output data to infer. More...
 
virtual Blob::Ptr GetBlob (const std::string &name)=0
 Get input/output data to infer. More...
 
virtual void SetBlob (const std::string &name, const Blob::Ptr &data, const PreProcessInfo &info)=0
 Sets pre-process for input data. More...
 
virtual const PreProcessInfo & GetPreProcess (const std::string &name) const =0
 Gets pre-process for input data. More...
 
virtual void SetBatch (int batch)=0
 Sets new batch size when dynamic batching is enabled in executable network that created this request. More...
 
virtual std::vector< IVariableStateInternal::Ptr > QueryState ()=0
 Queries memory states. More...
 

Detailed Description

An internal API of asynchronous inference request to be implemented by plugin, which is used in InferRequestBase forwarding mechanism.
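
As a rough illustration of the plugin side of this contract, the sketch below declares a hypothetical request class deriving from the interface. In practice a plugin would typically reuse InferenceEngine::AsyncInferRequestThreadSafeDefault rather than implement the interface directly; the pure virtual methods inherited from IInferRequestInternal (Infer, SetBlob, GetBlob, ...) are omitted here for brevity, so the class below stays abstract.

    // Sketch only: a hypothetical plugin request type deriving from this interface.
    #include <ie_iinfer_async_request_internal.hpp>

    class MyPluginAsyncRequest : public InferenceEngine::IAsyncInferRequestInternal {
    public:
        // Plugin-specific: enqueue the inference work and return without blocking.
        void StartAsync() override;
        // Plugin-specific: block for up to millis_timeout (or handle a WaitMode value).
        InferenceEngine::StatusCode Wait(int64_t millis_timeout) override;

        // Trivial user-data and callback storage.
        void SetUserData(void* data) override { _userData = data; }
        void GetUserData(void** data) override { *data = _userData; }
        void SetCompletionCallback(InferenceEngine::IInferRequest::CompletionCallback callback) override {
            _callback = callback;
        }

    private:
        void* _userData = nullptr;
        InferenceEngine::IInferRequest::CompletionCallback _callback = nullptr;
    };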

Member Function Documentation

◆ GetUserData()

virtual void InferenceEngine::IAsyncInferRequestInternal::GetUserData ( void **  data)
pure virtual

Get arbitrary data for the request.

Parameters
data - A pointer to a pointer to arbitrary data

Implemented in InferenceEngine::AsyncInferRequestThreadSafeDefault, and InferenceEngine::AsyncInferRequestInternal.

◆ SetCompletionCallback()

virtual void InferenceEngine::IAsyncInferRequestInternal::SetCompletionCallback ( IInferRequest::CompletionCallback  callback)
pure virtual

Set callback function which will be called on success or failure of asynchronous request.

Parameters
callback - A function to be called when the asynchronous request completes, on success or failure

Implemented in InferenceEngine::AsyncInferRequestThreadSafeDefault, and InferenceEngine::AsyncInferRequestInternal.
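
A hedged usage sketch, assuming IInferRequest::CompletionCallback is the plain function-pointer type of this API generation (taking the request pointer and a status code): a free function can be registered before the request is started. The request pointer is assumed to come from plugin-specific code.

    // Sketch only: register a completion callback before StartAsync().
    static void OnInferDone(InferenceEngine::IInferRequest::Ptr /*context*/,
                            InferenceEngine::StatusCode code) {
        if (code != InferenceEngine::StatusCode::OK) {
            // Handle the failed asynchronous request, e.g. log the status code.
        }
    }

    void SetupCallback(const InferenceEngine::IAsyncInferRequestInternal::Ptr& request) {
        request->SetCompletionCallback(OnInferDone);
    }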

◆ SetUserData()

virtual void InferenceEngine::IAsyncInferRequestInternal::SetUserData ( void *  data)
pure virtual

Set arbitrary data for the request.

Parameters
data - A pointer to arbitrary data

Implemented in InferenceEngine::AsyncInferRequestThreadSafeDefault, and InferenceEngine::AsyncInferRequestInternal.
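
Taken together with GetUserData() above, the user-data hooks let a caller attach an opaque context to the request and retrieve it later, for example inside a completion callback. The sketch below round-trips a hypothetical MyContext pointer; the request does not take ownership of it.

    // Sketch only: round-trip a caller-owned context pointer through the request.
    struct MyContext { int frameId = 0; };   // hypothetical caller-side context

    void AttachContext(const InferenceEngine::IAsyncInferRequestInternal::Ptr& request,
                       MyContext* ctx) {
        request->SetUserData(ctx);            // stores the raw pointer only
    }

    MyContext* RetrieveContext(const InferenceEngine::IAsyncInferRequestInternal::Ptr& request) {
        void* raw = nullptr;
        request->GetUserData(&raw);           // pointer-to-pointer out parameter
        return static_cast<MyContext*>(raw);
    }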

◆ StartAsync()

virtual void InferenceEngine::IAsyncInferRequestInternal::StartAsync ( )
pure virtual

Start inference of specified input(s) in asynchronous mode.

Note
The method returns immediately; inference also starts immediately.

Implemented in InferenceEngine::AsyncInferRequestThreadSafeDefault, and InferenceEngine::AsyncInferRequestInternal.
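
A hedged caller-side sketch of the non-blocking contract: StartAsync() returns right away, so the calling thread can overlap host-side work with device execution before synchronizing via Wait(). PrepareNextInput is a hypothetical helper, and inputs are assumed to have been set beforehand with SetBlob().

    // Sketch only: overlap host-side work with an already-started inference.
    void PrepareNextInput();   // hypothetical helper, defined elsewhere

    void Pipeline(const InferenceEngine::IAsyncInferRequestInternal::Ptr& request) {
        request->StartAsync();  // returns immediately; inference proceeds in the background
        PrepareNextInput();     // useful work while the device computes
        request->Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY);
    }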

◆ Wait()

virtual StatusCode InferenceEngine::IAsyncInferRequestInternal::Wait ( int64_t  millis_timeout)
pure virtual

Waits for the result to become available. Blocks until specified millis_timeout has elapsed or the result becomes available, whichever comes first.

Parameters
millis_timeout - Maximum duration in milliseconds to block for
Note
There are special cases when millis_timeout is equal to some value of the WaitMode enum:
  • STATUS_ONLY - immediately returns the request status (IInferRequest::RequestStatus). It doesn't block or interrupt the current thread.
  • RESULT_READY - waits until inference result becomes available
Returns
A status code

Implemented in InferenceEngine::AsyncInferRequestThreadSafeDefault.
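
A hedged sketch of the two WaitMode special values described above: STATUS_ONLY queries the status without blocking, while RESULT_READY blocks until the result is available; passing a plain millisecond value (for example 100) would bound the wait instead. RESULT_NOT_READY is the status code this API generation uses for a result that is not yet available.

    // Sketch only: poll first, then block for the result on the same request.
    InferenceEngine::StatusCode PollThenBlock(
            const InferenceEngine::IAsyncInferRequestInternal::Ptr& request) {
        using InferenceEngine::IInferRequest;
        using InferenceEngine::StatusCode;

        // Non-blocking status query.
        StatusCode status = request->Wait(IInferRequest::WaitMode::STATUS_ONLY);
        if (status == StatusCode::RESULT_NOT_READY) {
            // Block until the inference result becomes available.
            status = request->Wait(IInferRequest::WaitMode::RESULT_READY);
        }
        return status;
    }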


The documentation for this interface was generated from the following file: ie_iinfer_async_request_internal.hpp