InferenceEngine::IInferRequestInternal Interface Reference (abstract)

An internal API of synchronous inference request to be implemented by plugin, which is used in InferRequestBase forwarding mechanism. More...

#include <ie_iinfer_request_internal.hpp>

Inheritance diagram for InferenceEngine::IInferRequestInternal:
InferenceEngine::IAsyncInferRequestInternal, InferenceEngine::InferRequestInternal, InferenceEngine::AsyncInferRequestInternal, InferenceEngine::AsyncInferRequestThreadSafeDefault

Public Types

typedef std::shared_ptr< IInferRequestInternal > Ptr
 A shared pointer to an IInferRequestInternal interface.
 

Public Member Functions

virtual ~IInferRequestInternal ()=default
 Destroys the object.
 
virtual void Infer ()=0
 Infers specified input(s) in synchronous mode. More...
 
virtual void Cancel ()=0
 Cancel current inference request execution.
 
virtual std::map< std::string, InferenceEngineProfileInfo > GetPerformanceCounts () const =0
 Queries performance measures per layer to identify the most time-consuming layers. Note: not all plugins may provide meaningful data. More...
 
virtual void SetBlob (const std::string &name, const Blob::Ptr &data)=0
 Set input/output data to infer. More...
 
virtual Blob::Ptr GetBlob (const std::string &name)=0
 Get input/output data to infer. More...
 
virtual void SetBlob (const std::string &name, const Blob::Ptr &data, const PreProcessInfo &info)=0
 Sets pre-process for input data. More...
 
virtual const PreProcessInfo & GetPreProcess (const std::string &name) const =0
 Gets pre-process for input data. More...
 
virtual void SetBatch (int batch)=0
 Sets a new batch size when dynamic batching is enabled in the executable network that created this request. More...
 
virtual std::vector< IVariableStateInternal::Ptr > QueryState ()=0
 Queries memory states. More...
 

Detailed Description

An internal API of synchronous inference request to be implemented by plugin, which is used in InferRequestBase forwarding mechanism.

Member Function Documentation

◆ GetBlob()

virtual Blob::Ptr InferenceEngine::IInferRequestInternal::GetBlob ( const std::string &  name)
pure virtual

Get input/output data to infer.

Note
Memory allocation doesn't happen
Parameters
name - Name of the input or output blob.
Returns
Returns input or output blob. The type of Blob must correspond to the network input precision and size.

Implemented in InferenceEngine::InferRequestInternal, and InferenceEngine::AsyncInferRequestThreadSafeDefault.

◆ GetPerformanceCounts()

virtual std::map<std::string, InferenceEngineProfileInfo> InferenceEngine::IInferRequestInternal::GetPerformanceCounts ( ) const
pure virtual

Queries performance measures per layer to identify the most time-consuming layers. Note: not all plugins may provide meaningful data.

Returns
Returns a map of layer names to profiling information for that layer.

Implemented in InferenceEngine::AsyncInferRequestThreadSafeDefault.

◆ GetPreProcess()

virtual const PreProcessInfo& InferenceEngine::IInferRequestInternal::GetPreProcess ( const std::string &  name) const
pure virtual

Gets pre-process for input data.

Parameters
name - Name of the input blob.
Returns
Returns a constant reference to the PreProcessInfo structure.

Implemented in InferenceEngine::InferRequestInternal, and InferenceEngine::AsyncInferRequestThreadSafeDefault.

◆ Infer()

virtual void InferenceEngine::IInferRequestInternal::Infer ( )
pure virtual

Infers specified input(s) in synchronous mode.

Note
Blocks all methods of IInferRequest while the request is ongoing (running or waiting in a queue).

Implemented in InferenceEngine::InferRequestInternal, and InferenceEngine::AsyncInferRequestThreadSafeDefault.

◆ QueryState()

virtual std::vector<IVariableStateInternal::Ptr> InferenceEngine::IInferRequestInternal::QueryState ( )
pure virtual

Queries memory states.

Returns
Returns memory states

Implemented in InferenceEngine::InferRequestInternal, and InferenceEngine::AsyncInferRequestThreadSafeDefault.

◆ SetBatch()

virtual void InferenceEngine::IInferRequestInternal::SetBatch ( int  batch)
pure virtual

Sets a new batch size when dynamic batching is enabled in the executable network that created this request.

Parameters
batch - New batch size to be used by all the following inference calls for this request.

Implemented in InferenceEngine::InferRequestInternal, and InferenceEngine::AsyncInferRequestThreadSafeDefault.

◆ SetBlob() [1/2]

virtual void InferenceEngine::IInferRequestInternal::SetBlob ( const std::string &  name,
const Blob::Ptr data 
)
pure virtual

Set input/output data to infer.

Note
Memory allocation doesn't happen
Parameters
name - Name of the input or output blob.
data - A reference to the input or output blob. The type of the Blob must correspond to the network input precision and size.

Implemented in InferenceEngine::InferRequestInternal, and InferenceEngine::AsyncInferRequestThreadSafeDefault.

◆ SetBlob() [2/2]

virtual void InferenceEngine::IInferRequestInternal::SetBlob ( const std::string &  name,
const Blob::Ptr data,
const PreProcessInfo info 
)
pure virtual

Sets pre-process for input data.

Parameters
name - Name of the input blob.
data - A reference to the input or output blob. The type of the Blob must correspond to the network input precision and size.
info - Preprocessing information for the blob.

Implemented in InferenceEngine::InferRequestInternal, and InferenceEngine::AsyncInferRequestThreadSafeDefault.


The documentation for this interface was generated from the following file: ie_iinfer_request_internal.hpp