InferenceEngine::IInferRequestInternal Interface Reference (abstract)

An internal API of a synchronous inference request, to be implemented by a plugin; it is used by the InferRequestBase forwarding mechanism. More...

#include <ie_iinfer_request_internal.hpp>

Inheritance diagram for InferenceEngine::IInferRequestInternal (derived classes):
InferenceEngine::IAsyncInferRequestInternal, InferenceEngine::InferRequestInternal, InferenceEngine::AsyncInferRequestInternal, InferenceEngine::AsyncInferRequestThreadSafeInternal, InferenceEngine::AsyncInferRequestThreadSafeDefault

Public Types

typedef std::shared_ptr< IInferRequestInternal > Ptr
 A shared pointer to an IInferRequestInternal interface.
 

Public Member Functions

virtual ~IInferRequestInternal ()=default
 Destroys the object.
 
virtual void Infer ()=0
 Infers specified input(s) in synchronous mode. More...
 
virtual void GetPerformanceCounts (std::map< std::string, InferenceEngineProfileInfo > &perfMap) const =0
 Queries performance measures per layer to get feedback on which layer consumes the most time. Note: not all plugins may provide meaningful data. More...
 
virtual void SetBlob (const char *name, const Blob::Ptr &data)=0
 Sets input/output data for inference. More...
 
virtual void GetBlob (const char *name, Blob::Ptr &data)=0
 Gets input/output data for inference. More...
 
virtual void SetBlob (const char *name, const Blob::Ptr &data, const PreProcessInfo &info)=0
 Sets an input blob together with its pre-processing information. More...
 
virtual void GetPreProcess (const char *name, const PreProcessInfo **info) const =0
 Gets pre-process for input data. More...
 
virtual void SetBatch (int batch)=0
 Sets a new batch size when dynamic batching is enabled in the executable network that created this request. More...
 

Detailed Description

An internal API of a synchronous inference request, to be implemented by a plugin; it is used by the InferRequestBase forwarding mechanism.

Member Function Documentation

◆ GetBlob()

virtual void InferenceEngine::IInferRequestInternal::GetBlob (const char *name, Blob::Ptr &data)
pure virtual

Gets input/output data for inference.

Note
No memory allocation happens inside this call.
Parameters
name - a name of the input or output blob.
data - a reference to the input or output blob. The type of the Blob must correspond to the network input precision and size.

Implemented in InferenceEngine::InferRequestInternal, and InferenceEngine::AsyncInferRequestThreadSafeInternal.

◆ GetPerformanceCounts()

virtual void InferenceEngine::IInferRequestInternal::GetPerformanceCounts (std::map< std::string, InferenceEngineProfileInfo > &perfMap) const
pure virtual

Queries performance measures per layer to get feedback on which layer consumes the most time. Note: not all plugins may provide meaningful data.

Parameters
perfMap - a map of layer names to profiling information for each layer.

Implemented in InferenceEngine::AsyncInferRequestThreadSafeInternal.

◆ GetPreProcess()

virtual void InferenceEngine::IInferRequestInternal::GetPreProcess (const char *name, const PreProcessInfo **info) const
pure virtual

Gets pre-process for input data.

Parameters
name - a name of the input blob.
info - a pointer to a pointer to a PreProcessInfo structure.

Implemented in InferenceEngine::InferRequestInternal, and InferenceEngine::AsyncInferRequestThreadSafeInternal.

◆ Infer()

virtual void InferenceEngine::IInferRequestInternal::Infer ()
pure virtual

Infers specified input(s) in synchronous mode.

Note
Blocks all methods of IInferRequest while the request is ongoing (running or waiting in the queue).

Implemented in InferenceEngine::InferRequestInternal, and InferenceEngine::AsyncInferRequestThreadSafeInternal.

◆ SetBatch()

virtual void InferenceEngine::IInferRequestInternal::SetBatch (int batch)
pure virtual

Sets a new batch size when dynamic batching is enabled in the executable network that created this request.

Parameters
batch - the new batch size to be used by all following inference calls for this request.

Implemented in InferenceEngine::InferRequestInternal, and InferenceEngine::AsyncInferRequestThreadSafeInternal.

◆ SetBlob() [1/2]

virtual void InferenceEngine::IInferRequestInternal::SetBlob (const char *name, const Blob::Ptr &data)
pure virtual

Sets input/output data for inference.

Note
No memory allocation happens inside this call.
Parameters
name - a name of the input or output blob.
data - a reference to the input or output blob. The type of the Blob must correspond to the network input precision and size.

Implemented in InferenceEngine::InferRequestInternal, and InferenceEngine::AsyncInferRequestThreadSafeInternal.

◆ SetBlob() [2/2]

virtual void InferenceEngine::IInferRequestInternal::SetBlob (const char *name, const Blob::Ptr &data, const PreProcessInfo &info)
pure virtual

Sets an input blob together with its pre-processing information.

Parameters
name - a name of the input blob.
data - a reference to the input or output blob. The type of the Blob must correspond to the network input precision and size.
info - pre-processing info for the blob.

Implemented in InferenceEngine::InferRequestInternal, and InferenceEngine::AsyncInferRequestThreadSafeInternal.


The documentation for this interface was generated from the following file:
ie_iinfer_request_internal.hpp