InferenceEngine::InferRequestInternal Class Reference [abstract]

An optimal implementation of the IInferRequestInternal interface that avoids duplication across plugins. Recommended as a base class for a plugin's synchronous inference request implementation. More...

#include <ie_infer_request_internal.hpp>

Inheritance diagram for InferenceEngine::InferRequestInternal:
(base: InferenceEngine::IInferRequestInternal; derived: InferenceEngine::AsyncInferRequestInternal)

Public Types

typedef std::shared_ptr< InferRequestInternal > Ptr
 A shared pointer to an InferRequestInternal implementation.
 
- Public Types inherited from InferenceEngine::IInferRequestInternal
typedef std::shared_ptr< IInferRequestInternal > Ptr
 A shared pointer to an IInferRequestInternal interface.
 

Public Member Functions

 InferRequestInternal (const InputsDataMap &networkInputs, const OutputsDataMap &networkOutputs)
 Constructs a new instance. More...
 
virtual void InferImpl ()=0
 The minimal infer function to be implemented by plugins. It infers specified input(s) in synchronous mode. More...
 
void Infer () override
 Default common implementation for all plugins; checks input and output blobs before inference.
 
void SetBlob (const char *name, const Blob::Ptr &data) override
 Default implementation of setting a blob, so that plugins do not need to implement it themselves. More...
 
void GetBlob (const char *name, Blob::Ptr &data) override
 Default implementation of getting a blob, so that plugins do not need to implement it themselves. More...
 
void SetBlob (const char *name, const Blob::Ptr &data, const PreProcessInfo &info) override
 Sets pre-process for input data. More...
 
void GetPreProcess (const char *name, const PreProcessInfo **info) const override
 Gets pre-process for input data. More...
 
void setPointerToExecutableNetworkInternal (std::shared_ptr< ExecutableNetworkInternal > exeNetwork)
 Sets the pointer to executable network internal. More...
 
virtual void checkBlobs ()
 Checks that both input and output blobs are valid; throws an exception if they are not.
 
void SetBatch (int batch) override
 Sets new batch size when dynamic batching is enabled in executable network that created this request. More...
 
void execDataPreprocessing (InferenceEngine::BlobMap &inputs, bool serial=false)
 Checks and executes input data pre-processing if needed. More...
 
- Public Member Functions inherited from InferenceEngine::IInferRequestInternal
virtual ~IInferRequestInternal ()=default
 Destroys the object.
 
virtual void GetPerformanceCounts (std::map< std::string, InferenceEngineProfileInfo > &perfMap) const =0
 Queries performance measures per layer to find the most time-consuming layers. Note: not all plugins may provide meaningful data. More...
 

Protected Member Functions

bool findInputAndOutputBlobByName (const char *name, InputInfo::Ptr &foundInput, DataPtr &foundOutput) const
 Helper function to find input or output blob by name. More...
 
void checkBlob (const Blob::Ptr &blob, const std::string &name, bool isInput, const SizeVector &refDims={}) const
 Checks that a blob is valid; throws an exception if it is not. More...
 
bool preProcessingRequired (const InputInfo::Ptr &info, const Blob::Ptr &blob)
 Checks whether pre-processing step is required for a given input. More...
 

Protected Attributes

InferenceEngine::InputsDataMap _networkInputs
 Holds information about network inputs info.
 
InferenceEngine::OutputsDataMap _networkOutputs
 Holds information about network outputs data.
 
InferenceEngine::BlobMap _inputs
 A map of network input blobs.
 
InferenceEngine::BlobMap _outputs
 A map of network output blobs.
 
std::map< std::string, PreProcessDataPtr > _preProcData
 A map of pre-process data per input.
 
int m_curBatch
 Current batch value used in dynamic batching.
 
std::shared_ptr< ExecutableNetworkInternal > _exeNetwork
 A shared pointer to the ExecutableNetworkInternal interface. More...
 

Detailed Description

An optimal implementation of the IInferRequestInternal interface that avoids duplication across plugins. Recommended as a base class for a plugin's synchronous inference request implementation.

Constructor & Destructor Documentation

◆ InferRequestInternal()

InferenceEngine::InferRequestInternal::InferRequestInternal ( const InputsDataMap networkInputs,
const OutputsDataMap networkOutputs 
)
inline

Constructs a new instance.

Parameters
[in]  networkInputs   The network inputs info
[in]  networkOutputs  The network outputs data

Member Function Documentation

◆ checkBlob()

void InferenceEngine::InferRequestInternal::checkBlob ( const Blob::Ptr blob,
const std::string &  name,
bool  isInput,
const SizeVector refDims = {} 
) const
inlineprotected

Checks that a blob is valid; throws an exception if it is not.

Parameters
[in]  blob     The blob to check
[in]  name     The name of the input or output, depending on whether the blob is an input or an output
[in]  isInput  Indicates whether the blob is an input
[in]  refDims  The reference dims; empty if not specified

◆ execDataPreprocessing()

void InferenceEngine::InferRequestInternal::execDataPreprocessing ( InferenceEngine::BlobMap inputs,
bool  serial = false 
)
inline

Checks and executes input data pre-processing if needed.

Parameters
inputs  Input blobs to perform pre-processing on
serial  If true, execute the step in a single thread; if false (the default), multiple threads may be used

◆ findInputAndOutputBlobByName()

bool InferenceEngine::InferRequestInternal::findInputAndOutputBlobByName ( const char *  name,
InputInfo::Ptr foundInput,
DataPtr foundOutput 
) const
inlineprotected

Helper function to find input or output blob by name.

Parameters
name         A name of the input or output blob
foundInput   A pointer to the input information, if found
foundOutput  A pointer to the output DataPtr, if found
Returns
True if the loaded network has an input with the provided name; false if it has an output with that name
Exceptions
[parameter_mismatch]  if an input and an output share the given name
[not_found]           if there is no input or output layer with the given name

◆ GetBlob()

void InferenceEngine::InferRequestInternal::GetBlob ( const char *  name,
Blob::Ptr data 
)
inlineoverridevirtual

Default implementation of getting a blob, so that plugins do not need to implement it themselves.

Parameters
name  A name of the input or output blob
data  A reference to the input or output blob. The type of Blob must correspond to the network input precision and size.
Note
If an ROI blob was previously set, it is returned (without dimension checks) instead of the default blob.

Implements InferenceEngine::IInferRequestInternal.

◆ GetPreProcess()

void InferenceEngine::InferRequestInternal::GetPreProcess ( const char *  name,
const PreProcessInfo **  info 
) const
inlineoverridevirtual

Gets pre-process for input data.

Parameters
name  Name of the input blob
info  Pointer to a pointer to the PreProcessInfo structure

Implements InferenceEngine::IInferRequestInternal.

◆ InferImpl()

virtual void InferenceEngine::InferRequestInternal::InferImpl ( )
pure virtual

The minimal infer function to be implemented by plugins. It infers specified input(s) in synchronous mode.

Note
  • This method is used in InferRequestInternal::Infer, which runs the common code first and then calls this plugin-dependent implementation.
  • Blocks all methods of IInferRequest while the request is ongoing (running or waiting in a queue)

◆ preProcessingRequired()

bool InferenceEngine::InferRequestInternal::preProcessingRequired ( const InputInfo::Ptr info,
const Blob::Ptr blob 
)
inlineprotected

Checks whether pre-processing step is required for a given input.

Parameters
info  InputInfo corresponding to the input blob
blob  Input Blob object corresponding to the input info
Returns
True if pre-processing is required, false otherwise

◆ SetBatch()

void InferenceEngine::InferRequestInternal::SetBatch ( int  batch)
inlineoverridevirtual

Sets new batch size when dynamic batching is enabled in executable network that created this request.

Parameters
batch  New batch size to be used by all subsequent inference calls for this request

Implements InferenceEngine::IInferRequestInternal.

◆ SetBlob() [1/2]

void InferenceEngine::InferRequestInternal::SetBlob ( const char *  name,
const Blob::Ptr data 
)
inlineoverridevirtual

Default implementation of setting a blob, so that plugins do not need to implement it themselves.

Parameters
name  A name of the input or output blob
data  A reference to the input or output blob. The type of Blob must correspond to the network input precision and size.

Implements InferenceEngine::IInferRequestInternal.

◆ SetBlob() [2/2]

void InferenceEngine::InferRequestInternal::SetBlob ( const char *  name,
const Blob::Ptr data,
const PreProcessInfo info 
)
inlineoverridevirtual

Sets pre-process for input data.

Parameters
name  Name of the input blob
data  A reference to the input or output blob. The type of Blob must correspond to the network input precision and size.
info  Pre-process info for the blob

Implements InferenceEngine::IInferRequestInternal.

◆ setPointerToExecutableNetworkInternal()

void InferenceEngine::InferRequestInternal::setPointerToExecutableNetworkInternal ( std::shared_ptr< ExecutableNetworkInternal > exeNetwork)
inline

Sets the pointer to executable network internal.

Note
Needed to correctly handle ownership between objects.
Parameters
[in]  exeNetwork  The executable network

Field Documentation

◆ _exeNetwork

std::shared_ptr<ExecutableNetworkInternal> InferenceEngine::InferRequestInternal::_exeNetwork
protected

A shared pointer to ExecutableNetworkInternal interface.

Note
Needed to correctly handle ownership between objects.

The documentation for this class was generated from the following file:
ie_infer_request_internal.hpp