Public Types | Public Member Functions | Protected Member Functions | Protected Attributes
InferenceEngine::InferRequestInternal Class Reference [abstract]

An optimal implementation of the IInferRequestInternal interface that avoids code duplication across plugins. This base class is recommended as a base for plugin synchronous inference request implementations. More...

#include <ie_infer_request_internal.hpp>

Inheritance diagram for InferenceEngine::InferRequestInternal:
Inherits InferenceEngine::IInferRequestInternal. Inherited by InferenceEngine::AsyncInferRequestInternal.

Public Types

typedef std::shared_ptr< InferRequestInternal > Ptr
 A shared pointer to an InferRequestInternal implementation.
 
- Public Types inherited from InferenceEngine::IInferRequestInternal
typedef std::shared_ptr< IInferRequestInternal > Ptr
 A shared pointer to an IInferRequestInternal interface.
 

Public Member Functions

 InferRequestInternal (const InputsDataMap &networkInputs, const OutputsDataMap &networkOutputs)
 Constructs a new instance. More...
 
virtual void InferImpl ()=0
 The minimal infer function to be implemented by plugins. It infers specified input(s) in synchronous mode. More...
 
void Infer () override
 Default common implementation for all plugins that checks input and output blobs before inference.
 
void Cancel () override
 Default common implementation for all plugins.
 
void SetBlob (const std::string &name, const Blob::Ptr &userBlob) override
 Provides a default implementation of setting a blob so that plugins do not need to implement it. More...
 
Blob::Ptr GetBlob (const std::string &name) override
 Provides a default implementation of getting a blob so that plugins do not need to implement it. More...
 
void SetBlob (const std::string &name, const Blob::Ptr &data, const PreProcessInfo &info) override
 Sets pre-process for input data. More...
 
const PreProcessInfo & GetPreProcess (const std::string &name) const override
 Gets pre-process for input data. More...
 
void SetBatch (int batch) override
 Sets new batch size when dynamic batching is enabled in executable network that created this request. More...
 
void setPointerToExecutableNetworkInternal (std::shared_ptr< ExecutableNetworkInternal > exeNetwork)
 Sets the pointer to executable network internal. More...
 
virtual void checkBlobs ()
 Checks that both input and output blobs are valid. Throws an exception if they are not.
 
std::vector< IVariableStateInternal::Ptr > QueryState () override
 Queries memory states. More...
 
- Public Member Functions inherited from InferenceEngine::IInferRequestInternal
virtual ~IInferRequestInternal ()=default
 Destroys the object.
 
virtual std::map< std::string, InferenceEngineProfileInfo > GetPerformanceCounts () const =0
 Queries performance measures per layer to get feedback on which layers are the most time-consuming. Note: not all plugins provide meaningful data. More...
 

Protected Member Functions

void execDataPreprocessing (InferenceEngine::BlobMap &preprocessedBlobs, bool serial=false)
 Checks and executes input data pre-processing if needed. More...
 
bool findInputAndOutputBlobByName (const std::string &name, InputInfo::Ptr &foundInput, DataPtr &foundOutput) const
 Helper function to find input or output blob by name. More...
 
void checkBlob (const Blob::Ptr &blob, const std::string &name, bool isInput, const SizeVector &refDims={}) const
 Check that blob is valid. Throws an exception if it's not. More...
 
bool preProcessingRequired (const InputInfo::Ptr &info, const Blob::Ptr &userBlob, const Blob::Ptr &deviceBlob=nullptr)
 Checks whether pre-processing step is required for a given input. More...
 
void addInputPreProcessingFor (const std::string &name, Blob::Ptr const &from, const Blob::Ptr &to)
 

Protected Attributes

InferenceEngine::InputsDataMap _networkInputs
 Holds the network inputs info.
 
InferenceEngine::OutputsDataMap _networkOutputs
 Holds the network outputs data.
 
InferenceEngine::BlobMap _inputs
 A map of user passed blobs for network inputs.
 
InferenceEngine::BlobMap _deviceInputs
 A map of actual network inputs, in plugin specific format.
 
InferenceEngine::BlobMap _outputs
 A map of user passed blobs for network outputs.
 
std::map< std::string, PreProcessDataPtr > _preProcData
 A map of pre-process data per input.
 
int m_curBatch
 Current batch value used in dynamic batching.
 
std::shared_ptr< ExecutableNetworkInternal > _exeNetwork
 A shared pointer to ExecutableNetworkInternal interface. More...
 

Detailed Description

An optimal implementation of the IInferRequestInternal interface that avoids code duplication across plugins. This base class is recommended as a base for plugin synchronous inference request implementations.
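The split the class description implies, where the base class owns the common validation in Infer() and delegates the device-specific work to the pure virtual InferImpl(), can be sketched as a template-method pattern. The types below (SketchInferRequestBase, SketchPluginRequest) are hypothetical stand-ins, not the real Inference Engine classes:

```cpp
#include <stdexcept>

// Hypothetical stand-in mirroring the InferRequestInternal design:
// Infer() validates blobs first, then calls the plugin-specific InferImpl().
class SketchInferRequestBase {
public:
    virtual ~SketchInferRequestBase() = default;

    // Mirrors InferRequestInternal::Infer(): common checks, then delegation.
    void Infer() {
        checkBlobs();
        InferImpl();
    }

    // Mirrors InferRequestInternal::checkBlobs(): throws if blobs are invalid.
    virtual void checkBlobs() {
        if (!blobsValid) throw std::runtime_error("invalid blobs");
    }

    bool blobsValid = true;  // simplified stand-in for real blob validation

protected:
    // Mirrors the pure virtual InferRequestInternal::InferImpl().
    virtual void InferImpl() = 0;
};

// A "plugin" only has to supply the synchronous inference step.
class SketchPluginRequest : public SketchInferRequestBase {
public:
    int inferCalls = 0;

protected:
    void InferImpl() override { ++inferCalls; }
};
```

With this structure a plugin never re-implements the blob checks; failing validation prevents InferImpl() from running at all.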

Constructor & Destructor Documentation

◆ InferRequestInternal()

InferenceEngine::InferRequestInternal::InferRequestInternal ( const InputsDataMap &  networkInputs,
const OutputsDataMap &  networkOutputs 
)
inline

Constructs a new instance.

Parameters
[in] networkInputs  The network inputs info
[in] networkOutputs  The network outputs data

Member Function Documentation

◆ checkBlob()

void InferenceEngine::InferRequestInternal::checkBlob ( const Blob::Ptr &  blob,
const std::string &  name,
bool  isInput,
const SizeVector &  refDims = {} 
) const
inlineprotected

Check that blob is valid. Throws an exception if it's not.

Parameters
[in] blob  The blob to check
[in] name  The name of the input or output the blob corresponds to
[in] isInput  Indicates whether the blob is an input
[in] refDims  The reference dims; empty if not specified

◆ execDataPreprocessing()

void InferenceEngine::InferRequestInternal::execDataPreprocessing ( InferenceEngine::BlobMap &  preprocessedBlobs,
bool  serial = false 
)
inlineprotected

Checks and executes input data pre-processing if needed.

Parameters
preprocessedBlobs  Blobs to perform pre-processing on
serial  When true, executes the pre-processing step serially (in a single thread)

◆ findInputAndOutputBlobByName()

bool InferenceEngine::InferRequestInternal::findInputAndOutputBlobByName ( const std::string &  name,
InputInfo::Ptr &  foundInput,
DataPtr &  foundOutput 
) const
inlineprotected

Helper function to find input or output blob by name.

Parameters
name  A name of an input or output blob.
foundInput  Set to the input information if an input is found.
foundOutput  Set to the output DataPtr if an output is found.
Returns
True if the loaded network has an input with the provided name; false if it has an output with the provided name
Exceptions
[parameter_mismatch]  exception if an input and an output share the same name
[not_found]  exception if there is no input or output layer with the given name
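The contract above, true for an input name, false for an output name, and an exception otherwise, can be sketched with plain maps. This is a hypothetical illustration (findByNameSketch and the int-valued maps are stand-ins; the real method fills InputInfo::Ptr / DataPtr out-parameters, elided here):

```cpp
#include <map>
#include <stdexcept>
#include <string>

// Hypothetical sketch of the findInputAndOutputBlobByName lookup contract.
bool findByNameSketch(const std::map<std::string, int>& inputs,
                      const std::map<std::string, int>& outputs,
                      const std::string& name) {
    const bool isInput = inputs.count(name) != 0;
    const bool isOutput = outputs.count(name) != 0;
    if (isInput && isOutput)  // mirrors the parameter_mismatch exception
        throw std::invalid_argument("parameter_mismatch: '" + name +
                                    "' is both an input and an output");
    if (!isInput && !isOutput)  // mirrors the not_found exception
        throw std::out_of_range("not_found: no layer named '" + name + "'");
    return isInput;  // true => input, false => output
}
```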

◆ GetBlob()

Blob::Ptr InferenceEngine::InferRequestInternal::GetBlob ( const std::string &  name)
inlineoverridevirtual

Provides a default implementation of getting a blob so that plugins do not need to implement it.

Parameters
name  A name of an input or output blob.
Returns
The input or output blob. The type of Blob must correspond to the network input precision and size.
Note
If an ROI blob was previously set, it is returned (without dimension checks) instead of the default blob.

Implements InferenceEngine::IInferRequestInternal.
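A default GetBlob/SetBlob pair like this typically resolves names against name-keyed blob maps (the class keeps such maps in _inputs and _outputs). A hypothetical, self-contained sketch of that storage pattern, with BlobPtrSketch standing in for the real Blob::Ptr:

```cpp
#include <map>
#include <memory>
#include <stdexcept>
#include <string>
#include <vector>

// Stand-in for Blob::Ptr: a shared pointer to some tensor-like storage.
using BlobPtrSketch = std::shared_ptr<std::vector<float>>;

// Hypothetical sketch of map-backed blob storage behind SetBlob/GetBlob.
class BlobStoreSketch {
public:
    void SetBlob(const std::string& name, const BlobPtrSketch& blob) {
        if (!blob) throw std::invalid_argument("blob is null");
        blobs_[name] = blob;  // user blob is kept by name
    }
    BlobPtrSketch GetBlob(const std::string& name) const {
        auto it = blobs_.find(name);
        if (it == blobs_.end()) throw std::out_of_range("unknown blob: " + name);
        return it->second;  // the same shared pointer the user set
    }
private:
    std::map<std::string, BlobPtrSketch> blobs_;
};
```

Because blobs are shared pointers, GetBlob hands back the very object the user supplied, so data written through either handle is visible to both sides.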

◆ GetPreProcess()

const PreProcessInfo& InferenceEngine::InferRequestInternal::GetPreProcess ( const std::string &  name) const
inlineoverridevirtual

Gets pre-process for input data.

Parameters
name  Name of the input blob.
Returns
A constant reference to the PreProcessInfo structure

Implements InferenceEngine::IInferRequestInternal.

◆ InferImpl()

virtual void InferenceEngine::InferRequestInternal::InferImpl ( )
pure virtual

The minimal infer function to be implemented by plugins. It infers specified input(s) in synchronous mode.

Note
  • This method is used in InferRequestInternal::Infer, which runs the common code first and then calls this plugin-dependent implementation.
  • Blocks all methods of IInferRequest while the request is ongoing (running or waiting in a queue)

◆ preProcessingRequired()

bool InferenceEngine::InferRequestInternal::preProcessingRequired ( const InputInfo::Ptr &  info,
const Blob::Ptr &  userBlob,
const Blob::Ptr &  deviceBlob = nullptr 
)
inlineprotected

Checks whether pre-processing step is required for a given input.

Parameters
info  InputInfo corresponding to the input blob
userBlob  Input Blob object corresponding to the input info
deviceBlob  Blob object in the plugin's desired format
Returns
True if pre-processing is required, false otherwise
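The exact conditions this check evaluates are plugin- and version-specific; as a rough, hypothetical sketch, one can assume the step is needed when the preprocess info requests a transformation (such as a resize or color conversion) or when the user blob's properties differ from what the device blob expects. All names below are illustrative stand-ins:

```cpp
// Hypothetical stand-in for the relevant parts of PreProcessInfo.
struct PreProcessInfoSketch {
    bool resizeRequested = false;
    bool colorConversionRequested = false;
};

// Hypothetical sketch of the kind of decision preProcessingRequired() makes:
// pre-processing is needed if a transform was requested, or if the user and
// device blob precisions (encoded here as ints) do not match.
bool preProcessingRequiredSketch(const PreProcessInfoSketch& info,
                                 int userPrecision, int devicePrecision) {
    return info.resizeRequested || info.colorConversionRequested ||
           userPrecision != devicePrecision;
}
```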

◆ QueryState()

std::vector<IVariableStateInternal::Ptr> InferenceEngine::InferRequestInternal::QueryState ( )
inlineoverridevirtual

Queries memory states.

Returns
Returns memory states

Implements InferenceEngine::IInferRequestInternal.

◆ SetBatch()

void InferenceEngine::InferRequestInternal::SetBatch ( int  batch)
inlineoverridevirtual

Sets new batch size when dynamic batching is enabled in executable network that created this request.

Parameters
batch  New batch size to be used by all subsequent inference calls for this request.

Implements InferenceEngine::IInferRequestInternal.

◆ SetBlob() [1/2]

void InferenceEngine::InferRequestInternal::SetBlob ( const std::string &  name,
const Blob::Ptr &  data,
const PreProcessInfo &  info 
)
inlineoverridevirtual

Sets pre-process for input data.

Parameters
name  Name of the input blob.
data  A reference to the input or output blob. The type of Blob must correspond to the network input precision and size.
info  Preprocess info for the blob.

Implements InferenceEngine::IInferRequestInternal.

◆ SetBlob() [2/2]

void InferenceEngine::InferRequestInternal::SetBlob ( const std::string &  name,
const Blob::Ptr &  userBlob 
)
inlineoverridevirtual

Provides a default implementation of setting a blob so that plugins do not need to implement it.

Parameters
name  A name of an input or output blob.
userBlob  A reference to the input or output blob. The type of Blob must correspond to the network input precision and size.

Implements InferenceEngine::IInferRequestInternal.

◆ setPointerToExecutableNetworkInternal()

void InferenceEngine::InferRequestInternal::setPointerToExecutableNetworkInternal ( std::shared_ptr< ExecutableNetworkInternal >  exeNetwork)
inline

Sets the pointer to executable network internal.

Note
Needed to correctly handle ownership between objects.
Parameters
[in] exeNetwork  The executable network

Field Documentation

◆ _exeNetwork

std::shared_ptr<ExecutableNetworkInternal> InferenceEngine::InferRequestInternal::_exeNetwork
protected

A shared pointer to ExecutableNetworkInternal interface.

Note
Needed to correctly handle ownership between objects.
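The ownership note can be made concrete with plain std::shared_ptr: because the request holds a shared reference, the network stays alive even after its creator drops its own reference. NetworkSketch and RequestSketch below are hypothetical stand-ins, not the real classes:

```cpp
#include <memory>

// Hypothetical stand-in for ExecutableNetworkInternal.
struct NetworkSketch {
    int value = 42;
};

// Hypothetical stand-in for the request; exeNetwork mirrors _exeNetwork.
struct RequestSketch {
    std::shared_ptr<NetworkSketch> exeNetwork;
};
```

Holding the network by shared_ptr (rather than raw pointer) is what makes it safe for a request to outlive the code that created the network.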

The documentation for this class was generated from the following file:
ie_infer_request_internal.hpp