Public Types | Public Member Functions | Protected Member Functions | Protected Attributes
InferenceEngine::ExecutableNetworkThreadSafeDefault Class Reference [abstract]

This class provides an optimal thread-safe default implementation. It is recommended as a base class for an Executable Network implementation during plugin development. More...

#include <ie_executable_network_thread_safe_default.hpp>

Inheritance diagram for InferenceEngine::ExecutableNetworkThreadSafeDefault:
InferenceEngine::ExecutableNetworkInternal InferenceEngine::IExecutableNetworkInternal

Public Types

typedef std::shared_ptr< ExecutableNetworkThreadSafeDefault > Ptr
 A shared pointer to an ExecutableNetworkThreadSafeDefault object.
 
- Public Types inherited from InferenceEngine::ExecutableNetworkInternal
typedef std::shared_ptr< ExecutableNetworkInternal > Ptr
 A shared pointer to an ExecutableNetworkInternal object.
 
- Public Types inherited from InferenceEngine::IExecutableNetworkInternal
typedef std::shared_ptr< IExecutableNetworkInternal > Ptr
 A shared pointer to the IExecutableNetworkInternal interface.
 

Public Member Functions

 ExecutableNetworkThreadSafeDefault (const ITaskExecutor::Ptr &taskExecutor=std::make_shared< CPUStreamsExecutor >(IStreamsExecutor::Config{"Default"}), const ITaskExecutor::Ptr &callbackExecutor=std::make_shared< CPUStreamsExecutor >(IStreamsExecutor::Config{"Callback"}))
 Constructs a new instance. More...
 
IInferRequest::Ptr CreateInferRequest () override
 A default implementation that creates an asynchronous inference request, so plugins do not need to implement it themselves. More...
 
- Public Member Functions inherited from InferenceEngine::ExecutableNetworkInternal
virtual void setNetworkInputs (const InferenceEngine::InputsDataMap networkInputs)
 Sets the network inputs info. More...
 
virtual void setNetworkOutputs (const InferenceEngine::OutputsDataMap networkOutputs)
 Sets the network outputs data. More...
 
ConstOutputsDataMap GetOutputsInfo () const override
 Gets the Executable network output Data node information. The received info is stored in the given Data node. This method needs to be called to find output names for filling the map of blobs later passed to InferenceEngine::IInferencePlugin::Infer() More...
 
ConstInputsDataMap GetInputsInfo () const override
 Gets the Executable network input Data node information. The received info is stored in the given InputsDataMap object. This method needs to be called to find input names for filling the map of blobs later passed to InferenceEngine::IInferencePlugin::Infer() More...
 
void Export (const std::string &modelFileName) override
 Export the current created executable network so it can be used later in the Import() main API. More...
 
void Export (std::ostream &networkModel) override
 Export the current created executable network so it can be used later in the Import() main API. More...
 
CNNNetwork GetExecGraphInfo () override
 Get executable graph information from a device. More...
 
void SetPointerToPlugin (IInferencePlugin::Ptr plugin)
 Sets the pointer to plugin internal. More...
 
std::vector< IVariableStateInternal::Ptr > QueryState () override
 Queries memory states. More...
 
void SetConfig (const std::map< std::string, Parameter > &config) override
 Sets configuration for current executable network. More...
 
Parameter GetConfig (const std::string &name) const override
 Gets configuration dedicated to plugin behaviour. More...
 
Parameter GetMetric (const std::string &name) const override
 Gets general runtime metric for dedicated hardware. More...
 
RemoteContext::Ptr GetContext () const override
 Gets the remote context. More...
 
- Public Member Functions inherited from InferenceEngine::IExecutableNetworkInternal
virtual ~IExecutableNetworkInternal ()=default
 Destroys the object.
 

Protected Member Functions

template<typename AsyncInferRequestType = AsyncInferRequestThreadSafeDefault>
IInferRequest::Ptr CreateAsyncInferRequestFromSync ()
 Creates an asynchronous inference request from the synchronous request returned by CreateInferRequestImpl. More...
 
virtual InferRequestInternal::Ptr CreateInferRequestImpl (InputsDataMap networkInputs, OutputsDataMap networkOutputs)=0
 Creates a synchronous inference request object used to infer the network. More...
 
- Protected Member Functions inherited from InferenceEngine::ExecutableNetworkInternal
virtual void ExportImpl (std::ostream &networkModel)
 Exports an internal hardware-dependent model to a stream. More...
 

Protected Attributes

ITaskExecutor::Ptr _taskExecutor = nullptr
 Holds a task executor.
 
ITaskExecutor::Ptr _callbackExecutor = nullptr
 Holds a callback executor.
 
- Protected Attributes inherited from InferenceEngine::ExecutableNetworkInternal
InferenceEngine::InputsDataMap _networkInputs
 Holds information about network inputs.
 
InferenceEngine::OutputsDataMap _networkOutputs
 Holds information about network outputs data.
 
IInferencePlugin::Ptr _plugin
 A pointer to a IInferencePlugin interface. More...
 

Detailed Description

This class provides an optimal thread-safe default implementation. It is recommended as a base class for an Executable Network implementation during plugin development.

Constructor & Destructor Documentation

◆ ExecutableNetworkThreadSafeDefault()

InferenceEngine::ExecutableNetworkThreadSafeDefault::ExecutableNetworkThreadSafeDefault ( const ITaskExecutor::Ptr & taskExecutor = std::make_shared<CPUStreamsExecutor>(IStreamsExecutor::Config{"Default"}),
const ITaskExecutor::Ptr & callbackExecutor = std::make_shared<CPUStreamsExecutor>(IStreamsExecutor::Config{"Callback"})
)
inline, explicit

Constructs a new instance.

Parameters
[in] taskExecutor The task executor used
[in] callbackExecutor The callback executor used
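
A plugin typically derives its own executable network class from this base and passes device-specific executors to this constructor. The sketch below illustrates the pattern; `TemplateExecutableNetwork` and the executor stream names are hypothetical, not part of the Inference Engine API:

```cpp
#include <ie_executable_network_thread_safe_default.hpp>

// Hypothetical plugin class, shown only to illustrate the constructor call.
class TemplateExecutableNetwork : public InferenceEngine::ExecutableNetworkThreadSafeDefault {
public:
    TemplateExecutableNetwork()
        : ExecutableNetworkThreadSafeDefault(
              // Executor that schedules inference pipeline tasks
              std::make_shared<InferenceEngine::CPUStreamsExecutor>(
                  InferenceEngine::IStreamsExecutor::Config{"TemplateStreams"}),
              // Executor that runs user completion callbacks
              std::make_shared<InferenceEngine::CPUStreamsExecutor>(
                  InferenceEngine::IStreamsExecutor::Config{"TemplateCallback"})) {}

    // The pure virtual CreateInferRequestImpl must still be overridden
    // (see CreateInferRequestImpl below).
};
```

Omitting both arguments is also valid: the defaults construct CPU stream executors named "Default" and "Callback".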

Member Function Documentation

◆ CreateAsyncInferRequestFromSync()

template<typename AsyncInferRequestType = AsyncInferRequestThreadSafeDefault>
IInferRequest::Ptr InferenceEngine::ExecutableNetworkThreadSafeDefault::CreateAsyncInferRequestFromSync ( )
inline, protected

Creates an asynchronous inference request from the synchronous request returned by CreateInferRequestImpl.

Template Parameters
AsyncInferRequestType A type of asynchronous inference request used as a wrapper for the synchronous request
Returns
A shared pointer to an asynchronous inference request
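
In practice this helper is called from a plugin's CreateInferRequest override. A minimal sketch, assuming a hypothetical plugin class `TemplateExecutableNetwork` and a hypothetical async wrapper `TemplateAsyncInferRequest` derived from AsyncInferRequestThreadSafeDefault:

```cpp
// Sketch only: both class names are illustrative assumptions.
InferenceEngine::IInferRequest::Ptr TemplateExecutableNetwork::CreateInferRequest() {
    // Wraps the synchronous request produced by CreateInferRequestImpl into an
    // asynchronous one scheduled on _taskExecutor / _callbackExecutor.
    return CreateAsyncInferRequestFromSync<TemplateAsyncInferRequest>();
}
```

Using the default template argument, `CreateAsyncInferRequestFromSync<>()`, wraps the request in AsyncInferRequestThreadSafeDefault directly.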

◆ CreateInferRequest()

IInferRequest::Ptr InferenceEngine::ExecutableNetworkThreadSafeDefault::CreateInferRequest ( )
inline, override, virtual

A default implementation that creates an asynchronous inference request, so plugins do not need to implement it themselves.

Returns
A shared pointer to the created asynchronous inference request

Implements InferenceEngine::IExecutableNetworkInternal.

◆ CreateInferRequestImpl()

virtual InferRequestInternal::Ptr InferenceEngine::ExecutableNetworkThreadSafeDefault::CreateInferRequestImpl ( InputsDataMap  networkInputs,
OutputsDataMap  networkOutputs 
)
protected, pure virtual

Creates a synchronous inference request object used to infer the network.

Note
Used by ExecutableNetworkThreadSafeDefault::CreateInferRequest as a plugin-specific implementation
Parameters
networkInputs An input info map needed to create input blobs
networkOutputs An output data map needed to create output blobs
Returns
Synchronous inference request object
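
This is the one member a plugin must implement. A minimal sketch under stated assumptions: `TemplateExecutableNetwork` and `TemplateInferRequest` (a class derived from InferRequestInternal that performs the actual device inference) are hypothetical names:

```cpp
// Sketch only: the synchronous request type is plugin-specific.
InferenceEngine::InferRequestInternal::Ptr
TemplateExecutableNetwork::CreateInferRequestImpl(
        InferenceEngine::InputsDataMap networkInputs,
        InferenceEngine::OutputsDataMap networkOutputs) {
    // The maps describe the input/output blobs the request must allocate;
    // the base class passes _networkInputs and _networkOutputs here.
    return std::make_shared<TemplateInferRequest>(networkInputs, networkOutputs);
}
```

The returned synchronous request is then wrapped by CreateAsyncInferRequestFromSync, so the plugin's inference code never deals with threading directly.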

The documentation for this class was generated from the following file: ie_executable_network_thread_safe_default.hpp