InferenceEngine::IStreamsExecutor Interface Reference (abstract)

Interface for Streams Task Executor. This executor groups worker threads into so-called streams. More...

#include <ie_istreams_executor.hpp>

Inheritance diagram for InferenceEngine::IStreamsExecutor: derives from InferenceEngine::ITaskExecutor; implemented by InferenceEngine::CPUStreamsExecutor.

Data Structures

struct  Config
 Defines IStreamsExecutor configuration. More...
 

Public Types

enum  ThreadBindingType : std::uint8_t {
  NONE,
  CORES,
  NUMA
}
 Defines thread binding type. More...
 
using Ptr = std::shared_ptr< IStreamsExecutor >
 
- Public Types inherited from InferenceEngine::ITaskExecutor
using Ptr = std::shared_ptr< ITaskExecutor >
 

Public Member Functions

 ~IStreamsExecutor () override
 A virtual destructor.
 
virtual int GetStreamId ()=0
 Return the index of the current stream. More...
 
virtual int GetNumaNodeId ()=0
 Return the ID of the current NUMA node. More...
 
virtual void Execute (Task task)=0
 Execute the task in the current thread, using the streams executor configuration and constraints. More...
 
- Public Member Functions inherited from InferenceEngine::ITaskExecutor
virtual ~ITaskExecutor ()=default
 Destroys the object.
 
virtual void run (Task task)=0
 Execute InferenceEngine::Task inside task executor context. More...
 
virtual void runAndWait (const std::vector< Task > &tasks)
 Execute all of the tasks and wait for their completion. The default runAndWait() implementation uses the pure virtual run() method together with higher-level synchronization primitives from the STL: each task is wrapped into a std::packaged_task that returns a std::future. The std::packaged_task invokes the task and signals the std::future that the task has finished or that an exception was thrown from it; the std::future is then used to wait for completion and to extract any exception. More...
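A minimal sketch of how such a default implementation could be built on top of run() is shown below. It only mirrors the description above and is not the library's actual source; runAndWaitSketch is a hypothetical free function.

    // Minimal sketch of a run()-based runAndWait(), per the description above.
    #include <ie_istreams_executor.hpp>  // also brings in ITaskExecutor and Task

    #include <future>
    #include <memory>
    #include <vector>

    void runAndWaitSketch(InferenceEngine::ITaskExecutor& executor,
                          const std::vector<InferenceEngine::Task>& tasks) {
        std::vector<std::future<void>> futures;
        futures.reserve(tasks.size());
        for (const auto& task : tasks) {
            // Wrap the task so completion (or a thrown exception) is reported via a future
            auto packed = std::make_shared<std::packaged_task<void()>>(task);
            futures.emplace_back(packed->get_future());
            executor.run([packed] { (*packed)(); });  // run() executes inside the executor context
        }
        // get() blocks until each task finishes and rethrows any exception it raised
        for (auto& future : futures) {
            future.get();
        }
    }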
 

Detailed Description

Interface for Streams Task Executor. This executor groups worker threads into so-called streams.

CPU
The executor runs all parallel tasks on threads from a single stream. With proper pinning settings this should reduce cache misses for memory-bound workloads.
NUMA
On NUMA hosts, the GetNumaNodeId() method can be used to determine the NUMA node of the current stream.
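
The sketch below shows one way the interface could be used through its CPUStreamsExecutor implementation. The CPUStreamsExecutor header name and the Config fields set here (_name, _streams, _threadsPerStream, _threadBindingType) are assumptions based on this reference and typical IStreamsExecutor usage, not verified against the headers.

    // Illustrative sketch only; header name and Config field names are assumptions.
    #include <ie_cpu_streams_executor.hpp>  // assumed header for CPUStreamsExecutor

    #include <iostream>
    #include <memory>

    int main() {
        using namespace InferenceEngine;

        IStreamsExecutor::Config config;
        config._name              = "ExampleStreamsExecutor";
        config._streams           = 4;  // four parallel streams
        config._threadsPerStream  = 1;  // one worker thread per stream
        config._threadBindingType = IStreamsExecutor::ThreadBindingType::CORES;  // pin to cores

        IStreamsExecutor::Ptr executor = std::make_shared<CPUStreamsExecutor>(config);

        // Tasks submitted through the inherited run()/runAndWait() API are dispatched
        // to worker threads that are grouped into the configured streams.
        executor->runAndWait({[&] {
            std::cout << "running in stream " << executor->GetStreamId()
                      << " on NUMA node "     << executor->GetNumaNodeId() << std::endl;
        }});
        return 0;
    }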

Member Typedef Documentation

◆ Ptr

A shared pointer to the IStreamsExecutor interface.

Member Enumeration Documentation

◆ ThreadBindingType

Defines thread binding type.

Enumerator
NONE 

Don't bind threads.

CORES 

Bind threads to cores.

NUMA 

Bind threads to NUMA nodes.
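
As an illustration of how these enumerators might be chosen, the hypothetical helper below maps host properties to a binding type. The heuristic (NUMA binding on multi-socket hosts, core pinning for latency-sensitive runs) is only an illustration, not guidance from the library.

    // Hypothetical helper that selects one of the enumerators above.
    #include <ie_istreams_executor.hpp>

    InferenceEngine::IStreamsExecutor::ThreadBindingType
    chooseBindingType(int numaNodeCount, bool latencySensitive) {
        using Binding = InferenceEngine::IStreamsExecutor::ThreadBindingType;
        if (numaNodeCount > 1) return Binding::NUMA;   // keep each stream's threads on one NUMA node
        if (latencySensitive)  return Binding::CORES;  // pin threads to fixed cores
        return Binding::NONE;                          // leave scheduling to the OS
    }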

Member Function Documentation

◆ Execute()

virtual void InferenceEngine::IStreamsExecutor::Execute ( Task  task)
pure virtual

Execute the task in the current thread, using the streams executor configuration and constraints.

Parameters
task: A task to start

Implemented in InferenceEngine::CPUStreamsExecutor.
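
A minimal usage sketch, assuming executor is an IStreamsExecutor::Ptr obtained elsewhere (for example from a CPUStreamsExecutor); the helper name is hypothetical. Unlike run(), Execute() runs the task in the calling thread, but still under the executor's stream configuration and constraints.

    #include <ie_istreams_executor.hpp>

    void runInCallerThread(const InferenceEngine::IStreamsExecutor::Ptr& executor) {
        executor->Execute([] {
            // CPU-bound work that should respect the executor's pinning and stream limits
        });
    }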

◆ GetNumaNodeId()

virtual int InferenceEngine::IStreamsExecutor::GetNumaNodeId ( )
pure virtual

Return the ID of the current NUMA node.

Returns
The ID of the current NUMA node; throws an exception if called from outside a stream thread.

Implemented in InferenceEngine::CPUStreamsExecutor.
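
A minimal sketch of NUMA-aware usage, assuming a read-only copy of some shared data has been prepared per NUMA node (perNodeData and the helper are hypothetical) and that the caller keeps both arguments alive until the task completes.

    #include <ie_istreams_executor.hpp>

    #include <vector>

    void useNumaLocalCopy(const InferenceEngine::IStreamsExecutor::Ptr& executor,
                          const std::vector<std::vector<float>>& perNodeData) {
        executor->run([&] {
            const int node = executor->GetNumaNodeId();    // NUMA node of the current stream
            const auto& localCopy = perNodeData.at(node);  // read the node-local copy
            (void)localCopy;                               // ... use it in the hot loop ...
        });
    }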

◆ GetStreamId()

virtual int InferenceEngine::IStreamsExecutor::GetStreamId ( )
pure virtual

Return the index of the current stream.

Returns
The index of the current stream; throws an exception if called from outside a stream thread.

Implemented in InferenceEngine::CPUStreamsExecutor.
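
A minimal sketch that uses the stream index to address per-stream scratch buffers. The dense zero-based indexing and the scratch layout are assumptions on top of this reference (which only says the method returns the index of the current stream), the helper name is hypothetical, and the caller is assumed to keep both arguments alive until the task finishes.

    #include <ie_istreams_executor.hpp>

    #include <vector>

    void fillStreamScratch(const InferenceEngine::IStreamsExecutor::Ptr& executor,
                           std::vector<std::vector<float>>& perStreamScratch) {
        executor->run([&] {
            const int streamId = executor->GetStreamId();   // throws outside a stream thread
            auto& scratch = perStreamScratch.at(streamId);  // this stream's private buffer
            scratch.assign(1024, 0.0f);                     // stream-local work without locking
        });
    }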


The documentation for this interface was generated from the following file:
ie_istreams_executor.hpp