Threading utilities

Threading API providing task executors for asynchronous operations. More...

Data Structures

class  InferenceEngine::ExecutorManager
 This is a global access point for obtaining task executor objects by a string id. It is needed when multiple asynchronous requests must share unique executors to avoid oversubscription. For example, if there are two task executors for the CPU device, one in the FPGA plugin and another in the MKLDNN plugin, running both in parallel leads to suboptimal CPU usage; it is more efficient to run the corresponding tasks one by one via a single executor. More...
 
class  InferenceEngine::ImmediateExecutor
 Task executor implementation that runs tasks synchronously in the calling thread when the run() method is invoked. More...
 
struct  InferenceEngine::ReleaseProcessMaskDeleter
 Deleter for process mask. More...
 
class  InferenceEngine::CPUStreamsExecutor
 CPU streams executor implementation. The executor splits the CPU into groups of threads (streams) that can be pinned to cores or NUMA nodes. It uses custom threads to pull tasks from a single queue. More...
 
interface  InferenceEngine::IStreamsExecutor
 Interface for Streams Task Executor. This executor groups worker threads into so-called streams. More...
 
interface  InferenceEngine::ITaskExecutor
 Interface for Task Executor. Inference Engine uses the InferenceEngine::ITaskExecutor interface to run all asynchronous internal tasks. Different implementations of task executors can be used for different purposes: More...
 

Typedefs

using InferenceEngine::Task = std::function< void()>
 An Inference Engine task executor can use any copyable callable that takes no parameters and returns nothing as a task. The callable is wrapped into an std::function object.
 
using InferenceEngine::CpuSet = std::unique_ptr< cpu_set_t, ReleaseProcessMaskDeleter >
 A unique pointer to a CPU set structure that uses ReleaseProcessMaskDeleter as its deleter.
 
template<typename T >
using InferenceEngine::ThreadLocal = tbb::enumerable_thread_specific< T >
 A wrapper class that keeps an object thread-local. More...
 

Functions

void InferenceEngine::ReleaseProcessMask (cpu_set_t *mask)
 Release the cores affinity mask for the current process. More...
 
std::tuple< CpuSet, int > InferenceEngine::GetProcessMask ()
 Get the cores affinity mask for the current process. More...
 
bool InferenceEngine::PinThreadToVacantCore (int thrIdx, int hyperThreads, int ncores, const CpuSet &processMask)
 Pins the current thread to a spare core in a round-robin scheme, while respecting the given process mask. The function also handles hyper-threading (by populating the physical cores first). More...
 
bool InferenceEngine::PinCurrentThreadByMask (int ncores, const CpuSet &processMask)
 Pins the current thread to a set of cores determined by the mask. More...
 
bool InferenceEngine::PinCurrentThreadToSocket (int socket)
 Pins the current thread to a socket. More...
 

Detailed Description

Threading API providing task executors for asynchronous operations.
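
Example: a minimal sketch of defining a Task and running it with the ImmediateExecutor described above. The include paths (threading/ie_itask_executor.hpp, threading/ie_immediate_executor.hpp) are assumptions, since this page does not list the headers.

    #include <threading/ie_immediate_executor.hpp>  // assumed header for ImmediateExecutor
    #include <threading/ie_itask_executor.hpp>      // assumed header for ITaskExecutor / Task
    #include <iostream>

    int main() {
        // Any copyable callable with no parameters and no return value is a valid Task.
        InferenceEngine::Task task = [] { std::cout << "task executed" << std::endl; };

        // ImmediateExecutor is the simplest ITaskExecutor: run() executes the task
        // synchronously in the calling thread.
        InferenceEngine::ImmediateExecutor executor;
        executor.run(task);
        return 0;
    }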

Typedef Documentation

◆ ThreadLocal

template<typename T >
using InferenceEngine::ThreadLocal = tbb::enumerable_thread_specific<T>

A wrapper class that keeps an object thread-local.

Template Parameters
T    A type of object to keep thread-local.
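
Example: a minimal sketch of keeping a per-thread counter with ThreadLocal; local() and combine() are members of the underlying tbb::enumerable_thread_specific. The include path (ie_parallel.hpp) is an assumption.

    #include <ie_parallel.hpp>  // assumed header providing InferenceEngine::ThreadLocal
    #include <functional>
    #include <iostream>

    int main() {
        // Every thread that accesses `counter` gets its own copy, initialized from the exemplar 0.
        InferenceEngine::ThreadLocal<int> counter(0);
        counter.local() += 1;  // operates on the calling thread's copy only
        std::cout << "sum over all threads: " << counter.combine(std::plus<int>()) << std::endl;
        return 0;
    }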

Function Documentation

◆ GetProcessMask()

std::tuple<CpuSet, int> InferenceEngine::GetProcessMask ( )

Get the cores affinity mask for the current process.

Returns
A tuple of the core affinity mask for the current process and the total number of CPU cores
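
Example: a minimal sketch of querying the process affinity mask. Interpreting the int element of the returned tuple as the total number of CPU cores is an assumption (it matches how the pinning helpers below consume it), and the include path is assumed as well.

    #include <threading/ie_thread_affinity.hpp>  // assumed header for GetProcessMask / CpuSet
    #include <iostream>
    #include <tuple>

    int main() {
        InferenceEngine::CpuSet mask;
        int ncores = 0;
        std::tie(mask, ncores) = InferenceEngine::GetProcessMask();  // mask may be null on failure
        if (mask)
            std::cout << "process affinity mask obtained, cores: " << ncores << std::endl;
        return 0;
    }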

◆ PinCurrentThreadByMask()

bool InferenceEngine::PinCurrentThreadByMask ( int ncores, const CpuSet & processMask )

Pins the current thread to a set of cores determined by the mask.

Parameters
[in]  ncores       The number of CPU cores
[in]  processMask  The process affinity mask
Returns
True in case of success, false otherwise
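
Example: a minimal sketch (assumed include path) that restricts the calling thread to a previously obtained process mask, e.g. the result of GetProcessMask().

    #include <threading/ie_thread_affinity.hpp>  // assumed header for the affinity helpers

    // Pin the calling thread to the cores allowed by an already-obtained process mask.
    bool RestrictToProcessMask(int ncores, const InferenceEngine::CpuSet& processMask) {
        if (!processMask)
            return false;  // no mask available, nothing to pin to
        return InferenceEngine::PinCurrentThreadByMask(ncores, processMask);
    }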

◆ PinCurrentThreadToSocket()

bool InferenceEngine::PinCurrentThreadToSocket ( int  socket)

Pins the current thread to a socket.

Parameters
[in]  socket  The socket id
Returns
True in case of success, false otherwise
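
Example: a minimal sketch (assumed include path) that binds the calling thread to the cores of NUMA socket 0.

    #include <threading/ie_thread_affinity.hpp>  // assumed header

    int main() {
        // Returns false when pinning to the socket is not possible.
        const bool ok = InferenceEngine::PinCurrentThreadToSocket(0);
        return ok ? 0 : 1;
    }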

◆ PinThreadToVacantCore()

bool InferenceEngine::PinThreadToVacantCore ( int thrIdx, int hyperThreads, int ncores, const CpuSet & processMask )

Pins the current thread to a spare core in a round-robin scheme, while respecting the given process mask. The function also handles hyper-threading (by populating the physical cores first).

Parameters
[in]  thrIdx        The thread index (used to pick a core in round-robin order)
[in]  hyperThreads  The hyper-threads count
[in]  ncores        The number of CPU cores
[in]  processMask   The process affinity mask
Returns
True in case of success, false otherwise
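
Example: a minimal sketch (assumed include path) that pins each of several worker threads to its own core from the process mask. Passing 1 for hyperThreads is an assumption made for the illustration, not a documented default.

    #include <threading/ie_thread_affinity.hpp>  // assumed header
    #include <thread>
    #include <vector>

    // Start `n` workers; each one picks a core out of the process mask based on its index.
    void StartPinnedWorkers(int n, int ncores, const InferenceEngine::CpuSet& processMask) {
        std::vector<std::thread> workers;
        for (int thrIdx = 0; thrIdx < n; ++thrIdx) {
            workers.emplace_back([thrIdx, ncores, &processMask] {
                InferenceEngine::PinThreadToVacantCore(thrIdx, /*hyperThreads=*/1, ncores, processMask);
                // ... worker body ...
            });
        }
        for (auto& w : workers)
            w.join();
    }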

◆ ReleaseProcessMask()

void InferenceEngine::ReleaseProcessMask ( cpu_set_t *  mask)

Release the cores affinity mask for the current process.

Parameters
mask  The CPU affinity mask to release
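
Example: a minimal sketch (assumed include path) showing that an explicit call is usually unnecessary: CpuSet already invokes ReleaseProcessMask through ReleaseProcessMaskDeleter when it goes out of scope, so the manual call applies only to a raw pointer.

    #include <threading/ie_thread_affinity.hpp>  // assumed header
    #include <tuple>

    int main() {
        InferenceEngine::CpuSet mask;
        int ncores = 0;
        std::tie(mask, ncores) = InferenceEngine::GetProcessMask();

        // Taking the raw pointer out of CpuSet transfers ownership to us,
        // so the mask must now be released manually.
        cpu_set_t* raw = mask.release();
        InferenceEngine::ReleaseProcessMask(raw);
        return 0;
    }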