Threading API providing task executors for asynchronous operations.
|
| class | InferenceEngine::ExecutorManager |
| | This is the global access point for getting task executor objects by string id. When multiple asynchronous requests run in parallel, it ensures there is a unique executor per id, which avoids oversubscription. E.g., suppose there are two task executors for the CPU device: one in the FPGA plugin and another in MKLDNN. Running both of them in parallel leads to suboptimal CPU usage; it is more efficient to run the corresponding tasks one by one via a single executor. More...
|
| |
| class | InferenceEngine::ImmediateExecutor |
| | Task executor implementation that simply runs tasks in the current thread when the run() method is called. More...
|
| |
| class | InferenceEngine::CPUStreamsExecutor |
| | CPU Streams executor implementation. The executor splits the CPU into groups of threads that can be pinned to cores or NUMA nodes. It uses custom threads to pull tasks from a single queue. More...
|
| |
| interface | InferenceEngine::IStreamsExecutor |
| | Interface for Streams Task Executor. This executor groups worker threads into so-called streams. More...
|
| |
| interface | InferenceEngine::ITaskExecutor |
| | Interface for Task Executor. Inference Engine uses the InferenceEngine::ITaskExecutor interface to run all of its internal asynchronous tasks. Different implementations of task executors can be used for different purposes: More...
|
| |
|
|
using | InferenceEngine::Task = std::function< void()> |
| | The Inference Engine Task Executor can use any copyable callable with no parameters and no return value as a task. It is wrapped into a std::function object.
|
| |
| template<typename T > |
| using | InferenceEngine::ThreadLocal = tbb::enumerable_thread_specific< T > |
| | A wrapper class that keeps an object thread-local. More...
|
| |