InferenceEngine Namespace Reference

Inference Engine Plugin API namespace. More...

Namespaces

 details
 A namespace with non-public Inference Engine Plugin API.
 
 PluginConfigInternalParams
 A namespace with internal plugin configuration keys.
 
 PrecisionUtils
 Namespace for precision utilities.
 

Data Structures

class  AsyncInferRequestInternal
 Minimum API to be implemented by a plugin; it is used in the InferRequestBase forwarding mechanism. More...
 
class  AsyncInferRequestThreadSafeDefault
 Base class with a default implementation of an asynchronous multi-staged inference request. To customize pipeline stages, a derived class should change the content of the AsyncInferRequestThreadSafeDefault::_pipeline member container, which consists of pairs of tasks and the executors that will run them. The class is recommended for use by plugins as a base class for asynchronous inference request implementations (see the sketch below). More...
 
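A minimal sketch of customizing the pipeline, assuming hypothetical names (MyAsyncRequest, the executor arguments) and a plausible header path; the constructor arguments of the base class should be checked against your Inference Engine version. The only documented element used here is the _pipeline member described above.

    #include <cpp_interfaces/impl/ie_infer_async_request_thread_safe_default.hpp>

    class MyAsyncRequest : public InferenceEngine::AsyncInferRequestThreadSafeDefault {
    public:
        MyAsyncRequest(const InferenceEngine::InferRequestInternal::Ptr& syncRequest,
                       const InferenceEngine::ITaskExecutor::Ptr& requestExecutor,
                       const InferenceEngine::ITaskExecutor::Ptr& waitExecutor,
                       const InferenceEngine::ITaskExecutor::Ptr& callbackExecutor)
            : AsyncInferRequestThreadSafeDefault(syncRequest, requestExecutor, callbackExecutor) {
            // Replace the default single-stage pipeline with two stages;
            // each stage is a {executor, task} pair executed in order.
            _pipeline = {
                {requestExecutor, [syncRequest] { syncRequest->InferImpl(); }},
                {waitExecutor,    [] { /* e.g. wait for a device to finish */ }}
            };
        }
    };
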
class  AsyncInferRequestThreadSafeInternal
 Wrapper of asynchronous inference request to support thread-safe execution. More...
 
class  Blob
 
class  BlockingDesc
 
class  CNNNetwork
 
class  CompoundBlob
 
class  Core
 
class  CPUStreamsExecutor
 CPU Streams executor implementation. The executor splits the CPU into groups of threads that can be pinned to cores or NUMA nodes. It uses custom threads to pull tasks from a single queue (see the sketch below). More...
 
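A minimal sketch of constructing and using this executor; the Config constructor arguments beyond the name are left at their defaults here, and the header path is an assumption to verify against your installation.

    #include <threading/ie_cpu_streams_executor.hpp>
    #include <memory>

    void run_on_streams() {
        InferenceEngine::IStreamsExecutor::Config config{"MyStreamsExecutor"};
        auto executor = std::make_shared<InferenceEngine::CPUStreamsExecutor>(config);
        executor->run([] {
            // Runs on one of the executor's pinned worker threads.
        });
    }
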
class  Data
 
struct  DataConfig
 
struct  DescriptionBuffer
 A description buffer wrapping StatusCode and ResponseDesc. More...
 
class  ExecutableNetwork
 
class  ExecutableNetworkBase
 Executable network noexcept wrapper which accepts an IExecutableNetworkInternal-derived instance that can throw exceptions. More...
 
class  ExecutableNetworkInternal
 Minimum implementation of the IExecutableNetworkInternal interface. Must not be used as a base class in plugins. As base classes, use ExecutableNetworkThreadSafeDefault or ExecutableNetworkThreadSafeAsyncOnly. More...
 
class  ExecutableNetworkThreadSafeAsyncOnly
 This class describes an executable network thread safe asynchronous only implementation. More...
 
class  ExecutableNetworkThreadSafeDefault
 This class provides an optimal thread-safe default implementation. The class is recommended for use as a base class for Executable Network implementations during plugin development. More...
 
class  ExecutorManager
 This is the global point for getting task executor objects by string id. It is needed so that multiple asynchronous requests get unique executors, which avoids oversubscription. E.g., there are two task executors for the CPU device: one in the FPGA plugin, another in the MKLDNN plugin. Running both in parallel leads to suboptimal CPU usage; it is more efficient to run the corresponding tasks one by one via a single executor (see the sketch below). More...
 
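For illustration, a sketch of fetching a shared executor by device id; the "CPU" id and the header path are assumptions.

    #include <threading/ie_executor_manager.hpp>

    void share_cpu_executor() {
        auto* manager = InferenceEngine::ExecutorManager::getInstance();
        InferenceEngine::ITaskExecutor::Ptr cpuExecutor = manager->getExecutor("CPU");
        // Tasks submitted here serialize with other users of the "CPU" executor.
        cpuExecutor->run([] { /* device-related work */ });
    }
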
class  Extension
 
class  GeneralError
 
class  I420Blob
 
interface  IAllocator
 
interface  IAsyncInferRequestInternal
 An internal API of an asynchronous inference request to be implemented by a plugin, which is used in the InferRequestBase forwarding mechanism. More...
 
interface  ICNNNetwork
 
interface  ICore
 Minimal ICore interface to allow plugin to get information from Core Inference Engine class. More...
 
class  IExecutableNetwork
 
interface  IExecutableNetworkInternal
 An internal API of an executable network to be implemented by a plugin, which is used in the ExecutableNetworkBase forwarding mechanism. More...
 
class  IExtension
 
interface  IInferencePlugin
 An API to be implemented by a plugin. More...
 
class  IInferRequest
 
interface  IInferRequestInternal
 An internal API of a synchronous inference request to be implemented by a plugin, which is used in the InferRequestBase forwarding mechanism. More...
 
interface  ILayerExecImpl
 
interface  ILayerImpl
 
interface  IMemoryState
 
class  ImmediateExecutor
 Task executor implementation that simply runs tasks in the current thread when the run() method is called. More...
 
struct  InferenceEngineProfileInfo
 
class  InferencePluginInternal
 Optimal implementation of IInferencePlugin interface to avoid duplication in all plugins. More...
 
class  InferNotStarted
 
class  InferRequest
 
class  InferRequestBase
 Inference request noexcept wrapper which accepts an IAsyncInferRequestInternal-derived instance that can throw exceptions. More...
 
class  InferRequestInternal
 An optimal implementation of the IInferRequestInternal interface to avoid duplication in all plugins. This base class is recommended for use as a base class for plugin synchronous inference request implementations. More...
 
class  InputInfo
 
interface  IStreamsExecutor
 Interface for Streams Task Executor. This executor groups worker threads into so-called streams. More...
 
interface  ITaskExecutor
 Interface for Task Executor. Inference Engine uses the InferenceEngine::ITaskExecutor interface to run all asynchronous internal tasks. Different implementations of task executors can be used for different purposes (a sketch of a custom implementation follows). More...
 
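A minimal sketch of a custom implementation; the header path is an assumption. It mirrors what ImmediateExecutor does.

    #include <threading/ie_itask_executor.hpp>

    class InlineExecutor : public InferenceEngine::ITaskExecutor {
    public:
        void run(InferenceEngine::Task task) override {
            task();  // execute the task synchronously on the caller's thread
        }
    };
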
struct  LayerConfig
 
class  LockedMemory
 
class  LockedMemory< const T >
 
class  LockedMemory< void >
 
class  MemoryBlob
 
class  MemoryState
 
class  NetworkNotLoaded
 
class  NotAllocated
 
class  NotFound
 
class  NotImplemented
 
class  NV12Blob
 
class  OutOfBounds
 
class  Parameter
 
class  ParameterMismatch
 
class  Precision
 
struct  PrecisionTrait
 
struct  PreProcessChannel
 
class  PreProcessInfo
 
struct  QueryNetworkResult
 
struct  ReleaseProcessMaskDeleter
 Deleter for process mask. More...
 
class  RemoteBlob
 
class  RemoteContext
 
class  RequestBusy
 
struct  ResponseDesc
 
class  ResultNotReady
 
struct  ROI
 
class  TBlob
 
class  TensorDesc
 
class  Unexpected
 
union  UserValue
 
struct  Version
 

Typedefs

using ExportMagic = std::array< char, 4 >
 Type of magic value.
 
using ie_fp16 = short
 A type definition for the FP16 data type. Defined as a signed short.
 
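A short sketch of converting to and from ie_fp16 via the PrecisionUtils namespace listed above; the precision_utils.h header name is an assumption.

    #include <precision_utils.h>

    InferenceEngine::ie_fp16 half = InferenceEngine::PrecisionUtils::f32tof16(1.5f);
    float restored = InferenceEngine::PrecisionUtils::f16tof32(half);
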
using Task = std::function< void()>
 The Inference Engine Task Executor can use any copyable callable without parameters or a return value as a task. It is wrapped into an std::function object.
 
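For example, any parameterless copyable callable converts to a Task:

    InferenceEngine::Task task = [] { /* some work */ };
    task();  // Task is a std::function<void()>, so it can also be invoked directly
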
using CpuSet = std::unique_ptr< cpu_set_t, ReleaseProcessMaskDeleter >
 A unique pointer to a CPU set structure with the ReleaseProcessMaskDeleter deleter.
 
template<typename T >
using ThreadLocal = tbb::enumerable_thread_specific< T >
 A wrapper class to keep an object thread-local. More...
 
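A sketch of per-thread storage with this alias (available when Inference Engine is built with TBB threading):

    InferenceEngine::ThreadLocal<int> perThreadCounter;

    void count() {
        perThreadCounter.local() += 1;  // each thread increments its own copy
    }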

Functions

void blob_copy (Blob::Ptr src, Blob::Ptr dst)
 Copies data, taking layout and precision parameters into account. More...
 
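A sketch of repacking a float NCHW blob into NHWC; the blob_transform.hpp header name is an assumption.

    #include <blob_transform.hpp>

    void repack_to_nhwc(const InferenceEngine::Blob::Ptr& src) {
        InferenceEngine::TensorDesc dstDesc(InferenceEngine::Precision::FP32,
                                            src->getTensorDesc().getDims(),
                                            InferenceEngine::Layout::NHWC);
        auto dst = InferenceEngine::make_shared_blob<float>(dstDesc);
        dst->allocate();
        InferenceEngine::blob_copy(src, dst);  // performs the layout conversion
    }
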
template<class T >
InferenceEngine::ExecutableNetwork make_executable_network (std::shared_ptr< T > impl)
 
static void copyPreProcess (const PreProcessInfo &from, PreProcessInfo &to)
 Copies preprocess info. More...
 
static void copyInputOutputInfo (const InputsDataMap &networkInputs, const OutputsDataMap &networkOutputs, InputsDataMap &_networkInputs, OutputsDataMap &_networkOutputs)
 Copies InputInfo and output Data. More...
 
std::string getIELibraryPath ()
 Returns a path to Inference Engine library. More...
 
inline ::FileUtils::FilePath getInferenceEngineLibraryPath ()
 
std::exception_ptr & CurrentException ()
 Provides a reference to a static thread_local std::exception_ptr. More...
 
bool checkOpenMpEnvVars (bool includeOMPNumThreads=true)
 Checks whether OpenMP environment variables are defined. More...
 
std::vector< int > getAvailableNUMANodes ()
 Returns available CPU NUMA nodes (on Linux and Windows [only with TBB]; a single node is assumed on all other OSes). More...
 
int getNumberOfCPUCores ()
 Returns the number of physical CPU cores on Linux/Windows (which is considered more performance-friendly for servers); on other OSes it simply relies on the parallel API of choice, which usually uses the logical cores. More...
 
bool with_cpu_x86_sse42 ()
 Checks whether CPU supports SSE 4.2 capability. More...
 
bool with_cpu_x86_avx ()
 Checks whether CPU supports AVX capability. More...
 
bool with_cpu_x86_avx2 ()
 Checks whether CPU supports AVX2 capability. More...
 
bool with_cpu_x86_avx512f ()
 Checks whether CPU supports the AVX-512 Foundation (AVX512F) capability. More...
 
bool with_cpu_x86_avx512_core ()
 Checks whether CPU supports the AVX-512 Core capability. More...
 
bool with_cpu_x86_bfloat16 ()
 Checks whether CPU supports BFloat16 capability. More...
 
void ReleaseProcessMask (cpu_set_t *mask)
 Release the cores affinity mask for the current process. More...
 
std::tuple< CpuSet, int > GetProcessMask ()
 Get the cores affinity mask for the current process. More...
 
bool PinThreadToVacantCore (int thrIdx, int hyperThreads, int ncores, const CpuSet &processMask)
 Pins the current thread to a spare core in the round-robin scheme, while respecting the given process mask. The function can also handle hyper-threading (by populating the physical cores first). More...
 
bool PinCurrentThreadByMask (int ncores, const CpuSet &processMask)
 Pins the current thread to a set of cores determined by the mask. More...
 
bool PinCurrentThreadToSocket (int socket)
 Pins the current thread to a socket. More...
 
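A sketch combining the affinity helpers above (Linux-only, since cpu_set_t is a Linux type; the header path is an assumption):

    #include <threading/ie_thread_affinity.hpp>
    #include <tuple>

    void pin_current_thread() {
        InferenceEngine::CpuSet processMask;
        int ncores = 0;
        std::tie(processMask, ncores) = InferenceEngine::GetProcessMask();
        if (processMask) {  // mask may be empty if it could not be obtained
            InferenceEngine::PinCurrentThreadByMask(ncores, processMask);
        }
    }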

Variables

constexpr static const ExportMagic exportMagic = {{0x1, 0xE, 0xE, 0x1}}
 Magic number used by the Inference Engine Core to identify an exported network accompanied by a plugin name (see the sketch below).
 
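A sketch of how an import routine might check for this prefix; the stream handling around it is illustrative.

    #include <istream>

    bool starts_with_export_magic(std::istream& stream) {
        InferenceEngine::ExportMagic magic{};
        stream.read(magic.data(), magic.size());
        const bool matches = (magic == InferenceEngine::exportMagic);
        stream.seekg(0);  // rewind so the actual importer can re-read the header
        return matches;
    }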

Detailed Description

Inference Engine Plugin API namespace.

Function Documentation

◆ copyInputOutputInfo()

static void InferenceEngine::copyInputOutputInfo (const InputsDataMap & networkInputs,
                                                  const OutputsDataMap & networkOutputs,
                                                  InputsDataMap & _networkInputs,
                                                  OutputsDataMap & _networkOutputs)

Copies InputInfo and output Data.

Parameters
    [in]  networkInputs    The network inputs to copy from
    [in]  networkOutputs   The network outputs to copy from
          _networkInputs   The network inputs to copy to
          _networkOutputs  The network outputs to copy to
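
A sketch of typical usage inside a plugin, caching private copies of a network's I/O maps; the getInputsInfo/getOutputsInfo calls follow the ICNNNetwork interface.

    void cache_io_info(const InferenceEngine::ICNNNetwork& network) {
        InferenceEngine::InputsDataMap inputs;
        InferenceEngine::OutputsDataMap outputs;
        network.getInputsInfo(inputs);
        network.getOutputsInfo(outputs);

        InferenceEngine::InputsDataMap inputsCopy;
        InferenceEngine::OutputsDataMap outputsCopy;
        InferenceEngine::copyInputOutputInfo(inputs, outputs, inputsCopy, outputsCopy);
    }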

◆ copyPreProcess()

static void InferenceEngine::copyPreProcess (const PreProcessInfo & from,
                                             PreProcessInfo & to)

Copies preprocess info.

Parameters
    [in]  from  PreProcessInfo to copy from
          to    PreProcessInfo to copy to
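
For example, a sketch of cloning the preprocessing settings from one InputInfo into another; src and dst are illustrative names.

    void clone_preprocess(const InferenceEngine::InputInfo& src, InferenceEngine::InputInfo& dst) {
        InferenceEngine::copyPreProcess(src.getPreProcess(), dst.getPreProcess());
    }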

◆ CurrentException()

std::exception_ptr& InferenceEngine::CurrentException ( )

Provides a reference to a static thread_local std::exception_ptr.

Returns
An exception pointer
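
A sketch of the intended pattern: capture an exception thrown inside an asynchronous task into the thread-local slot so it can be rethrown on the caller's side; runGuarded is an illustrative helper.

    #include <exception>

    void runGuarded(const InferenceEngine::Task& task) {
        try {
            task();
        } catch (...) {
            // Store the in-flight exception for later rethrow via std::rethrow_exception.
            InferenceEngine::CurrentException() = std::current_exception();
        }
    }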