Namespaces | Data Structures | Typedefs | Enumerations | Functions | Variables
InferenceEngine Namespace Reference

Inference Engine Plugin API namespace. More...

Namespaces

 details
 A namespace with non-public Inference Engine Plugin API.
 
 PluginConfigInternalParams
 A namespace with internal plugin configuration keys.
 
 PrecisionUtils
 Namespace for precision utilities.
 

Data Structures

class  BatchedBlob
 
class  Blob
 
class  BlockingDesc
 
class  CNNNetwork
 
class  CompoundBlob
 
class  Core
 
class  Data
 
struct  DataConfig
 
class  ExecutableNetwork
 
class  Extension
 
class  GeneralError
 
class  I420Blob
 
interface  IAllocator
 
interface  ICNNNetwork
 
class  IExecutableNetwork
 
class  IExtension
 
class  IInferRequest
 
interface  ILayerExecImpl
 
interface  ILayerImpl
 
class  InferCancelled
 
struct  InferenceEngineProfileInfo
 
class  InferNotStarted
 
class  InferRequest
 
class  InputInfo
 
interface  IVariableState
 
struct  LayerConfig
 
class  LockedMemory
 
class  LockedMemory< const T >
 
class  LockedMemory< void >
 
class  MemoryBlob
 
class  NetworkNotLoaded
 
class  NetworkNotRead
 
class  NotAllocated
 
class  NotFound
 
class  NotImplemented
 
class  NV12Blob
 
class  OutOfBounds
 
class  Parameter
 
class  ParameterMismatch
 
class  Precision
 
struct  PrecisionTrait
 
struct  PreProcessChannel
 
class  PreProcessInfo
 
struct  QueryNetworkResult
 
class  RemoteBlob
 
class  RemoteContext
 
class  RequestBusy
 
struct  ResponseDesc
 
class  ResultNotReady
 
struct  ROI
 
class  TBlob
 
class  TensorDesc
 
class  Unexpected
 
union  UserValue
 
class  VariableState
 
struct  Version
 
class  ExecutableNetworkBase
 An executable network noexcept wrapper that accepts an IExecutableNetworkInternal-derived instance, which may throw exceptions. More...
 
class  InferRequestBase
 An inference request noexcept wrapper that accepts an IAsyncInferRequestInternal-derived instance, which may throw exceptions. More...
 
class  VariableStateBase
 Default implementation for IVariableState. More...
 
class  ExecutableNetworkInternal
 Minimum implementation of the IExecutableNetworkInternal interface. Must not be used as a base class in plugins; as base classes, use ExecutableNetworkThreadSafeDefault or ExecutableNetworkThreadSafeAsyncOnly. More...
 
class  ExecutableNetworkThreadSafeAsyncOnly
 This class describes an executable network thread safe asynchronous only implementation. More...
 
class  ExecutableNetworkThreadSafeDefault
 This class provides an optimal thread-safe default implementation. It is recommended as a base class for an executable network implementation during plugin development. More...
 
class  AsyncInferRequestInternal
 The minimum API to be implemented by a plugin; it is used in the InferRequestBase forwarding mechanism. More...
 
class  AsyncInferRequestThreadSafeDefault
 Base class with a default implementation of an asynchronous multi-staged inference request. To customize pipeline stages, a derived class should change the content of the AsyncInferRequestThreadSafeDefault::_pipeline member container, which consists of pairs of tasks and the executors that run them. The class is recommended as a base class for a plugin's asynchronous inference request implementation; see the sketch after this list. More...
 
class  InferRequestInternal
 An optimal implementation of the IInferRequestInternal interface that avoids duplication across plugins. This base class is recommended for a plugin's synchronous inference request implementation. More...
 
class  InferencePluginInternal
 An optimal implementation of the IInferencePlugin interface that avoids duplication across plugins. More...
 
class  VariableStateInternal
 Minimal interface for variable state implementation. More...
 
interface  IExecutableNetworkInternal
 An internal API of an executable network to be implemented by a plugin; it is used in the ExecutableNetworkBase forwarding mechanism. More...
 
interface  IAsyncInferRequestInternal
 An internal API of an asynchronous inference request to be implemented by a plugin; it is used in the InferRequestBase forwarding mechanism. More...
 
interface  IInferRequestInternal
 An internal API of a synchronous inference request to be implemented by a plugin; it is used in the InferRequestBase forwarding mechanism. More...
 
interface  IInferencePlugin
 An API to be implemented by a plugin. More...
 
interface  IVariableStateInternal
 Minimal interface for variable state implementation. More...
 
struct  DescriptionBuffer
 A description buffer wrapping StatusCode and ResponseDesc. More...
 
interface  ICore
 Minimal ICore interface that allows a plugin to get information from the Core Inference Engine class. More...
 
class  CPUStreamsExecutor
 CPU streams executor implementation. The executor splits the CPU into groups of threads that can be pinned to cores or NUMA nodes. It uses custom threads to pull tasks from a single queue. More...
 
class  ExecutorManager
 This is a global point for getting task executor objects by string id. It is needed so that multiple asynchronous requests obtain unique executors, avoiding oversubscription. For example, suppose there are two task executors for the CPU device: one in FPGA, another in MKLDNN. Running both in parallel leads to suboptimal CPU usage; it is more efficient to run the corresponding tasks one by one via a single executor (see the sketch after this list). More...
 
class  ImmediateExecutor
 Task executor implementation that simply runs tasks in the current thread when the run() method is called. More...
 
interface  IStreamsExecutor
 Interface for Streams Task Executor. This executor groups worker threads into so-called streams. More...
 
interface  ITaskExecutor
 Interface for Task Executor. Inference Engine uses the InferenceEngine::ITaskExecutor interface to run all asynchronous internal tasks. Different implementations of task executors can be used for different purposes. More...
 
struct  ReleaseProcessMaskDeleter
 Deleter for process mask. More...
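
A minimal sketch of the pipeline customization described for AsyncInferRequestThreadSafeDefault above. MyAsyncInferRequest, the extra waitExecutor argument, and the stage bodies are hypothetical, and the exact base-class constructor signature and header path may differ between releases:

    #include <cpp_interfaces/impl/ie_infer_async_request_thread_safe_default.hpp>

    class MyAsyncInferRequest : public InferenceEngine::AsyncInferRequestThreadSafeDefault {
    public:
        MyAsyncInferRequest(const InferenceEngine::InferRequestInternal::Ptr& request,
                            const InferenceEngine::ITaskExecutor::Ptr& taskExecutor,
                            const InferenceEngine::ITaskExecutor::Ptr& waitExecutor,
                            const InferenceEngine::ITaskExecutor::Ptr& callbackExecutor)
            : AsyncInferRequestThreadSafeDefault(request, taskExecutor, callbackExecutor) {
            // Each stage of _pipeline is an {executor, task} pair; stages run in order.
            _pipeline = {
                {taskExecutor, [request] { request->Infer(); }},         // run the synchronous request
                {waitExecutor, [] { /* wait for device completion */ }}  // hypothetical wait stage
            };
        }
    };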
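A short usage sketch for ExecutorManager, assuming the behavior described above: requests with the same string id share a single executor, which avoids oversubscribing the CPU:

    // Obtain the shared "CPU" executor instead of creating a private one.
    auto* manager = InferenceEngine::ExecutorManager::getInstance();
    InferenceEngine::ITaskExecutor::Ptr cpuExecutor = manager->getExecutor("CPU");
    cpuExecutor->run([] {
        // asynchronous task body runs on the shared executor
    });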
 

Typedefs

typedef VariableState MemoryState
 
typedef void * gpu_handle_param
 
typedef std::map< std::string, Blob::Ptr > BlobMap
 
typedef std::vector< size_t > SizeVector
 
typedef std::shared_ptr< Data > DataPtr
 
typedef std::shared_ptr< const Data > CDataPtr
 
typedef std::weak_ptr< Data > DataWeakPtr
 
typedef std::map< std::string, DataPtr > OutputsDataMap
 
typedef std::map< std::string, CDataPtr > ConstOutputsDataMap
 
typedef std::shared_ptr< IExtension > IExtensionPtr
 
typedef IVariableState IMemoryState
 
typedef std::map< std::string, InputInfo::Ptr > InputsDataMap
 
typedef std::map< std::string, InputInfo::CPtr > ConstInputsDataMap
 
typedef std::map< std::string, Parameter > ParamMap
 
using MemoryStateInternal = VariableStateInternal
 For compatibility reasons.
 
using IMemoryStateInternal = IVariableStateInternal
 For compatibility reasons.
 
using ExportMagic = std::array< char, 4 >
 Type of magic value.
 
using ie_fp16 = short
 A type definition for the FP16 data type. Defined as a signed short.
 
using Task = std::function< void()>
 An Inference Engine task executor can use any copyable callable with no parameters and no return value as a task. It is wrapped into a std::function object (see the example after this list).
 
using CpuSet = std::unique_ptr< cpu_set_t, ReleaseProcessMaskDeleter >
 A unique pointer to CPU set structure with the ReleaseProcessMaskDeleter deleter.
 
template<typename T >
using ThreadLocal = tbb::enumerable_thread_specific< T >
 A wrapper class that keeps an object thread-local. More...
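
A minimal example of the Task type described above: any copyable callable with no parameters and no return value qualifies, and ImmediateExecutor runs it synchronously inside run():

    InferenceEngine::Task task = [] {
        // task body: no parameters, no return value
    };
    InferenceEngine::ImmediateExecutor executor;
    executor.run(task);  // executes the task in the calling thread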
 

Enumerations

enum  LockOp
 
enum  Layout
 
enum  ColorFormat
 
enum  StatusCode
 
enum  MeanVariant
 
enum  ResizeAlgorithm
 

Functions

std::shared_ptr< T > make_so_pointer (const std::string &name)=delete
 
InferenceEngine::IAllocator * CreateDefaultAllocator () noexcept
 
std::shared_ptr< T > as (const Blob::Ptr &blob) noexcept
 
std::shared_ptr< const T > as (const Blob::CPtr &blob) noexcept
 
InferenceEngine::TBlob< Type >::Ptr make_shared_blob (const TensorDesc &tensorDesc)
 
InferenceEngine::TBlob< Type >::Ptr make_shared_blob (const TensorDesc &tensorDesc, Type *ptr, size_t size=0)
 
InferenceEngine::TBlob< Type >::Ptr make_shared_blob (const TensorDesc &tensorDesc, const std::shared_ptr< InferenceEngine::IAllocator > &alloc)
 
InferenceEngine::TBlob< TypeTo >::Ptr make_shared_blob (const TBlob< TypeTo > &arg)
 
std::shared_ptr< T > make_shared_blob (Args &&... args)
 
Blob::Ptr make_shared_blob (const Blob::Ptr &inputBlob, const ROI &roi)
 
std::ostream & operator<< (std::ostream &out, const Layout &p)
 
std::ostream & operator<< (std::ostream &out, const ColorFormat &fmt)
 
StatusCode CreateExtension (IExtension *&ext, ResponseDesc *resp) noexcept
 
TensorDesc make_roi_desc (const TensorDesc &origDesc, const ROI &roi, bool useOrigMemDesc)
 
RemoteBlob::Ptr make_shared_blob (const TensorDesc &desc, RemoteContext::Ptr ctx)
 
void LowLatency (InferenceEngine::CNNNetwork &network)
 
std::string fileNameToString (const file_name_t &str)
 
file_name_t stringToFileName (const std::string &str)
 
const Version * GetInferenceEngineVersion () noexcept
 
void blob_copy (Blob::Ptr src, Blob::Ptr dst)
 Copies data, taking layout and precision parameters into account (see the example after this list). More...
 
template<class T >
InferenceEngine::ExecutableNetwork make_executable_network (std::shared_ptr< T > impl)
 Creates an executable network public C++ object wrapper based on an internal implementation. More...
 
static void copyPreProcess (const PreProcessInfo &from, PreProcessInfo &to)
 Copies preprocess info. More...
 
void copyInputOutputInfo (const InputsDataMap &networkInputs, const OutputsDataMap &networkOutputs, InputsDataMap &_networkInputs, OutputsDataMap &_networkOutputs)
 Copies InputInfo and output Data. More...
 
std::string getIELibraryPath ()
 Returns a path to Inference Engine library. More...
 
inline ::FileUtils::FilePath getInferenceEngineLibraryPath ()
 
std::exception_ptr & CurrentException ()
 Provides the reference to static thread_local std::exception_ptr. More...
 
bool checkOpenMpEnvVars (bool includeOMPNumThreads=true)
 Checks whether OpenMP environment variables are defined. More...
 
std::vector< int > getAvailableNUMANodes ()
 Returns available CPU NUMA nodes (on Linux and Windows [only with TBB]; a single node is assumed on all other OSes). More...
 
int getNumberOfCPUCores ()
 Returns the number of physical CPU cores on Linux/Windows (considered more performance-friendly for servers); on other OSes it relies on the parallel API of choice, which usually uses logical cores (see the example after this list). More...
 
bool with_cpu_x86_sse42 ()
 Checks whether CPU supports SSE 4.2 capability. More...
 
bool with_cpu_x86_avx ()
 Checks whether CPU supports AVX capability. More...
 
bool with_cpu_x86_avx2 ()
 Checks whether CPU supports AVX2 capability. More...
 
bool with_cpu_x86_avx512f ()
 Checks whether CPU supports the AVX-512 Foundation capability. More...
 
bool with_cpu_x86_avx512_core ()
 Checks whether CPU supports the AVX-512 Core capability. More...
 
bool with_cpu_x86_bfloat16 ()
 Checks whether CPU supports BFloat16 capability. More...
 
void ReleaseProcessMask (cpu_set_t *mask)
 Releases the cores affinity mask for the current process. More...
 
std::tuple< CpuSet, int > GetProcessMask ()
 Gets the cores affinity mask for the current process. More...
 
bool PinThreadToVacantCore (int thrIdx, int hyperThreads, int ncores, const CpuSet &processMask)
 Pins the current thread to a set of cores determined by the mask. More...
 
bool PinCurrentThreadByMask (int ncores, const CpuSet &processMask)
 Pins a thread to a spare core in a round-robin scheme, while respecting the given process mask. The function can also handle hyper-threading (by populating physical cores first). More...
 
bool PinCurrentThreadToSocket (int socket)
 Pins the current thread to a socket. More...
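
A small example combining make_shared_blob and blob_copy from the list above: two FP32 blobs with the same dimensions but different layouts are created, and blob_copy reorders the data between them according to layout:

    InferenceEngine::TensorDesc srcDesc(InferenceEngine::Precision::FP32,
                                        {1, 3, 224, 224}, InferenceEngine::Layout::NCHW);
    InferenceEngine::TensorDesc dstDesc(InferenceEngine::Precision::FP32,
                                        {1, 3, 224, 224}, InferenceEngine::Layout::NHWC);

    InferenceEngine::Blob::Ptr src = InferenceEngine::make_shared_blob<float>(srcDesc);
    InferenceEngine::Blob::Ptr dst = InferenceEngine::make_shared_blob<float>(dstDesc);
    src->allocate();
    dst->allocate();

    InferenceEngine::blob_copy(src, dst);  // layout-aware copy, NCHW -> NHWC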
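A sketch using the CPU introspection helpers above, e.g. to pick a kernel variant or a default number of streams (the dispatch branches are illustrative):

    int physicalCores = InferenceEngine::getNumberOfCPUCores();
    std::vector<int> numaNodes = InferenceEngine::getAvailableNUMANodes();

    if (InferenceEngine::with_cpu_x86_avx512f()) {
        // dispatch to an AVX-512 kernel
    } else if (InferenceEngine::with_cpu_x86_avx2()) {
        // dispatch to an AVX2 kernel
    } else {
        // fall back to SSE4.2 or generic code
    }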
 

Variables

 LOCK_FOR_READ
 
 LOCK_FOR_WRITE
 
 ANY
 
 NCHW
 
 NHWC
 
 NCDHW
 
 NDHWC
 
 OIHW
 
 GOIHW
 
 OIDHW
 
 GOIDHW
 
 SCALAR
 
 C
 
 CHW
 
 HWC
 
 HW
 
 NC
 
 CN
 
 BLOCKED
 
 RAW
 
 RGB
 
 BGR
 
 RGBX
 
 BGRX
 
 NV12
 
 I420
 
 MEAN_IMAGE
 
 MEAN_VALUE
 
 NONE
 
static constexpr auto HDDL_GRAPH_TAG
 
static constexpr auto HDDL_STREAM_ID
 
static constexpr auto HDDL_DEVICE_TAG
 
static constexpr auto HDDL_BIND_DEVICE
 
static constexpr auto HDDL_RUNTIME_PRIORITY
 
static constexpr auto HDDL_USE_SGAD
 
static constexpr auto HDDL_GROUP_DEVICE
 
static constexpr auto MYRIAD_ENABLE_FORCE_RESET
 
static constexpr auto MYRIAD_DDR_TYPE
 
static constexpr auto MYRIAD_DDR_AUTO
 
static constexpr auto MYRIAD_PROTOCOL
 
static constexpr auto MYRIAD_PCIE
 
static constexpr auto MYRIAD_THROUGHPUT_STREAMS
 
static constexpr auto MYRIAD_ENABLE_HW_ACCELERATION
 
static constexpr auto MYRIAD_ENABLE_RECEIVING_TENSOR_TIME
 
static constexpr auto MYRIAD_CUSTOM_LAYERS
 
constexpr static const ExportMagic exportMagic = {{0x1, 0xE, 0xE, 0x1}}
 Magic number used by the Inference Engine Core to identify an exported network with a plugin name.
 

Detailed Description

Inference Engine Plugin API namespace.

Function Documentation

◆ copyInputOutputInfo()

void InferenceEngine::copyInputOutputInfo ( const InputsDataMap &networkInputs,
                                            const OutputsDataMap &networkOutputs,
                                            InputsDataMap &_networkInputs,
                                            OutputsDataMap &_networkOutputs )
inline

Copies InputInfo and output Data.

Parameters
    [in]  networkInputs    The network inputs to copy from
    [in]  networkOutputs   The network outputs to copy from
          _networkInputs   The network inputs to copy to
          _networkOutputs  The network outputs to copy to
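
A hedged usage sketch: copying a network's I/O metadata into a plugin's own maps (network here is an assumed InferenceEngine::CNNNetwork instance):

    InferenceEngine::InputsDataMap pluginInputs;
    InferenceEngine::OutputsDataMap pluginOutputs;
    InferenceEngine::copyInputOutputInfo(network.getInputsInfo(), network.getOutputsInfo(),
                                         pluginInputs, pluginOutputs);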

◆ copyPreProcess()

static void InferenceEngine::copyPreProcess ( const PreProcessInfo &from,
                                              PreProcessInfo &to )
static

Copies preprocess info.

Parameters
    [in]  from  PreProcessInfo to copy from
          to    PreProcessInfo to copy to
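
A hedged usage sketch (inputInfo is an assumed InferenceEngine::InputInfo::Ptr):

    InferenceEngine::PreProcessInfo copy;
    InferenceEngine::copyPreProcess(inputInfo->getPreProcess(), copy);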

◆ CurrentException()

std::exception_ptr& InferenceEngine::CurrentException ( )

Provides the reference to static thread_local std::exception_ptr.

Returns
An exception pointer
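
A hedged usage sketch of the thread-local slot: a plugin could stash the in-flight exception and a noexcept wrapper layer could rethrow it later on the same thread (the intended usage pattern is an assumption here):

    #include <exception>

    try {
        // plugin code that may throw
    } catch (...) {
        InferenceEngine::CurrentException() = std::current_exception();
    }

    // later, on the same thread:
    std::exception_ptr &slot = InferenceEngine::CurrentException();
    if (slot) {
        std::rethrow_exception(slot);
    }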