Namespaces | Data Structures | Typedefs | Enumerations | Functions | Variables
InferenceEngine Namespace Reference

Inference Engine C++ API. More...

Namespaces

 CLDNNConfigParams
 GPU plugin configuration.
 
 GNAConfigParams
 GNA plugin configuration.
 
 HeteroConfigParams
 Heterogeneous plugin configuration.
 
 Metrics
 Keys for metrics that can be requested from devices and executable networks.
 
 MultiDeviceConfigParams
 Multi Device plugin configuration.
 
 PluginConfigParams
 Generic plugin configuration.
 
 VPUConfigParams
 VPU plugin configuration.
 

Data Structures

class  CNNNetwork
 This class contains all the information about the Neural Network and the related binary information. More...
 
class  ExecutableNetwork
 Wrapper over IExecutableNetwork. More...
 
class  InferRequest
 This is an interface of asynchronous infer request. More...
 
class  VariableState
 C++ exception-based error reporting wrapper of the API class IVariableState. More...
 
interface  IAllocator
 Allocator concept to be used for memory management as part of the Blob. More...
 
class  Blob
 This class represents a universal container in the Inference Engine. More...
 
class  MemoryBlob
 This class implements a container object that represents a tensor in memory (host and remote/accelerated). More...
 
class  TBlob
 Represents real host memory allocated for a Tensor/Blob per C type. More...
 
union  UserValue
 Holds the user values to enable binding of data per graph node. More...
 
struct  InferenceEngineProfileInfo
 Represents basic inference profiling information per layer. More...
 
struct  ResponseDesc
 Represents detailed information for an error. More...
 
struct  QueryNetworkResult
 Response structure encapsulating information about supported layers. More...
 
class  GeneralError
 This class represents StatusCode::GENERIC_ERROR exception. More...
 
class  NotImplemented
 This class represents StatusCode::NOT_IMPLEMENTED exception. More...
 
class  NetworkNotLoaded
 This class represents StatusCode::NETWORK_NOT_LOADED exception. More...
 
class  ParameterMismatch
 This class represents StatusCode::PARAMETER_MISMATCH exception. More...
 
class  NotFound
 This class represents StatusCode::NOT_FOUND exception. More...
 
class  OutOfBounds
 This class represents StatusCode::OUT_OF_BOUNDS exception. More...
 
class  Unexpected
 This class represents StatusCode::UNEXPECTED exception. More...
 
class  RequestBusy
 This class represents StatusCode::REQUEST_BUSY exception. More...
 
class  ResultNotReady
 This class represents StatusCode::RESULT_NOT_READY exception. More...
 
class  NotAllocated
 This class represents StatusCode::NOT_ALLOCATED exception. More...
 
class  InferNotStarted
 This class represents StatusCode::INFER_NOT_STARTED exception. More...
 
class  NetworkNotRead
 This class represents StatusCode::NETWORK_NOT_READ exception. More...
 
class  InferCancelled
 This class represents StatusCode::INFER_CANCELLED exception. More...
 
class  CompoundBlob
 This class represents a blob that contains other blobs. More...
 
class  NV12Blob
 Represents a blob that contains two planes (Y and UV) in NV12 color format. More...
 
class  I420Blob
 Represents a blob that contains three planes (Y,U and V) in I420 color format. More...
 
class  BatchedBlob
 This class represents a blob that contains other blobs - one per batch. More...
 
class  Core
 This class represents Inference Engine Core entity. More...
 
class  Data
 This class represents the main Data representation node. More...
 
class  Extension
 This class is a C++ helper to work with objects created using extensions. More...
 
interface  ICNNNetwork
 This is the main interface to describe the NN topology. More...
 
class  IExecutableNetwork
 This is an interface of an executable network. More...
 
struct  DataConfig
 This structure describes data configuration. More...
 
struct  LayerConfig
 This structure describes Layer configuration. More...
 
interface  ILayerImpl
 This class provides an interface for extension implementations. More...
 
interface  ILayerExecImpl
 This class provides an interface for implementations with custom execution code. More...
 
class  IExtension
 This class is the main extension interface. More...
 
class  IInferRequest
 This is an interface of asynchronous infer request. More...
 
interface  IVariableState
 Manages data for reset operations. More...
 
class  InputInfo
 This class contains information about each input of the network. More...
 
class  BlockingDesc
 This class describes blocking layouts. More...
 
class  TensorDesc
 This class defines Tensor description. More...
 
struct  ROI
 This structure describes ROI data for image-like tensors. More...
 
class  LockedMemory
 This class represents locked memory for read/write memory. More...
 
class  LockedMemory< void >
 This class is for <void*> data and allows casting to any pointers. More...
 
class  LockedMemory< const T >
 This class is for read-only segments. More...
 
class  Parameter
 This class represents an object to work with different parameters. More...
 
class  Precision
 This class holds precision value and provides precision related operations. More...
 
struct  PrecisionTrait
 Particular precision traits. More...
 
struct  PreProcessChannel
 This structure stores info about pre-processing of network inputs (scale, mean image, ...). More...
 
class  PreProcessInfo
 This class stores pre-process information for the input. More...
 
class  RemoteBlob
 This class represents an Inference Engine abstraction to the memory allocated on the remote (non-CPU) accelerator device. More...
 
class  RemoteContext
 This class represents an Inference Engine abstraction for remote (non-CPU) accelerator device-specific execution context. Such context represents a scope on the device within which executable networks and remote memory blobs can exist, function and exchange data. More...
 
struct  Version
 Represents version information that describes plugins and the inference engine runtime library. More...
 

Typedefs

using MemoryState = VariableState
 Alias of VariableState kept for compatibility reasons.
 
using gpu_handle_param = void *
 Shortcut for defining a handle parameter.
 
using BlobMap = std::map< std::string, Blob::Ptr >
 This is a convenient type for working with a map containing pairs(string, pointer to a Blob instance).
 
using SizeVector = std::vector< size_t >
 Represents tensor size. More...
 
using DataPtr = std::shared_ptr< Data >
 Smart pointer to Data.
 
using CDataPtr = std::shared_ptr< const Data >
 Smart pointer to constant Data.
 
using DataWeakPtr = std::weak_ptr< Data >
 Smart weak pointer to Data.
 
using OutputsDataMap = std::map< std::string, DataPtr >
 A collection that contains string as key, and Data smart pointer as value.
 
using ConstOutputsDataMap = std::map< std::string, CDataPtr >
 A collection that contains string as key, and const Data smart pointer as value.
 
using IExtensionPtr = std::shared_ptr< IExtension >
 A shared pointer to a IExtension interface.
 
using IMemoryState = IVariableState
 Alias of IVariableState kept for compatibility reasons.
 
using InputsDataMap = std::map< std::string, InputInfo::Ptr >
 A collection that contains string as key, and InputInfo smart pointer as value.
 
using ConstInputsDataMap = std::map< std::string, InputInfo::CPtr >
 A collection that contains string as key, and const InputInfo smart pointer as value.
 
using ParamMap = std::map< std::string, Parameter >
 An std::map object containing low-level object parameters of classes that are derived from RemoteBlob or RemoteContext.
 

Enumerations

enum  LockOp { LOCK_FOR_READ = 0 , LOCK_FOR_WRITE }
 Allocator handle mapping type. More...
 
enum  Layout : uint8_t {
  ANY = 0 , NCHW = 1 , NHWC = 2 , NCDHW = 3 ,
  NDHWC = 4 , OIHW = 64 , GOIHW = 65 , OIDHW = 66 ,
  GOIDHW = 67 , SCALAR = 95 , C = 96 , CHW = 128 ,
  HWC = 129 , HW = 192 , NC = 193 , CN = 194 ,
  BLOCKED = 200
}
 Layouts that the inference engine supports. More...
 
enum  ColorFormat : uint32_t {
  RAW = 0u , RGB , BGR , RGBX ,
  BGRX , NV12 , I420
}
 Extra information about input color format for preprocessing. More...
 
enum  StatusCode : int {
  OK = 0 , GENERAL_ERROR = -1 , NOT_IMPLEMENTED = -2 , NETWORK_NOT_LOADED = -3 ,
  PARAMETER_MISMATCH = -4 , NOT_FOUND = -5 , OUT_OF_BOUNDS = -6 , UNEXPECTED = -7 ,
  REQUEST_BUSY = -8 , RESULT_NOT_READY = -9 , NOT_ALLOCATED = -10 , INFER_NOT_STARTED = -11 ,
  NETWORK_NOT_READ = -12 , INFER_CANCELLED = -13
}
 This enum contains codes for all possible return values of the interface functions.
 
enum  MeanVariant { MEAN_IMAGE , MEAN_VALUE , NONE }
 Defines available types of mean. More...
 
enum  ResizeAlgorithm { NO_RESIZE = 0 , RESIZE_BILINEAR , RESIZE_AREA }
 Represents the list of supported resize algorithms.
 

Functions

template<class T >
std::shared_ptr< T > make_so_pointer (const std::string &name)=delete
 Creates a special shared_pointer wrapper for the given type from a specific shared module. More...
 
template<class T >
std::shared_ptr< T > make_so_pointer (const std::wstring &name)=delete
 
InferenceEngine::IAllocator * CreateDefaultAllocator () noexcept
 Creates the default implementation of the Inference Engine allocator per plugin. More...
 
template<typename T , typename std::enable_if<!std::is_pointer< T >::value &&!std::is_reference< T >::value, int >::type = 0, typename std::enable_if< std::is_base_of< Blob, T >::value, int >::type = 0>
std::shared_ptr< T > as (const Blob::Ptr &blob) noexcept
 Helper cast function to work with shared Blob objects. More...
 
template<typename T , typename std::enable_if<!std::is_pointer< T >::value &&!std::is_reference< T >::value, int >::type = 0, typename std::enable_if< std::is_base_of< Blob, T >::value, int >::type = 0>
std::shared_ptr< const T > as (const Blob::CPtr &blob) noexcept
 Helper cast function to work with shared Blob objects. More...
 
template<typename Type >
InferenceEngine::TBlob< Type >::Ptr make_shared_blob (const TensorDesc &tensorDesc)
 Creates a blob with the given tensor descriptor. More...
 
template<typename Type >
InferenceEngine::TBlob< Type >::Ptr make_shared_blob (const TensorDesc &tensorDesc, Type *ptr, size_t size=0)
 Creates a blob with the given tensor descriptor from the pointer to the pre-allocated memory. More...
 
template<typename Type >
InferenceEngine::TBlob< Type >::Ptr make_shared_blob (const TensorDesc &tensorDesc, const std::shared_ptr< InferenceEngine::IAllocator > &alloc)
 Creates a blob with the given tensor descriptor and allocator. More...
 
template<typename TypeTo >
InferenceEngine::TBlob< TypeTo >::Ptr make_shared_blob (const TBlob< TypeTo > &arg)
 Creates a copy of given TBlob instance. More...
 
template<typename T , typename... Args, typename std::enable_if< std::is_base_of< Blob, T >::value, int >::type = 0>
std::shared_ptr< T > make_shared_blob (Args &&... args)
 Creates a Blob object of the specified type. More...
 
Blob::Ptr make_shared_blob (const Blob::Ptr &inputBlob, const ROI &roi)
 Creates a blob describing given ROI object based on the given blob with pre-allocated memory. More...
 
std::ostream & operator<< (std::ostream &out, const Layout &p)
 Prints a string representation of InferenceEngine::Layout to a stream. More...
 
std::ostream & operator<< (std::ostream &out, const ColorFormat &fmt)
 Prints a string representation of InferenceEngine::ColorFormat to a stream. More...
 
template<>
std::shared_ptr< IExtension > make_so_pointer (const std::string &name)
 Creates a special shared_pointer wrapper for the given type from a specific shared module. More...
 
StatusCode CreateExtension (IExtension *&ext, ResponseDesc *resp) noexcept
 Creates the default instance of the extension. More...
 
TensorDesc make_roi_desc (const TensorDesc &origDesc, const ROI &roi, bool useOrigMemDesc)
 Creates a TensorDesc object for ROI. More...
 
template<typename F >
void parallel_nt (int nthr, const F &func)
 
template<typename F >
void parallel_nt_static (int nthr, const F &func)
 
template<typename I , typename F >
void parallel_sort (I begin, I end, const F &comparator)
 
template<typename T0 , typename R , typename F >
R parallel_sum (const T0 &D0, const R &input, const F &func)
 
template<typename T0 , typename T1 , typename R , typename F >
R parallel_sum2d (const T0 &D0, const T1 &D1, const R &input, const F &func)
 
template<typename T0 , typename T1 , typename T2 , typename R , typename F >
R parallel_sum3d (const T0 &D0, const T1 &D1, const T2 &D2, const R &input, const F &func)
 
template<typename T >
T parallel_it_init (T start)
 
template<typename T , typename Q , typename R , typename... Args>
T parallel_it_init (T start, Q &x, const R &X, Args &&... tuple)
 
bool parallel_it_step ()
 
template<typename Q , typename R , typename... Args>
bool parallel_it_step (Q &x, const R &X, Args &&... tuple)
 
template<typename T , typename Q >
void splitter (const T &n, const Q &team, const Q &tid, T &n_start, T &n_end)
 
template<typename T0 , typename F >
void for_1d (const int &ithr, const int &nthr, const T0 &D0, const F &func)
 
template<typename T0 , typename F >
void parallel_for (const T0 &D0, const F &func)
 
template<typename T0 , typename T1 , typename F >
void for_2d (const int &ithr, const int &nthr, const T0 &D0, const T1 &D1, const F &func)
 
template<typename T0 , typename T1 , typename F >
void parallel_for2d (const T0 &D0, const T1 &D1, const F &func)
 
template<typename T0 , typename T1 , typename T2 , typename F >
void for_3d (const int &ithr, const int &nthr, const T0 &D0, const T1 &D1, const T2 &D2, const F &func)
 
template<typename T0 , typename T1 , typename T2 , typename F >
void parallel_for3d (const T0 &D0, const T1 &D1, const T2 &D2, const F &func)
 
template<typename T0 , typename T1 , typename T2 , typename T3 , typename F >
void for_4d (const int &ithr, const int &nthr, const T0 &D0, const T1 &D1, const T2 &D2, const T3 &D3, const F &func)
 
template<typename T0 , typename T1 , typename T2 , typename T3 , typename F >
void parallel_for4d (const T0 &D0, const T1 &D1, const T2 &D2, const T3 &D3, const F &func)
 
template<typename T0 , typename T1 , typename T2 , typename T3 , typename T4 , typename F >
void for_5d (const int &ithr, const int &nthr, const T0 &D0, const T1 &D1, const T2 &D2, const T3 &D3, const T4 &D4, const F &func)
 
template<typename T0 , typename T1 , typename T2 , typename T3 , typename T4 , typename F >
void parallel_for5d (const T0 &D0, const T1 &D1, const T2 &D2, const T3 &D3, const T4 &D4, const F &func)
 
RemoteBlob::Ptr make_shared_blob (const TensorDesc &desc, RemoteContext::Ptr ctx)
 A wrapper of CreateBlob method of RemoteContext to keep consistency with plugin-specific wrappers. More...
 
void LowLatency (InferenceEngine::CNNNetwork &network)
 The transformation finds all TensorIterator layers in the network, processes all back edges that describe a connection between Result and Parameter of the TensorIterator body, and inserts ReadValue layer between Parameter and the next layers after this Parameter, and Assign layer after the layers before the Result layer. Supported platforms: CPU, GNA. More...
 
std::string fileNameToString (const file_name_t &str)
 Conversion from a possibly-wide character string to a single-byte string. More...
 
file_name_t stringToFileName (const std::string &str)
 Conversion from single-byte character string to a possibly-wide one. More...
 
const Version * GetInferenceEngineVersion () noexcept
 Gets the current Inference Engine version. More...
 

Variables

static constexpr auto HDDL_GRAPH_TAG = "HDDL_GRAPH_TAG"
 [Only for HDDLPlugin] Type: arbitrary non-empty string. If empty (""), equals not set; default: "". This option allows specifying the number of MYX devices used to run inference on a specific executable network. Note: only one network can be allocated to one device. The number of devices for the tag is specified in the hddl_service.config file, for example: "service_settings": { "graph_tag_map": { "tagA": 3 } }. This means that an executable network marked with tagA will be executed on 3 devices.
 
static constexpr auto HDDL_STREAM_ID = "HDDL_STREAM_ID"
 [Only for HDDLPlugin] Type: arbitrary non-empty string. If empty (""), equals not set; default: "". This config makes the executable network be allocated on a single device (instead of multiple devices), and all inference through this executable network is done on that device. Note: only one network can be allocated to one device. The number of devices used for stream affinity must be specified in the hddl_service.config file, for example: "service_settings": { "stream_device_number": 5 }. This means that 5 devices will be used for stream affinity.
 
static constexpr auto HDDL_DEVICE_TAG = "HDDL_DEVICE_TAG"
 [Only for HDDLPlugin] Type: arbitrary non-empty string. If empty (""), equals not set; default: "". This config allows controlling devices flexibly: it assigns a "tag" to a device when a network is allocated to it, and the user can afterwards allocate/deallocate networks to that device using the tag. Devices used in this scenario are controlled by a so-called "Bypass Scheduler" in the HDDL backend, and the number of such devices must be specified in the hddl_service.config file, for example: "service_settings": { "bypass_device_number": 5 }. This means that 5 devices will be used by the Bypass scheduler.
 
static constexpr auto HDDL_BIND_DEVICE = "HDDL_BIND_DEVICE"
 [Only for HDDLPlugin] Type: "YES"/"NO"; default: "NO". This config is a sub-config of DEVICE_TAG and is only available when "DEVICE_TAG" is set. After the user loads a network, they get a handle for it. If "YES", the allocated network is bound to the device with the specified "DEVICE_TAG", which means all subsequent inference through this network handle is executed on that device only. If "NO", the allocated network is not bound to the device; if the same network is also allocated on other devices (likewise with BIND_DEVICE set to "NO"), inference through any handle of these networks may be executed on any device that has the network loaded.
 
static constexpr auto HDDL_RUNTIME_PRIORITY = "HDDL_RUNTIME_PRIORITY"
 [Only for HDDLPlugin] Type: a signed integer wrapped in a string; default: "0". This config is a sub-config of DEVICE_TAG and is only available when "DEVICE_TAG" is set and "BIND_DEVICE" is "NO". When multiple devices run the same network (under the Bypass Scheduler), a device with a larger value has a higher priority and is fed more inference tasks.
 
static constexpr auto HDDL_USE_SGAD = "HDDL_USE_SGAD"
 [Only for HDDLPlugin] Type: "YES"/"NO"; default: "NO". SGAD is short for "Single Graph All Device". With this scheduler, once the application allocates one network, all devices managed by the SGAD scheduler are loaded with this graph; the number of networks loaded on one device can exceed one. Once the application deallocates the network, all devices unload it.
 
static constexpr auto HDDL_GROUP_DEVICE = "HDDL_GROUP_DEVICE"
 [Only for HDDLPlugin] Type: a signed integer wrapped in a string; default: "0". This config assigns a "group id" to a device that has been reserved for a certain client; that client can use the devices in the group by referring to the group id, while other clients cannot. Each device has its own group id, and devices in one group share the same group id.
 
static constexpr auto MYRIAD_ENABLE_FORCE_RESET = "MYRIAD_ENABLE_FORCE_RESET"
 The flag to reset stalled devices. This is a plugin-scope option and must be used with the plugin's SetConfig method. The only possible values are CONFIG_VALUE(YES) and CONFIG_VALUE(NO) (the default).
 
static constexpr auto MYRIAD_DDR_TYPE = "MYRIAD_DDR_TYPE"
 This option allows specifying the device memory type.
 
static constexpr auto MYRIAD_DDR_AUTO = "MYRIAD_DDR_AUTO"
 Supported values for the InferenceEngine::MYRIAD_DDR_TYPE option.
 
static constexpr auto MYRIAD_DDR_MICRON_2GB = "MYRIAD_DDR_MICRON_2GB"
 
static constexpr auto MYRIAD_DDR_SAMSUNG_2GB = "MYRIAD_DDR_SAMSUNG_2GB"
 
static constexpr auto MYRIAD_DDR_HYNIX_2GB = "MYRIAD_DDR_HYNIX_2GB"
 
static constexpr auto MYRIAD_DDR_MICRON_1GB = "MYRIAD_DDR_MICRON_1GB"
 
static constexpr auto MYRIAD_PROTOCOL = "MYRIAD_PROTOCOL"
 This option allows specifying the communication protocol (PCIe or USB).
 
static constexpr auto MYRIAD_PCIE = "MYRIAD_PCIE"
 Supported values for the InferenceEngine::MYRIAD_PROTOCOL option.
 
static constexpr auto MYRIAD_USB = "MYRIAD_USB"
 
static constexpr auto MYRIAD_THROUGHPUT_STREAMS = "MYRIAD_THROUGHPUT_STREAMS"
 Optimizes VPU plugin execution to maximize throughput. This option should be set to an integer value that is the requested number of streams; the only possible values are 1, 2, and 3 (see the usage sketch at the end of this section).
 
static constexpr auto MYRIAD_ENABLE_HW_ACCELERATION = "MYRIAD_ENABLE_HW_ACCELERATION"
 Turns on HW stages usage (applicable to MyriadX devices only). The only possible values are CONFIG_VALUE(YES) (the default) and CONFIG_VALUE(NO).
 
static constexpr auto MYRIAD_ENABLE_RECEIVING_TENSOR_TIME = "MYRIAD_ENABLE_RECEIVING_TENSOR_TIME"
 The flag for adding the time of obtaining a tensor to the profiling information. The only possible values are CONFIG_VALUE(YES) and CONFIG_VALUE(NO) (the default).
 
static constexpr auto MYRIAD_CUSTOM_LAYERS = "MYRIAD_CUSTOM_LAYERS"
 This option allows passing a custom-layer binding XML file. If a layer is present in such an XML, its custom implementation is used during inference even if the layer is natively supported.
 
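Usage sketch (editorial addition, not part of the generated reference): the constants above are plain string keys and values for the configuration map passed to Core::SetConfig. The specific key/value pairs below are illustrative.

    #include <inference_engine.hpp>

    using namespace InferenceEngine;

    void configureMyriad(Core& ie) {
        // Select the USB protocol and request 2 throughput streams for the
        // MYRIAD device. Values are illustrative; valid stream counts are 1, 2, or 3.
        ie.SetConfig({{MYRIAD_PROTOCOL, MYRIAD_USB},
                      {MYRIAD_THROUGHPUT_STREAMS, "2"}},
                     "MYRIAD");
    }
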

Detailed Description

Inference Engine C++ API.

Typedef Documentation

◆ SizeVector

using InferenceEngine::SizeVector = typedef std::vector<size_t>

Represents tensor size.

The order is opposite to the order in Caffe*: (w,h,n,b) where the most frequently changing element in memory is first.

Enumeration Type Documentation

◆ ColorFormat

Extra information about input color format for preprocessing.

Enumerator
RAW 

Plain blob (default), no extra color processing required.

RGB 

RGB color format.

BGR 

BGR color format, default in DLDT.

RGBX 

RGBX color format with X ignored during inference.

BGRX 

BGRX color format with X ignored during inference.

NV12 

NV12 color format represented as compound Y+UV blob.

I420 

I420 color format represented as compound Y+U+V blob.

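Usage sketch (editorial addition, not part of the generated reference): declaring the color format of application input so the plugin performs color conversion during preprocessing. The input lookup assumes a network with at least one input.

    #include <inference_engine.hpp>

    using namespace InferenceEngine;

    void declareNV12Input(CNNNetwork& network) {
        // Tell the plugin that the application will feed NV12 data;
        // such data is then passed at inference time as an NV12Blob.
        InputInfo::Ptr input = network.getInputsInfo().begin()->second;
        input->getPreProcess().setColorFormat(ColorFormat::NV12);
    }
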
◆ Layout

enum InferenceEngine::Layout : uint8_t

Layouts that the inference engine supports.

Enumerator
ANY 

"any" layout

NCHW 

NCHW layout for input / output blobs.

NHWC 

NHWC layout for input / output blobs.

NCDHW 

NCDHW layout for input / output blobs.

NDHWC 

NDHWC layout for input / output blobs.

OIHW 

OIHW layout for operation weights.

GOIHW 

GOIHW layout for operation weights.

OIDHW 

OIDHW layout for operation weights.

GOIDHW 

GOIDHW layout for operation weights.

SCALAR 

A scalar layout.

C 

A bias layout for operation.

CHW 

A single image layout (e.g. for mean image)

HWC 

A single image layout (e.g. for mean image)

HW 

HW 2D layout.

NC 

NC 2D layout.

CN 

CN 2D layout.

BLOCKED 

A blocked layout.

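For illustration (editorial sketch, not part of the generated reference): layouts are usually consumed through TensorDesc, which combines a precision, dimensions (a SizeVector), and a layout.

    #include <inference_engine.hpp>

    using namespace InferenceEngine;

    // A 4D FP32 tensor description: batch 1, 3 channels, 224x224, NCHW layout.
    TensorDesc desc(Precision::FP32, SizeVector{1, 3, 224, 224}, Layout::NCHW);
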
◆ LockOp

Allocator handle mapping type.

Enumerator
LOCK_FOR_READ 

A flag to lock data for read.

LOCK_FOR_WRITE 

A flag to lock data for write.

◆ MeanVariant

Defines available types of mean.

Enumerator
MEAN_IMAGE 

mean value is specified for each input pixel

MEAN_VALUE 

mean value is specified for each input channel

NONE 

no mean value specified

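Usage sketch (editorial addition; the mean values are illustrative): configuring per-channel mean subtraction, which corresponds to MEAN_VALUE.

    #include <inference_engine.hpp>

    using namespace InferenceEngine;

    void setChannelMeans(InputInfo::Ptr input) {
        PreProcessInfo& pp = input->getPreProcess();
        pp.init(3);                 // three input channels
        pp[0]->meanValue = 104.0f;  // illustrative per-channel means
        pp[1]->meanValue = 117.0f;
        pp[2]->meanValue = 123.0f;
        pp.setVariant(MEAN_VALUE);
    }
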
Function Documentation

◆ as() [1/2]

template<typename T , typename std::enable_if<!std::is_pointer< T >::value &&!std::is_reference< T >::value, int >::type = 0, typename std::enable_if< std::is_base_of< Blob, T >::value, int >::type = 0>
std::shared_ptr<const T> InferenceEngine::as ( const Blob::CPtr &  blob)
noexcept

Helper cast function to work with shared Blob objects.

Parameters
blob: A blob to cast
Returns
shared_ptr to the type const T. Returned shared_ptr shares ownership of the object with the input Blob::Ptr

◆ as() [2/2]

template<typename T , typename std::enable_if<!std::is_pointer< T >::value &&!std::is_reference< T >::value, int >::type = 0, typename std::enable_if< std::is_base_of< Blob, T >::value, int >::type = 0>
std::shared_ptr<T> InferenceEngine::as ( const Blob::Ptr &  blob)
noexcept

Helper cast function to work with shared Blob objects.

Parameters
blob: A blob to cast
Returns
shared_ptr to the type T. Returned shared_ptr shares ownership of the object with the input Blob::Ptr

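Usage sketch (editorial addition, not part of the generated reference): a typical use of as() is downcasting a generic Blob::Ptr to MemoryBlob to get locked read access to host memory.

    #include <inference_engine.hpp>

    using namespace InferenceEngine;

    float firstElement(const Blob::Ptr& blob) {
        MemoryBlob::Ptr mblob = as<MemoryBlob>(blob);
        if (!mblob) return 0.0f;              // not a host-memory blob
        auto holder = mblob->rmap();          // keeps the memory locked while in scope
        return holder.as<const float*>()[0];  // assumes FP32 precision
    }
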
◆ CreateDefaultAllocator()

InferenceEngine::IAllocator* InferenceEngine::CreateDefaultAllocator ( )
noexcept

Creates the default implementation of the Inference Engine allocator per plugin.

Returns
The Inference Engine IAllocator* instance

◆ CreateExtension()

StatusCode InferenceEngine::CreateExtension ( IExtension *&  ext,
ResponseDesc *  resp 
)
noexcept

Creates the default instance of the extension.

Parameters
ext: Extension interface
resp: Response description
Returns
Status code

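CreateExtension is the factory that an extension library exports. On the application side, such a library is typically loaded and registered through the Core, as in this editorial sketch ("libcustom_extension.so" is a placeholder path):

    #include <inference_engine.hpp>

    using namespace InferenceEngine;

    void loadCustomLayers(Core& ie) {
        IExtensionPtr ext = make_so_pointer<IExtension>("libcustom_extension.so");
        ie.AddExtension(ext, "CPU");  // register the extension for the CPU device
    }
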
◆ fileNameToString()

std::string InferenceEngine::fileNameToString ( const file_name_t &  str)
inline

Conversion from a possibly-wide character string to a single-byte string.

Deprecated:
Use OS-native conversion utilities
Parameters
str: A possibly-wide character string
Returns
A single-byte character string

◆ GetInferenceEngineVersion()

const Version* InferenceEngine::GetInferenceEngineVersion ( )
noexcept

Gets the current Inference Engine version.

Returns
The current Inference Engine version

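A minimal sketch (editorial addition) printing the version fields:

    #include <inference_engine.hpp>
    #include <iostream>

    int main() {
        const InferenceEngine::Version* v = InferenceEngine::GetInferenceEngineVersion();
        std::cout << v->description << " (build " << v->buildNumber << ")" << std::endl;
        return 0;
    }
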
◆ LowLatency()

void InferenceEngine::LowLatency ( InferenceEngine::CNNNetwork &  network)

The transformation finds all TensorIterator layers in the network, processes all back edges that describe a connection between Result and Parameter of the TensorIterator body, and inserts ReadValue layer between Parameter and the next layers after this Parameter, and Assign layer after the layers before the Result layer. Supported platforms: CPU, GNA.

The example below describes the changes to the inner part (body, back edges) of the TensorIterator layer. [] - TensorIterator body () - new layer

before applying the transformation: back_edge_1 -> [Parameter -> some layers ... -> Result ] -> back_edge_1

after applying the transformation:
back_edge_1 -> [Parameter -> (ReadValue layer) -> some layers ... -> (Assign layer)
                                                                  \
                                                                   -> Result] -> back_edge_1

It is recommended to use this transformation in conjunction with the Reshape feature to set the sequence dimension to 1, and with the UnrollTensorIterator transformation. For convenience, the unconditional execution of the UnrollTensorIterator transformation is already enabled when the LowLatency transformation is used for the CPU and GNA plugins, so no action is required there. After applying both of these transformations, the resulting network can be inferred step by step, and the states are stored between inferences.

An illustrative example, not real API:

network->reshape(...)  // Set sequence dimension to 1, recalculating shapes. Optional, depends on the network.
LowLatency(network)    // Apply the LowLatency and UnrollTensorIterator transformations.
network->infer(...)    // Calculate new values for states.
                       // All states are stored between inferences via Assign, ReadValue layers.
network->infer(...)    // Using stored states, calculate new values for states.

Parameters
network: A network to apply the LowLatency transformation to

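An editorial sketch of the full flow with real API calls ("model.xml" is a placeholder); states can be inspected or reset between sequences through InferRequest::QueryState:

    #include <inference_engine.hpp>

    using namespace InferenceEngine;

    void runStepByStep(Core& ie) {
        CNNNetwork network = ie.ReadNetwork("model.xml");
        LowLatency(network);  // inserts ReadValue/Assign pairs into TensorIterator bodies
        ExecutableNetwork exec = ie.LoadNetwork(network, "CPU");
        InferRequest request = exec.CreateInferRequest();

        request.Infer();  // step 1: states are written via Assign layers
        request.Infer();  // step 2: stored states are read via ReadValue layers

        for (auto&& state : request.QueryState())
            state.Reset();  // clear the states before starting a new sequence
    }
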
◆ make_roi_desc()

TensorDesc InferenceEngine::make_roi_desc ( const TensorDesc &  origDesc,
const ROI &  roi,
bool  useOrigMemDesc 
)

Creates a TensorDesc object for ROI.

Parameters
origDesc: Original TensorDesc object.
roi: An image ROI object inside of the original object.
useOrigMemDesc: Flag to use the original memory description (strides/offset). Should be set if the new TensorDesc describes shared memory.
Returns
A newly created TensorDesc object representing ROI.

◆ make_shared_blob() [1/7]

template<typename T , typename... Args, typename std::enable_if< std::is_base_of< Blob, T >::value, int >::type = 0>
std::shared_ptr<T> InferenceEngine::make_shared_blob ( Args &&...  args)

Creates a Blob object of the specified type.

Parameters
args: Constructor arguments for the Blob object
Returns
A shared pointer to the newly created Blob object

◆ make_shared_blob() [2/7]

Blob::Ptr InferenceEngine::make_shared_blob ( const Blob::Ptr &  inputBlob,
const ROI &  roi 
)

Creates a blob describing given ROI object based on the given blob with pre-allocated memory.

Parameters
inputBlob: Original blob with pre-allocated memory.
roi: A ROI object inside of the original blob.
Returns
A shared pointer to the newly created blob.

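Usage sketch (editorial addition; the coordinates are illustrative): creating a zero-copy view over a region of an existing image blob.

    #include <inference_engine.hpp>

    using namespace InferenceEngine;

    Blob::Ptr cropTopLeft(const Blob::Ptr& image) {
        ROI roi;
        roi.posX  = 0;    // region offset within the original blob
        roi.posY  = 0;
        roi.sizeX = 100;  // region size in pixels
        roi.sizeY = 100;
        return make_shared_blob(image, roi);  // shares memory with `image`
    }
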
◆ make_shared_blob() [3/7]

template<typename TypeTo >
InferenceEngine::TBlob<TypeTo>::Ptr InferenceEngine::make_shared_blob ( const TBlob< TypeTo > &  arg)
inline

Creates a copy of given TBlob instance.

Template Parameters
TypeTo: Type of the shared pointer to be created
Parameters
arg: Given pointer to a blob
Returns
A shared pointer to the newly created blob of the given type

◆ make_shared_blob() [4/7]

RemoteBlob::Ptr InferenceEngine::make_shared_blob ( const TensorDesc &  desc,
RemoteContext::Ptr  ctx 
)
inline

A wrapper of CreateBlob method of RemoteContext to keep consistency with plugin-specific wrappers.

Parameters
desc: Defines the layout and dims of the blob
ctx: Pointer to the plugin object derived from RemoteContext.
Returns
A pointer to plugin object that implements RemoteBlob interface.

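An editorial sketch, assuming a plugin that exposes a default remote context (e.g. the GPU plugin): allocating a blob in device memory rather than host memory.

    #include <inference_engine.hpp>

    using namespace InferenceEngine;

    RemoteBlob::Ptr makeDeviceBlob(Core& ie, const TensorDesc& desc) {
        RemoteContext::Ptr ctx = ie.GetDefaultContext("GPU");
        return make_shared_blob(desc, ctx);  // allocated in the device's memory
    }
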
◆ make_shared_blob() [5/7]

template<typename Type >
InferenceEngine::TBlob<Type>::Ptr InferenceEngine::make_shared_blob ( const TensorDesc &  tensorDesc)
inline

Creates a blob with the given tensor descriptor.

Template Parameters
Type: Type of the shared pointer to be created
Parameters
tensorDesc: Tensor descriptor for Blob creation
Returns
A shared pointer to the newly created blob of the given type

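Usage sketch (editorial addition): note that a blob created this way owns no memory until allocate() is called.

    #include <inference_engine.hpp>

    using namespace InferenceEngine;

    Blob::Ptr makeZeroBlob() {
        TensorDesc desc(Precision::FP32, SizeVector{1, 3, 2, 2}, Layout::NCHW);
        TBlob<float>::Ptr blob = make_shared_blob<float>(desc);
        blob->allocate();  // memory is acquired here, not in the constructor
        float* data = blob->buffer().as<float*>();
        for (size_t i = 0; i < blob->size(); ++i)
            data[i] = 0.0f;
        return blob;
    }
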
◆ make_shared_blob() [6/7]

template<typename Type >
InferenceEngine::TBlob<Type>::Ptr InferenceEngine::make_shared_blob ( const TensorDesc &  tensorDesc,
const std::shared_ptr< InferenceEngine::IAllocator > &  alloc 
)
inline

Creates a blob with the given tensor descriptor and allocator.

Template Parameters
Type: Type of the shared pointer to be created
Parameters
tensorDesc: Tensor descriptor for Blob creation
alloc: Shared pointer to IAllocator to use in the blob
Returns
A shared pointer to the newly created blob of the given type

◆ make_shared_blob() [7/7]

template<typename Type >
InferenceEngine::TBlob<Type>::Ptr InferenceEngine::make_shared_blob ( const TensorDesc &  tensorDesc,
Type *  ptr,
size_t  size = 0 
)
inline

Creates a blob with the given tensor descriptor from the pointer to the pre-allocated memory.

Template Parameters
Type: Type of the shared pointer to be created
Parameters
tensorDesc: TensorDesc for Blob creation
ptr: Pointer to the pre-allocated memory
size: Length of the pre-allocated array
Returns
A shared pointer to the newly created blob of the given type

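Usage sketch (editorial addition): wrapping caller-owned memory without a copy. The array must outlive the blob, and allocate() must not be called on such a blob.

    #include <inference_engine.hpp>

    using namespace InferenceEngine;

    static float data[12];  // 1x3x2x2 elements, owned by the application

    TBlob<float>::Ptr wrapExistingMemory() {
        TensorDesc desc(Precision::FP32, SizeVector{1, 3, 2, 2}, Layout::NCHW);
        return make_shared_blob<float>(desc, data, 12);  // no copy, no allocate()
    }
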
◆ make_so_pointer() [1/2]

template<>
std::shared_ptr<IExtension> InferenceEngine::make_so_pointer ( const std::string &  name)
inline

Creates a special shared_pointer wrapper for the given type from a specific shared module.

Parameters
name: A std::string name of the shared library file
Returns
shared_pointer A wrapper for the given type from a specific shared module

◆ make_so_pointer() [2/2]

template<class T >
std::shared_ptr<T> InferenceEngine::make_so_pointer ( const std::string &  name)
delete

Creates a special shared_pointer wrapper for the given type from a specific shared module.

Template Parameters
T: A type of object SOPointer can hold
Parameters
name: Name of the shared library file
Returns
A created object

◆ operator<<() [1/2]

std::ostream& InferenceEngine::operator<< ( std::ostream &  out,
const ColorFormat &  fmt 
)
inline

Prints a string representation of InferenceEngine::ColorFormat to a stream.

Parameters
out: An output stream to send to
fmt: A color format value to print to a stream
Returns
A reference to the out stream

◆ operator<<() [2/2]

std::ostream& InferenceEngine::operator<< ( std::ostream &  out,
const Layout &  p 
)
inline

Prints a string representation of InferenceEngine::Layout to a stream.

Parameters
out: An output stream to send to
p: A layout value to print to a stream
Returns
A reference to the out stream

◆ stringToFileName()

file_name_t InferenceEngine::stringToFileName ( const std::string &  str)
inline

Conversion from single-byte character string to a possibly-wide one.

Deprecated:
Use OS-native conversion utilities
Parameters
str: A single-byte character string
Returns
A possibly-wide character string