Inference Engine C++ API. More...
Namespaces | |
CLDNNConfigParams | |
GPU plugin configuration. | |
GNAConfigParams | |
GNA plugin configuration. | |
HeteroConfigParams | |
Heterogeneous plugin configuration. | |
Metrics | |
Metrics | |
MultiDeviceConfigParams | |
Multi Device plugin configuration. | |
PluginConfigParams | |
Generic plugin configuration. | |
VPUConfigParams | |
VPU plugin configuration. | |
Data Structures | |
class | CNNNetwork |
This class contains all the information about the Neural Network and the related binary information. More... | |
class | ExecutableNetwork |
This class is a wrapper over IExecutableNetwork. More... | |
class | InferRequest |
This is an interface of asynchronous infer request. More... | |
class | VariableState |
C++ exception based error reporting wrapper of API class IVariableState. More... | |
interface | IAllocator |
Allocator concept to be used for memory management and is used as part of the Blob. More... | |
class | Blob |
This class represents a universal container in the Inference Engine. More... | |
class | MemoryBlob |
This class implements a container object that represents a tensor in memory (host and remote/accelerated) More... | |
class | TBlob |
Represents real host memory allocated for a Tensor/Blob per C type. More... | |
union | UserValue |
The method holds the user values to enable binding of data per graph node. More... | |
struct | InferenceEngineProfileInfo |
Represents basic inference profiling information per layer. More... | |
struct | ResponseDesc |
Represents detailed information for an error. More... | |
struct | QueryNetworkResult |
Response structure encapsulating information about supported layers. More... | |
class | GeneralError |
This class represents StatusCode::GENERIC_ERROR exception. More... | |
class | NotImplemented |
This class represents StatusCode::NOT_IMPLEMENTED exception. More... | |
class | NetworkNotLoaded |
This class represents StatusCode::NETWORK_NOT_LOADED exception. More... | |
class | ParameterMismatch |
This class represents StatusCode::PARAMETER_MISMATCH exception. More... | |
class | NotFound |
This class represents StatusCode::NOT_FOUND exception. More... | |
class | OutOfBounds |
This class represents StatusCode::OUT_OF_BOUNDS exception. More... | |
class | Unexpected |
This class represents StatusCode::UNEXPECTED exception. More... | |
class | RequestBusy |
This class represents StatusCode::REQUEST_BUSY exception. More... | |
class | ResultNotReady |
This class represents StatusCode::RESULT_NOT_READY exception. More... | |
class | NotAllocated |
This class represents StatusCode::NOT_ALLOCATED exception. More... | |
class | InferNotStarted |
This class represents StatusCode::INFER_NOT_STARTED exception. More... | |
class | NetworkNotRead |
This class represents StatusCode::NETWORK_NOT_READ exception. More... | |
class | InferCancelled |
This class represents StatusCode::INFER_CANCELLED exception. More... | |
class | CompoundBlob |
This class represents a blob that contains other blobs. More... | |
class | NV12Blob |
Represents a blob that contains two planes (Y and UV) in NV12 color format. More... | |
class | I420Blob |
Represents a blob that contains three planes (Y,U and V) in I420 color format. More... | |
class | BatchedBlob |
This class represents a blob that contains other blobs - one per batch. More... | |
class | Core |
This class represents Inference Engine Core entity. More... | |
class | Data |
This class represents the main Data representation node. More... | |
class | Extension |
This class is a C++ helper to work with objects created using extensions. More... | |
interface | ICNNNetwork |
This is the main interface to describe the NN topology. More... | |
class | IExecutableNetwork |
This is an interface of an executable network. More... | |
struct | DataConfig |
This structure describes data configuration. More... | |
struct | LayerConfig |
This structure describes Layer configuration. More... | |
interface | ILayerImpl |
This class provides interface for extension implementations. More... | |
interface | ILayerExecImpl |
This class provides interface for the implementation with the custom execution code. More... | |
class | IExtension |
This class is the main extension interface. More... | |
class | IInferRequest |
This is an interface of asynchronous infer request. More... | |
interface | IVariableState |
Manages data for reset operations. More... | |
class | InputInfo |
This class contains information about each input of the network. More... | |
class | BlockingDesc |
This class describes blocking layouts. More... | |
class | TensorDesc |
This class defines Tensor description. More... | |
struct | ROI |
This structure describes ROI data for image-like tensors. More... | |
class | LockedMemory |
This class represents locked memory for read/write memory. More... | |
class | LockedMemory< void > |
This class is for <void*> data and allows casting to any pointers. More... | |
class | LockedMemory< const T > |
This class is for read-only segments. More... | |
class | Parameter |
This class represents an object to work with different parameters. More... | |
class | Precision |
This class holds precision value and provides precision related operations. More... | |
struct | PrecisionTrait |
Particular precision traits. More... | |
struct | PreProcessChannel |
This structure stores info about pre-processing of network inputs (scale, mean image, ...) More... | |
class | PreProcessInfo |
This class stores pre-process information for the input. More... | |
class | RemoteBlob |
This class represents an Inference Engine abstraction to the memory allocated on the remote (non-CPU) accelerator device. More... | |
class | RemoteContext |
This class represents an Inference Engine abstraction for remote (non-CPU) accelerator device-specific execution context. Such context represents a scope on the device within which executable networks and remote memory blobs can exist, function and exchange data. More... | |
struct | Version |
Represents version information that describes plugins and the inference engine runtime library. More... | |
Typedefs | |
using | MemoryState = VariableState |
For compatibility reasons. | |
using | gpu_handle_param = void * |
Shortcut for defining a handle parameter. | |
using | BlobMap = std::map< std::string, Blob::Ptr > |
This is a convenient type for working with a map containing pairs(string, pointer to a Blob instance). | |
using | SizeVector = std::vector< size_t > |
Represents tensor size. More... | |
using | DataPtr = std::shared_ptr< Data > |
Smart pointer to Data. | |
using | CDataPtr = std::shared_ptr< const Data > |
Smart pointer to constant Data. | |
using | DataWeakPtr = std::weak_ptr< Data > |
Smart weak pointer to Data. | |
using | OutputsDataMap = std::map< std::string, DataPtr > |
A collection that contains string as key, and Data smart pointer as value. | |
using | ConstOutputsDataMap = std::map< std::string, CDataPtr > |
A collection that contains string as key, and const Data smart pointer as value. | |
using | IExtensionPtr = std::shared_ptr< IExtension > |
A shared pointer to a IExtension interface. | |
using | IMemoryState = IVariableState |
For compatibility reasons. | |
using | InputsDataMap = std::map< std::string, InputInfo::Ptr > |
A collection that contains string as key, and InputInfo smart pointer as value. | |
using | ConstInputsDataMap = std::map< std::string, InputInfo::CPtr > |
A collection that contains string as key, and const InputInfo smart pointer as value. | |
using | ParamMap = std::map< std::string, Parameter > |
An std::map object containing low-level object parameters of classes that are derived from RemoteBlob or RemoteContext. | |
Enumerations | |
enum | LockOp { LOCK_FOR_READ = 0 , LOCK_FOR_WRITE } |
Allocator handle mapping type. More... | |
enum | Layout : uint8_t { ANY = 0 , NCHW = 1 , NHWC = 2 , NCDHW = 3 , NDHWC = 4 , OIHW = 64 , GOIHW = 65 , OIDHW = 66 , GOIDHW = 67 , SCALAR = 95 , C = 96 , CHW = 128 , HWC = 129 , HW = 192 , NC = 193 , CN = 194 , BLOCKED = 200 } |
Layouts that the inference engine supports. More... | |
enum | ColorFormat : uint32_t { RAW = 0u , RGB , BGR , RGBX , BGRX , NV12 , I420 } |
Extra information about input color format for preprocessing. More... | |
enum | StatusCode : int { OK = 0 , GENERAL_ERROR = -1 , NOT_IMPLEMENTED = -2 , NETWORK_NOT_LOADED = -3 , PARAMETER_MISMATCH = -4 , NOT_FOUND = -5 , OUT_OF_BOUNDS = -6 , UNEXPECTED = -7 , REQUEST_BUSY = -8 , RESULT_NOT_READY = -9 , NOT_ALLOCATED = -10 , INFER_NOT_STARTED = -11 , NETWORK_NOT_READ = -12 , INFER_CANCELLED = -13 } |
This enum contains codes for all possible return values of the interface functions. | |
enum | MeanVariant { MEAN_IMAGE , MEAN_VALUE , NONE } |
Defines available types of mean. More... | |
enum | ResizeAlgorithm { NO_RESIZE = 0 , RESIZE_BILINEAR , RESIZE_AREA } |
Represents the list of supported resize algorithms. | |
Functions | |
template<class T > | |
std::shared_ptr< T > | make_so_pointer (const std::string &name)=delete |
Creates a special shared_pointer wrapper for the given type from a specific shared module. More... | |
template<class T > | |
std::shared_ptr< T > | make_so_pointer (const std::wstring &name)=delete |
InferenceEngine::IAllocator * | CreateDefaultAllocator () noexcept |
Creates the default implementation of the Inference Engine allocator per plugin. More... | |
template<typename T , typename std::enable_if<!std::is_pointer< T >::value &&!std::is_reference< T >::value, int >::type = 0, typename std::enable_if< std::is_base_of< Blob, T >::value, int >::type = 0> | |
std::shared_ptr< T > | as (const Blob::Ptr &blob) noexcept |
Helper cast function to work with shared Blob objects. More... | |
template<typename T , typename std::enable_if<!std::is_pointer< T >::value &&!std::is_reference< T >::value, int >::type = 0, typename std::enable_if< std::is_base_of< Blob, T >::value, int >::type = 0> | |
std::shared_ptr< const T > | as (const Blob::CPtr &blob) noexcept |
Helper cast function to work with shared Blob objects. More... | |
template<typename Type > | |
InferenceEngine::TBlob< Type >::Ptr | make_shared_blob (const TensorDesc &tensorDesc) |
Creates a blob with the given tensor descriptor. More... | |
template<typename Type > | |
InferenceEngine::TBlob< Type >::Ptr | make_shared_blob (const TensorDesc &tensorDesc, Type *ptr, size_t size=0) |
Creates a blob with the given tensor descriptor from the pointer to the pre-allocated memory. More... | |
template<typename Type > | |
InferenceEngine::TBlob< Type >::Ptr | make_shared_blob (const TensorDesc &tensorDesc, const std::shared_ptr< InferenceEngine::IAllocator > &alloc) |
Creates a blob with the given tensor descriptor and allocator. More... | |
template<typename TypeTo > | |
InferenceEngine::TBlob< TypeTo >::Ptr | make_shared_blob (const TBlob< TypeTo > &arg) |
Creates a copy of given TBlob instance. More... | |
template<typename T , typename... Args, typename std::enable_if< std::is_base_of< Blob, T >::value, int >::type = 0> | |
std::shared_ptr< T > | make_shared_blob (Args &&... args) |
Creates a Blob object of the specified type. More... | |
Blob::Ptr | make_shared_blob (const Blob::Ptr &inputBlob, const ROI &roi) |
Creates a blob describing given ROI object based on the given blob with pre-allocated memory. More... | |
std::ostream & | operator<< (std::ostream &out, const Layout &p) |
Prints a string representation of InferenceEngine::Layout to a stream. More... | |
std::ostream & | operator<< (std::ostream &out, const ColorFormat &fmt) |
Prints a string representation of InferenceEngine::ColorFormat to a stream. More... | |
template<> | |
std::shared_ptr< IExtension > | make_so_pointer (const std::string &name) |
Creates a special shared_pointer wrapper for the given type from a specific shared module. More... | |
StatusCode | CreateExtension (IExtension *&ext, ResponseDesc *resp) noexcept |
Creates the default instance of the extension. More... | |
TensorDesc | make_roi_desc (const TensorDesc &origDesc, const ROI &roi, bool useOrigMemDesc) |
Creates a TensorDesc object for ROI. More... | |
template<typename F > | |
void | parallel_nt (int nthr, const F &func) |
template<typename F > | |
void | parallel_nt_static (int nthr, const F &func) |
template<typename I , typename F > | |
void | parallel_sort (I begin, I end, const F &comparator) |
template<typename T0 , typename R , typename F > | |
R | parallel_sum (const T0 &D0, const R &input, const F &func) |
template<typename T0 , typename T1 , typename R , typename F > | |
R | parallel_sum2d (const T0 &D0, const T1 &D1, const R &input, const F &func) |
template<typename T0 , typename T1 , typename T2 , typename R , typename F > | |
R | parallel_sum3d (const T0 &D0, const T1 &D1, const T2 &D2, const R &input, const F &func) |
template<typename T > | |
T | parallel_it_init (T start) |
template<typename T , typename Q , typename R , typename... Args> | |
T | parallel_it_init (T start, Q &x, const R &X, Args &&... tuple) |
bool | parallel_it_step () |
template<typename Q , typename R , typename... Args> | |
bool | parallel_it_step (Q &x, const R &X, Args &&... tuple) |
template<typename T , typename Q > | |
void | splitter (const T &n, const Q &team, const Q &tid, T &n_start, T &n_end) |
template<typename T0 , typename F > | |
void | for_1d (const int &ithr, const int &nthr, const T0 &D0, const F &func) |
template<typename T0 , typename F > | |
void | parallel_for (const T0 &D0, const F &func) |
template<typename T0 , typename T1 , typename F > | |
void | for_2d (const int &ithr, const int &nthr, const T0 &D0, const T1 &D1, const F &func) |
template<typename T0 , typename T1 , typename F > | |
void | parallel_for2d (const T0 &D0, const T1 &D1, const F &func) |
template<typename T0 , typename T1 , typename T2 , typename F > | |
void | for_3d (const int &ithr, const int &nthr, const T0 &D0, const T1 &D1, const T2 &D2, const F &func) |
template<typename T0 , typename T1 , typename T2 , typename F > | |
void | parallel_for3d (const T0 &D0, const T1 &D1, const T2 &D2, const F &func) |
template<typename T0 , typename T1 , typename T2 , typename T3 , typename F > | |
void | for_4d (const int &ithr, const int &nthr, const T0 &D0, const T1 &D1, const T2 &D2, const T3 &D3, const F &func) |
template<typename T0 , typename T1 , typename T2 , typename T3 , typename F > | |
void | parallel_for4d (const T0 &D0, const T1 &D1, const T2 &D2, const T3 &D3, const F &func) |
template<typename T0 , typename T1 , typename T2 , typename T3 , typename T4 , typename F > | |
void | for_5d (const int &ithr, const int &nthr, const T0 &D0, const T1 &D1, const T2 &D2, const T3 &D3, const T4 &D4, const F &func) |
template<typename T0 , typename T1 , typename T2 , typename T3 , typename T4 , typename F > | |
void | parallel_for5d (const T0 &D0, const T1 &D1, const T2 &D2, const T3 &D3, const T4 &D4, const F &func) |
RemoteBlob::Ptr | make_shared_blob (const TensorDesc &desc, RemoteContext::Ptr ctx) |
A wrapper of CreateBlob method of RemoteContext to keep consistency with plugin-specific wrappers. More... | |
void | LowLatency (InferenceEngine::CNNNetwork &network) |
The transformation finds all TensorIterator layers in the network, processes all back edges that describe a connection between Result and Parameter of the TensorIterator body, and inserts ReadValue layer between Parameter and the next layers after this Parameter, and Assign layer after the layers before the Result layer. Supported platforms: CPU, GNA. More... | |
std::string | fileNameToString (const file_name_t &str) |
Conversion from possibly-wide character string to a single-byte chain. More... | |
file_name_t | stringToFileName (const std::string &str) |
Conversion from single-byte character string to a possibly-wide one. More... | |
const Version * | GetInferenceEngineVersion () noexcept |
Gets the current Inference Engine version. More... | |
Variables | |
static constexpr auto | HDDL_GRAPH_TAG = "HDDL_GRAPH_TAG" |
[Only for HDDLPlugin] Type: Arbitrary non-empty string. If empty (""), equals not set; default: "". This option allows you to specify the number of MYX devices used to infer a specific executable network. Note: only one network is allocated to one device. The number of devices for the tag is specified in the hddl_service.config file. Example: "service_settings": { "graph_tag_map": { "tagA":3 } } means that an executable network marked with tagA will be executed on 3 devices. | |
static constexpr auto | HDDL_STREAM_ID = "HDDL_STREAM_ID" |
[Only for HDDLPlugin] Type: Arbitrary non-empty string. If empty (""), equals not set; default: "". This config causes the executable network to be allocated on one specific device (instead of multiple devices), so that all inference through this executable network is done on that device. Note: only one network is allocated to one device. The number of devices to be used for stream-affinity must be specified in the hddl_service.config file. Example: "service_settings": { "stream_device_number":5 } means that 5 devices will be used for stream-affinity. | |
static constexpr auto | HDDL_DEVICE_TAG = "HDDL_DEVICE_TAG" |
[Only for HDDLPlugin] Type: Arbitrary non-empty string. If empty (""), equals not set; default: "". This config allows the user to control devices flexibly by giving a "tag" to a device when allocating a network to it. Afterwards, the user can allocate/deallocate networks on this device using this "tag". Devices used for this purpose are controlled by the so-called "Bypass Scheduler" in the HDDL backend, and the number of such devices must be specified in the hddl_service.config file. Example: "service_settings": { "bypass_device_number": 5 } means that 5 devices will be used by the Bypass scheduler. | |
static constexpr auto | HDDL_BIND_DEVICE = "HDDL_BIND_DEVICE" |
[Only for HDDLPlugin] Type: "YES/NO", default is "NO". This config is a sub-config of DEVICE_TAG and is only available when "DEVICE_TAG" is set. After the user loads a network, they receive a handle for it. If "YES", the allocated network is bound to the device (with the specified "DEVICE_TAG"), which means all subsequent inference through this network handle will be executed on this device only. If "NO", the allocated network is not bound to the device (with the specified "DEVICE_TAG"). If the same network is allocated on multiple other devices (also with BIND_DEVICE set to "NO"), then inference through any handle of these networks may be executed on any of the devices that have the network loaded. | |
static constexpr auto | HDDL_RUNTIME_PRIORITY = "HDDL_RUNTIME_PRIORITY" |
[Only for HDDLPlugin] Type: A signed int wrapped in a string, default is "0". This config is a sub-config of DEVICE_TAG and is only available when "DEVICE_TAG" is set and "BIND_DEVICE" is "NO". When multiple devices run the same network (in the Bypass Scheduler), the device with a larger number has a higher priority and is preferentially fed more inference tasks. | |
static constexpr auto | HDDL_USE_SGAD = "HDDL_USE_SGAD" |
[Only for HDDLPlugin] Type: "YES/NO", default is "NO". SGAD is short for "Single Graph All Device". With this scheduler, once the application allocates one network, all devices (managed by the SGAD scheduler) are loaded with this graph. More than one network can be loaded onto a single device. Once the application deallocates a network, all devices unload it. | |
static constexpr auto | HDDL_GROUP_DEVICE = "HDDL_GROUP_DEVICE" |
[Only for HDDLPlugin] Type: A signed int wrapped in a string, default is "0". This config assigns a "group id" to a device when the device has been reserved for a certain client; that client can use the devices in the group by passing this group id, while other clients cannot use them. Each device has its own group id, and devices in one group share the same group id. | |
static constexpr auto | MYRIAD_ENABLE_FORCE_RESET = "MYRIAD_ENABLE_FORCE_RESET" |
The flag to reset stalled devices. This is a plugin-scope option and must be used with the plugin's SetConfig method. The only possible values are: CONFIG_VALUE(YES), CONFIG_VALUE(NO) (default value). | |
static constexpr auto | MYRIAD_DDR_TYPE = "MYRIAD_DDR_TYPE" |
This option allows you to specify the device memory type. | |
static constexpr auto | MYRIAD_DDR_AUTO = "MYRIAD_DDR_AUTO" |
Supported keys definition for InferenceEngine::MYRIAD_DDR_TYPE option. | |
static constexpr auto | MYRIAD_DDR_MICRON_2GB = "MYRIAD_DDR_MICRON_2GB" |
static constexpr auto | MYRIAD_DDR_SAMSUNG_2GB = "MYRIAD_DDR_SAMSUNG_2GB" |
static constexpr auto | MYRIAD_DDR_HYNIX_2GB = "MYRIAD_DDR_HYNIX_2GB" |
static constexpr auto | MYRIAD_DDR_MICRON_1GB = "MYRIAD_DDR_MICRON_1GB" |
static constexpr auto | MYRIAD_PROTOCOL = "MYRIAD_PROTOCOL" |
This option allows you to specify the protocol. | |
static constexpr auto | MYRIAD_PCIE = "MYRIAD_PCIE" |
Supported keys definition for InferenceEngine::MYRIAD_PROTOCOL option. | |
static constexpr auto | MYRIAD_USB = "MYRIAD_USB" |
static constexpr auto | MYRIAD_THROUGHPUT_STREAMS = "MYRIAD_THROUGHPUT_STREAMS" |
Optimize VPU plugin execution to maximize throughput. This option should be used with an integer value that is the requested number of streams. The only possible values are 1, 2, and 3. | |
static constexpr auto | MYRIAD_ENABLE_HW_ACCELERATION = "MYRIAD_ENABLE_HW_ACCELERATION" |
Turn on HW stages usage (applicable for MyriadX devices only). The only possible values are: CONFIG_VALUE(YES) (default value), CONFIG_VALUE(NO). | |
static constexpr auto | MYRIAD_ENABLE_RECEIVING_TENSOR_TIME = "MYRIAD_ENABLE_RECEIVING_TENSOR_TIME" |
The flag for adding the time of obtaining a tensor to the profiling information. The only possible values are: CONFIG_VALUE(YES), CONFIG_VALUE(NO) (default value). | |
static constexpr auto | MYRIAD_CUSTOM_LAYERS = "MYRIAD_CUSTOM_LAYERS" |
This option allows you to pass a custom layers binding XML file. If a layer is present in such an XML file, it is used during inference even if the layer is natively supported. | |
Inference Engine C++ API.
using InferenceEngine::SizeVector = typedef std::vector<size_t> |
Represents tensor size.
The order is opposite to the order in Caffe*: (w,h,n,b) where the most frequently changing element in memory is first.
enum InferenceEngine::ColorFormat : uint32_t |
Extra information about input color format for preprocessing.
enum InferenceEngine::Layout : uint8_t |
Layouts that the inference engine supports.
std::shared_ptr< T > InferenceEngine::as (const Blob::Ptr &blob) noexcept
Helper cast function to work with shared Blob objects.

std::shared_ptr< const T > InferenceEngine::as (const Blob::CPtr &blob) noexcept
Helper cast function to work with shared Blob objects.

InferenceEngine::IAllocator * InferenceEngine::CreateDefaultAllocator () noexcept
Creates the default implementation of the Inference Engine allocator per plugin.
StatusCode InferenceEngine::CreateExtension (IExtension *&ext, ResponseDesc *resp) noexcept
Creates the default instance of the extension.
ext | Extension interface |
resp | Response description |
std::string InferenceEngine::fileNameToString (const file_name_t &str)
Conversion from a possibly-wide character string to a single-byte chain.
str | A possibly-wide character string |
const Version * InferenceEngine::GetInferenceEngineVersion () noexcept
Gets the current Inference Engine version.
void InferenceEngine::LowLatency | ( | InferenceEngine::CNNNetwork & | network | ) |
The transformation finds all TensorIterator layers in the network, processes all back edges that describe a connection between Result and Parameter of the TensorIterator body, and inserts ReadValue layer between Parameter and the next layers after this Parameter, and Assign layer after the layers before the Result layer. Supported platforms: CPU, GNA.
The example below describes the changes to the inner part (body, back edges) of the TensorIterator layer. [] - TensorIterator body, () - new layer.

before applying the transformation:
back_edge_1 -> [Parameter -> some layers ... -> Result ] -> back_edge_1

after applying the transformation:
back_edge_1 -> [Parameter -> (ReadValue layer) -> some layers ... -> (Assign layer) ]
                                                                      \
                                                                       -> Result ] -> back_edge_1
It is recommended to use this transformation together with the Reshape feature (to set the sequence dimension to 1) and with the UnrollTensorIterator transformation. For convenience, the UnrollTensorIterator transformation is already executed unconditionally when the LowLatency transformation is used with the CPU and GNA plugins, so no action is required there. After applying both transformations, the resulting network can be inferred step by step, and the states are stored between inferences.
An illustrative example, not real API:
network->reshape(...)  // Set sequence dimension to 1, recalculating shapes. Optional, depends on the network.
LowLatency(network)    // Applying LowLatency and UnrollTensorIterator transformations.
network->infer(...)    // Calculating new values for states. All states are stored between inferences via Assign, ReadValue layers.
network->infer(...)    // Using stored states, calculating new values for states.
network | A network to apply LowLatency transformation |
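The transformation above can be sketched with the real C++ API as follows. This is a minimal, untested sketch: the model path is hypothetical, and error handling is omitted.

```cpp
#include <inference_engine.hpp>

using namespace InferenceEngine;

int main() {
    Core core;
    // "model.xml" is a placeholder path to an IR model containing TensorIterator layers.
    CNNNetwork network = core.ReadNetwork("model.xml");

    // Apply the LowLatency transformation before loading the network.
    // For the CPU and GNA plugins this also runs UnrollTensorIterator.
    LowLatency(network);

    ExecutableNetwork exec = core.LoadNetwork(network, "CPU");
    InferRequest request = exec.CreateInferRequest();

    // States are kept between Infer() calls via the inserted
    // Assign/ReadValue layers, so the sequence can be fed step by step.
    request.Infer();
    request.Infer();
    return 0;
}
```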
TensorDesc InferenceEngine::make_roi_desc (const TensorDesc &origDesc, const ROI &roi, bool useOrigMemDesc)
Creates a TensorDesc object for ROI.
origDesc | original TensorDesc object. |
roi | An image ROI object inside of the original object. |
useOrigMemDesc | Flag to use original memory description (strides/offset). Should be set if the new TensorDesc describes shared memory. |
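As a sketch of how these parameters fit together (the image size and ROI coordinates are made up for illustration):

```cpp
#include <inference_engine.hpp>

using namespace InferenceEngine;

int main() {
    // Original 640x480 3-channel U8 image, NHWC layout (sizes are illustrative).
    TensorDesc origDesc(Precision::U8, {1, 3, 480, 640}, Layout::NHWC);

    // 100x100 region at offset (10, 20) inside the original image.
    ROI roi{0 /*id*/, 10 /*posX*/, 20 /*posY*/, 100 /*sizeX*/, 100 /*sizeY*/};

    // useOrigMemDesc = true keeps the original strides/offset, so the
    // resulting TensorDesc can describe memory shared with the original blob.
    TensorDesc roiDesc = make_roi_desc(origDesc, roi, true);
    return 0;
}
```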
std::shared_ptr< T > InferenceEngine::make_shared_blob (Args &&... args)
Creates a Blob object of the specified type.

InferenceEngine::TBlob< TypeTo >::Ptr InferenceEngine::make_shared_blob (const TBlob< TypeTo > &arg)
Creates a copy of given TBlob instance.
TypeTo | Type of the shared pointer to be created |
arg | given pointer to blob |
RemoteBlob::Ptr InferenceEngine::make_shared_blob (const TensorDesc &desc, RemoteContext::Ptr ctx)
A wrapper of the CreateBlob method of RemoteContext to keep consistency with plugin-specific wrappers.
desc | Defines the layout and dims of the blob |
ctx | Pointer to the plugin object derived from RemoteContext. |
InferenceEngine::TBlob< Type >::Ptr InferenceEngine::make_shared_blob (const TensorDesc &tensorDesc)
Creates a blob with the given tensor descriptor.
Type | Type of the shared pointer to be created |
tensorDesc | Tensor descriptor for Blob creation |
InferenceEngine::TBlob< Type >::Ptr InferenceEngine::make_shared_blob (const TensorDesc &tensorDesc, const std::shared_ptr< InferenceEngine::IAllocator > &alloc)
Creates a blob with the given tensor descriptor and allocator.
Type | Type of the shared pointer to be created |
tensorDesc | Tensor descriptor for Blob creation |
alloc | Shared pointer to IAllocator to use in the blob |
InferenceEngine::TBlob< Type >::Ptr InferenceEngine::make_shared_blob (const TensorDesc &tensorDesc, Type *ptr, size_t size = 0)
Creates a blob with the given tensor descriptor from the pointer to the pre-allocated memory.
Type | Type of the shared pointer to be created |
tensorDesc | TensorDesc for Blob creation |
ptr | Pointer to the pre-allocated memory |
size | Length of the pre-allocated array |
std::shared_ptr< T > InferenceEngine::make_so_pointer (const std::string &name) = delete
Creates a special shared_pointer wrapper for the given type from a specific shared module.
name | A std::string name of the shared library file |
std::shared_ptr< T > InferenceEngine::make_so_pointer (const std::wstring &name) = delete
Creates a special shared_pointer wrapper for the given type from a specific shared module.
T | A type of object SOPointer can hold |
name | Name of the shared library file |
std::ostream & InferenceEngine::operator<< (std::ostream &out, const ColorFormat &fmt)
Prints a string representation of InferenceEngine::ColorFormat to a stream.
out | An output stream to send to |
fmt | A color format value to print to a stream |
Returns: a reference to the out stream
std::ostream & InferenceEngine::operator<< (std::ostream &out, const Layout &p)
Prints a string representation of InferenceEngine::Layout to a stream.
out | An output stream to send to |
p | A layout value to print to a stream |
Returns: a reference to the out stream
file_name_t InferenceEngine::stringToFileName (const std::string &str)
Conversion from a single-byte character string to a possibly-wide one.
str | A single-byte character string |