namespace InferenceEngine::Metrics

Overview

Metrics

namespace Metrics {

// enums

enum DeviceType;

// global variables

static constexpr auto METRIC_AVAILABLE_DEVICES = "AVAILABLE_DEVICES";
static constexpr auto METRIC_SUPPORTED_METRICS = "SUPPORTED_METRICS";
static constexpr auto METRIC_SUPPORTED_CONFIG_KEYS = "SUPPORTED_CONFIG_KEYS";
static constexpr auto METRIC_FULL_DEVICE_NAME = "FULL_DEVICE_NAME";
static constexpr auto METRIC_OPTIMIZATION_CAPABILITIES = "OPTIMIZATION_CAPABILITIES";
static constexpr auto FP32 = "FP32";
static constexpr auto BF16 = "BF16";
static constexpr auto FP16 = "FP16";
static constexpr auto INT8 = "INT8";
static constexpr auto BIN = "BIN";
static constexpr auto WINOGRAD = "WINOGRAD";
static constexpr auto BATCHED_BLOB = "BATCHED_BLOB";
static constexpr auto METRIC_RANGE_FOR_STREAMS = "RANGE_FOR_STREAMS";
static constexpr auto METRIC_OPTIMAL_BATCH_SIZE = "OPTIMAL_BATCH_SIZE";
static constexpr auto METRIC_MAX_BATCH_SIZE = "MAX_BATCH_SIZE";
static constexpr auto METRIC_RANGE_FOR_ASYNC_INFER_REQUESTS = "RANGE_FOR_ASYNC_INFER_REQUESTS";
static constexpr auto METRIC_NUMBER_OF_WAITING_INFER_REQUESTS = "NUMBER_OF_WAITING_INFER_REQUESTS";
static constexpr auto METRIC_NUMBER_OF_EXEC_INFER_REQUESTS = "NUMBER_OF_EXEC_INFER_REQUESTS";
static constexpr auto METRIC_DEVICE_ARCHITECTURE = "DEVICE_ARCHITECTURE";
static constexpr auto METRIC_DEVICE_TYPE = "DEVICE_TYPE";
static constexpr auto METRIC_DEVICE_GOPS = "DEVICE_GOPS";
static constexpr auto METRIC_IMPORT_EXPORT_SUPPORT = "IMPORT_EXPORT_SUPPORT";
static constexpr auto METRIC_NETWORK_NAME = "NETWORK_NAME";
static constexpr auto METRIC_DEVICE_THERMAL = "DEVICE_THERMAL";
static constexpr auto METRIC_OPTIMAL_NUMBER_OF_INFER_REQUESTS = "OPTIMAL_NUMBER_OF_INFER_REQUESTS";

} // namespace Metrics

Detailed Documentation

Metrics

Global Variables

static constexpr auto METRIC_AVAILABLE_DEVICES = "AVAILABLE_DEVICES"

Metric to get a std::vector<std::string> of available device IDs. String value is “AVAILABLE_DEVICES”.
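
For illustration, a minimal sketch of querying this metric with the classic InferenceEngine C++ API; METRIC_KEY is the standard helper macro from ie_plugin_config.hpp, and the "GPU" device name is only an example:

#include <inference_engine.hpp>
#include <iostream>
#include <string>
#include <vector>

int main() {
    InferenceEngine::Core core;
    // The metric returns a std::vector<std::string> of device IDs wrapped
    // in an InferenceEngine::Parameter.
    auto deviceIDs = core.GetMetric("GPU", METRIC_KEY(AVAILABLE_DEVICES))
                         .as<std::vector<std::string>>();
    for (const auto& id : deviceIDs)
        std::cout << "GPU." << id << std::endl;
    return 0;
}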

static constexpr auto METRIC_SUPPORTED_METRICS = "SUPPORTED_METRICS"

Metric to get a std::vector<std::string> of supported metrics. String value is “SUPPORTED_METRICS”.

This can be used as an executable network metric as well.

Each of the returned device metrics can be passed to Core::GetMetric, while executable network metrics can be passed to ExecutableNetwork::GetMetric.
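
A sketch of that round trip for an executable network; the model path, the "CPU" device name, and the listExecNetworkMetrics helper are placeholders for illustration:

#include <inference_engine.hpp>
#include <string>
#include <vector>

void listExecNetworkMetrics() {
    InferenceEngine::Core core;
    auto network = core.ReadNetwork("model.xml");   // placeholder path
    auto execNetwork = core.LoadNetwork(network, "CPU");
    // List the metrics this executable network reports, then query each one.
    auto metricNames = execNetwork.GetMetric(METRIC_KEY(SUPPORTED_METRICS))
                                  .as<std::vector<std::string>>();
    for (const auto& name : metricNames) {
        InferenceEngine::Parameter value = execNetwork.GetMetric(name);
        // Convert with value.as<T>() using the type documented for that metric.
    }
}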

static constexpr auto METRIC_SUPPORTED_CONFIG_KEYS = "SUPPORTED_CONFIG_KEYS"

Metric to get a std::vector<std::string> of supported config keys. String value is “SUPPORTED_CONFIG_KEYS”.

This can be used as an executable network metric as well.

Each of the returned device configuration keys can be passed to Core::SetConfig, Core::GetConfig, and Core::LoadNetwork, while configuration keys for executable networks can be passed to ExecutableNetwork::SetConfig and ExecutableNetwork::GetConfig.
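
A sketch of enumerating a device's configuration keys, reading each value back, and then overriding one key; the "CPU" device name and the PERF_COUNT key are only examples, and CONFIG_KEY/CONFIG_VALUE are the standard helper macros:

#include <inference_engine.hpp>
#include <string>
#include <vector>

void inspectDeviceConfig() {
    InferenceEngine::Core core;
    auto keys = core.GetMetric("CPU", METRIC_KEY(SUPPORTED_CONFIG_KEYS))
                    .as<std::vector<std::string>>();
    for (const auto& key : keys) {
        // Each supported key can be read back through Core::GetConfig.
        InferenceEngine::Parameter value = core.GetConfig("CPU", key);
    }
    // A supported key can also be set on the device before loading a network.
    core.SetConfig({{CONFIG_KEY(PERF_COUNT), CONFIG_VALUE(YES)}}, "CPU");
}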

static constexpr auto METRIC_FULL_DEVICE_NAME = "FULL_DEVICE_NAME"

Metric to get a std::string value representing a full device name. String value is “FULL_DEVICE_NAME”.

static constexpr auto METRIC_OPTIMIZATION_CAPABILITIES = "OPTIMIZATION_CAPABILITIES"

Metric to get a std::vector<std::string> of optimization options per device. String value is “OPTIMIZATION_CAPABILITIES”.

The possible values (a query sketch follows this list):

  • “FP32” - device can support FP32 models

  • “BF16” - device can support BF16 computations for models

  • “FP16” - device can support FP16 models

  • “INT8” - device can support models with INT8 layers

  • “BIN” - device can support models with BIN layers

  • “WINOGRAD” - device can support models where convolution is implemented via Winograd transformations

  • “BATCHED_BLOB” - device can support BatchedBlob
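
A sketch of the capability check referenced above, using the FP16 constant defined in this namespace; the deviceSupportsFP16 helper is hypothetical:

#include <algorithm>
#include <string>
#include <vector>
#include <inference_engine.hpp>

bool deviceSupportsFP16(const std::string& deviceName) {
    InferenceEngine::Core core;
    auto capabilities = core.GetMetric(deviceName, METRIC_KEY(OPTIMIZATION_CAPABILITIES))
                            .as<std::vector<std::string>>();
    // The metric returns plain strings, so the check is a simple membership test.
    return std::find(capabilities.begin(), capabilities.end(),
                     InferenceEngine::Metrics::FP16) != capabilities.end();
}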

static constexpr auto METRIC_RANGE_FOR_STREAMS = "RANGE_FOR_STREAMS"

Metric to provide information about a range for streams on platforms where streams are supported.

Metric returns a value of std::tuple<unsigned int, unsigned int> type, where:

  • First value is the bottom bound.

  • Second value is the upper bound.

String value for the metric name is “RANGE_FOR_STREAMS”.
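
A sketch of reading this range; the "CPU" device name and the printStreamRange helper are only for illustration:

#include <inference_engine.hpp>
#include <iostream>
#include <tuple>

void printStreamRange() {
    InferenceEngine::Core core;
    auto range = core.GetMetric("CPU", METRIC_KEY(RANGE_FOR_STREAMS))
                     .as<std::tuple<unsigned int, unsigned int>>();
    std::cout << "streams: " << std::get<0>(range)   // bottom bound
              << " .. " << std::get<1>(range)        // upper bound
              << std::endl;
}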

static constexpr auto METRIC_OPTIMAL_BATCH_SIZE = "OPTIMAL_BATCH_SIZE"

Metric to query the optimal batch size for the given device and network.

Metric returns a value of unsigned int type: the optimal batch size for the given network on the given device, aligned to a power of 2. MODEL_PTR is a required option for this metric, since the optimal batch size depends on the model; if MODEL_PTR is not given, the result of the metric is always 1. For the GPU, the metric is queried automatically whenever the OpenVINO throughput performance hint is used, so that a result greater than 1 governs automatic batching (transparently to the application). Automatic batching can be disabled by setting ALLOW_AUTO_BATCHING to NO.
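
A hedged sketch of querying this metric. It assumes a release in which Core::GetMetric accepts an options ParamMap and that the MODEL_PTR option takes a pointer to the model's ngraph function; verify both against the release you target:

#include <inference_engine.hpp>
#include <string>

unsigned int queryOptimalBatch(const std::string& modelPath) {
    InferenceEngine::Core core;
    auto network = core.ReadNetwork(modelPath);
    // Assumption: MODEL_PTR is passed as a shared_ptr to the ngraph::Function
    // behind the CNNNetwork; without it the metric always returns 1.
    InferenceEngine::ParamMap options = {{"MODEL_PTR", network.getFunction()}};
    // Assumption: this GetMetric overload taking an options map exists in the
    // targeted release.
    return core.GetMetric("GPU", METRIC_KEY(OPTIMAL_BATCH_SIZE), options)
               .as<unsigned int>();
}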

static constexpr auto METRIC_MAX_BATCH_SIZE = "MAX_BATCH_SIZE"

Metric to get the maximum batch size which does not cause performance degradation due to memory swap impact.

Metric returns a value of unsigned int type. Note that the returned value may not be aligned to a power of 2. Also, MODEL_PTR is a required option for this metric, since the available maximum batch size depends on the model size. If MODEL_PTR is not given, the metric returns 1.

static constexpr auto METRIC_RANGE_FOR_ASYNC_INFER_REQUESTS = "RANGE_FOR_ASYNC_INFER_REQUESTS"

Metric to provide a hint for the range of the number of asynchronous infer requests. If the device supports streams, the metric provides the range for the number of infer requests per stream.

Metric returns a value of std::tuple<unsigned int, unsigned int, unsigned int> type, where:

  • First value is the bottom bound.

  • Second value is the upper bound.

  • Third value is the step inside this range.

String value for the metric name is “RANGE_FOR_ASYNC_INFER_REQUESTS”.

static constexpr auto METRIC_NUMBER_OF_WAITING_INFER_REQUESTS = "NUMBER_OF_WAITING_INFER_REQUESTS"

Metric to get an unsigned int value of the number of waiting infer requests.

String value is “NUMBER_OF_WAITING_INFER_REQUESTS”. This can be used as an executable network metric as well.

static constexpr auto METRIC_NUMBER_OF_EXEC_INFER_REQUESTS = "NUMBER_OF_EXEC_INFER_REQUESTS"

Metric to get an unsigned int value of the number of infer requests in the execution stage.

String value is “NUMBER_OF_EXEC_INFER_REQUESTS”. This can be used as an executable network metric as well.

static constexpr auto METRIC_DEVICE_ARCHITECTURE = "DEVICE_ARCHITECTURE"

Metric which defines the device architecture.

static constexpr auto METRIC_DEVICE_TYPE = "DEVICE_TYPE"

Metric to get a type of device. See DeviceType enum definition for possible return values.

static constexpr auto METRIC_DEVICE_GOPS = "DEVICE_GOPS"

Metric which defines the Giga OPS per second count (GFLOPS or GIOPS) for the set of precisions supported by the specified device.

static constexpr auto METRIC_IMPORT_EXPORT_SUPPORT = "IMPORT_EXPORT_SUPPORT"

Metric which defines support of import/export functionality by the plugin.

static constexpr auto METRIC_NETWORK_NAME = "NETWORK_NAME"

Metric to get the name of a network. String value is “NETWORK_NAME”.

static constexpr auto METRIC_DEVICE_THERMAL = "DEVICE_THERMAL"

Metric to get a float value of device thermal. String value is “DEVICE_THERMAL”.

static constexpr auto METRIC_OPTIMAL_NUMBER_OF_INFER_REQUESTS = "OPTIMAL_NUMBER_OF_INFER_REQUESTS"

Metric to get an unsigned integer value of the optimal number of infer requests for an executable network.
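
A sketch of the typical use of this metric: sizing the asynchronous request pool after loading a network. The model path, the "CPU" device name, and the createRequestPool helper are placeholders:

#include <inference_engine.hpp>
#include <vector>

std::vector<InferenceEngine::InferRequest> createRequestPool() {
    InferenceEngine::Core core;
    auto network = core.ReadNetwork("model.xml");   // placeholder path
    auto execNetwork = core.LoadNetwork(network, "CPU");
    // Size the asynchronous request pool from the network's own hint.
    auto nireq = execNetwork.GetMetric(METRIC_KEY(OPTIMAL_NUMBER_OF_INFER_REQUESTS))
                            .as<unsigned int>();
    std::vector<InferenceEngine::InferRequest> requests;
    for (unsigned int i = 0; i < nireq; ++i)
        requests.push_back(execNetwork.CreateInferRequest());
    return requests;
}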