API to get information about the system and core processor capabilities.

Functions
bool InferenceEngine::checkOpenMpEnvVars (bool includeOMPNumThreads=true)
    Checks whether OpenMP environment variables are defined.

std::vector<int> InferenceEngine::getAvailableNUMANodes ()
    Returns the available CPU NUMA nodes (on Linux, and on Windows only with TBB; a single node is assumed on all other OSes).

int InferenceEngine::getNumberOfCPUCores ()
    Returns the number of physical CPU cores on Linux/Windows (generally more performance-friendly for servers); on other OSes it relies on the parallel API of choice, which usually uses the logical cores.

bool InferenceEngine::with_cpu_x86_sse42 ()
    Checks whether the CPU supports SSE 4.2.

bool InferenceEngine::with_cpu_x86_avx ()
    Checks whether the CPU supports AVX.

bool InferenceEngine::with_cpu_x86_avx2 ()
    Checks whether the CPU supports AVX2.

bool InferenceEngine::with_cpu_x86_avx512f ()
    Checks whether the CPU supports AVX-512 Foundation (AVX512F).

bool InferenceEngine::with_cpu_x86_avx512_core ()
    Checks whether the CPU supports the AVX-512 core instruction sets (AVX512F, AVX512BW, AVX512DQ).

bool InferenceEngine::with_cpu_x86_bfloat16 ()
    Checks whether the CPU supports BFloat16 (AVX512_BF16).
bool InferenceEngine::checkOpenMpEnvVars (bool includeOMPNumThreads = true)

Checks whether OpenMP environment variables are defined.

Parameters
    [in] includeOMPNumThreads  Indicates whether OMP_NUM_THREADS is included in the check

Returns
    true if any OpenMP environment variable is defined, false otherwise

std::vector<int> InferenceEngine::getAvailableNUMANodes ()

Returns the available CPU NUMA nodes (on Linux, and on Windows only with TBB; a single node is assumed on all other OSes).
int InferenceEngine::getNumberOfCPUCores ()

Returns the number of physical CPU cores on Linux/Windows (generally more performance-friendly for servers); on other OSes it relies on the parallel API of choice, which usually uses the logical cores.
bool InferenceEngine::with_cpu_x86_avx ()

Checks whether the CPU supports AVX.

Returns
    true if AVX instructions are available, false otherwise

bool InferenceEngine::with_cpu_x86_avx2 ()

Checks whether the CPU supports AVX2.

Returns
    true if AVX2 instructions are available, false otherwise

bool InferenceEngine::with_cpu_x86_avx512_core ()

Checks whether the CPU supports the AVX-512 core instruction sets.

Returns
    true if AVX512F, AVX512BW, and AVX512DQ instructions are available, false otherwise

bool InferenceEngine::with_cpu_x86_avx512f ()

Checks whether the CPU supports AVX-512 Foundation.

Returns
    true if AVX512F (foundation) instructions are available, false otherwise

bool InferenceEngine::with_cpu_x86_bfloat16 ()

Checks whether the CPU supports BFloat16.

Returns
    true if AVX512_BF16 instructions are available, false otherwise

bool InferenceEngine::with_cpu_x86_sse42 ()

Checks whether the CPU supports SSE 4.2.

Returns
    true if SSE 4.2 instructions are available, false otherwise