Inference Engine Developer Guide

Introduction

Inference Engine is a set of C++ libraries with C and Python bindings that provides a common API for delivering inference solutions on the platform of your choice. Use the Inference Engine API to read a model in Intermediate Representation (IR) or ONNX format and execute it on your target devices.
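
The typical call sequence reads a model, compiles it for a device, and runs an inference request. The sketch below is a minimal illustration using the classic Inference Engine C++ API; the model path and output blob name are placeholders, not values from this guide.

```cpp
#include <inference_engine.hpp>

int main() {
    // The Core object manages the available device plugins.
    InferenceEngine::Core core;

    // Read a model in IR format (an .onnx path works the same way).
    // "model.xml" is a placeholder path for this sketch.
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");

    // Compile the model for a specific device plugin, e.g. the CPU plugin.
    InferenceEngine::ExecutableNetwork execNetwork = core.LoadNetwork(network, "CPU");

    // Create an inference request and run it synchronously.
    InferenceEngine::InferRequest request = execNetwork.CreateInferRequest();
    request.Infer();

    // Retrieve the output blob by its name ("output" is a placeholder).
    InferenceEngine::Blob::Ptr output = request.GetBlob("output");
    return 0;
}
```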

Inference Engine uses a plugin architecture. An Inference Engine plugin is a software component that contains a complete implementation for inference on a particular Intel® hardware device: CPU, GPU, VPU, and so on. Each plugin implements the unified API and may also provide additional hardware-specific APIs.
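
Because every plugin exposes the same unified API, switching devices usually amounts to changing the device name passed when the model is compiled. The sketch below, again with a placeholder model path, enumerates the devices whose plugins are available and compiles the same model for each of them.

```cpp
#include <inference_engine.hpp>
#include <iostream>
#include <string>

int main() {
    InferenceEngine::Core core;
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");  // placeholder path

    // Enumerate devices for which a plugin is available on this machine.
    for (const std::string& device : core.GetAvailableDevices()) {
        std::cout << "Compiling for " << device << std::endl;
        // The same unified call works for "CPU", "GPU", "MYRIAD", etc.;
        // each plugin applies its own hardware-specific optimizations.
        InferenceEngine::ExecutableNetwork exec = core.LoadNetwork(network, device);
    }
    return 0;
}
```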

The scheme below illustrates the typical workflow for deploying a trained deep learning model:

[Figure: typical workflow for deploying a trained deep learning model (BASIC_FLOW_IE_C.svg)]

* nGraph is the internal graph representation in the OpenVINO™ toolkit. Use it to build a model directly from source code rather than reading it from a file; see the sketch below.
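
As an illustration, a model can be constructed programmatically with the nGraph API and then wrapped into a CNNNetwork for inference. This is a minimal sketch assuming a single-input ReLU graph; it is not an example taken from this guide.

```cpp
#include <inference_engine.hpp>
#include <ngraph/ngraph.hpp>
#include <ngraph/opsets/opset1.hpp>

int main() {
    using namespace ngraph;

    // Declare a parameter (input) node: a 1x3x224x224 float tensor.
    auto input = std::make_shared<opset1::Parameter>(element::f32, Shape{1, 3, 224, 224});

    // Apply a ReLU operation to the input.
    auto relu = std::make_shared<opset1::Relu>(input);

    // Bundle the graph into an nGraph Function.
    auto function = std::make_shared<Function>(NodeVector{relu}, ParameterVector{input});

    // Wrap the function into a CNNNetwork that the Inference Engine can execute.
    InferenceEngine::CNNNetwork network(function);
    return 0;
}
```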

Video

Inference Engine Concept. Duration: 3:43