Inference Engine Developer Guide

Introduction to the OpenVINO™ Toolkit

The OpenVINO™ toolkit is a comprehensive toolkit for developing and deploying vision-oriented solutions on Intel® platforms. Vision-oriented means the solutions use images or videos to perform specific tasks. Example use cases include autonomous navigation, digital surveillance cameras, robotics, and mixed-reality headsets.

The OpenVINO™ toolkit:

  • Enables CNN-based deep learning inference on the edge
  • Supports heterogeneous execution across an Intel® CPU, Intel® Integrated Graphics, Intel® Movidius™ Neural Compute Stick and Intel® Neural Compute Stick 2
  • Speeds time-to-market via an easy-to-use library of computer vision functions and pre-optimized kernels
  • Includes optimized calls for computer vision standards including OpenCV*, OpenCL™, and OpenVX*

The OpenVINO™ toolkit includes the following components:

  • Intel® Deep Learning Deployment Toolkit (Intel® DLDT)
    • Deep Learning Model Optimizer — A cross-platform command-line tool for importing models and preparing them for optimal execution with the Deep Learning Inference Engine. The Model Optimizer supports converting Caffe*, TensorFlow*, MXNet*, Kaldi*, and ONNX* models.
    • Deep Learning Inference Engine — A unified API that enables high-performance inference on many hardware types, including Intel® CPU, Intel® Processor Graphics, Intel® FPGA, and Intel® Neural Compute Stick 2 (a minimal usage sketch follows this list).
    • nGraph — A graph representation and manipulation engine that represents a model inside the Inference Engine and enables run-time model construction without using the Model Optimizer.
  • OpenCV — OpenCV* community version compiled for Intel® hardware. Includes PVL libraries for computer vision.
  • Drivers and runtimes for OpenCL™ version 2.1
  • Intel® Media SDK
  • OpenVX* — Intel's implementation of OpenVX* optimized for running on Intel® hardware (CPU, GPU, IPU).
  • Demos and samples.
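
To make the Inference Engine's unified API concrete, the following C++ sketch reads a model in the Inference Engine IR format and runs one inference on the CPU. It is a minimal sketch, not a complete sample: the file name model.xml and the device name "CPU" are placeholder assumptions, input data preparation is omitted, and no error handling is shown.

    #include <inference_engine.hpp>

    int main() {
        // The Core object discovers and manages the available inference devices
        InferenceEngine::Core core;

        // Read an IR produced by the Model Optimizer ("model.xml" is a placeholder;
        // the matching "model.bin" weights file is found automatically)
        InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");

        // Compile the network for a specific device; the same call works for
        // other device names such as "GPU" or "MYRIAD", which is what the
        // unified API means in practice
        InferenceEngine::ExecutableNetwork executable = core.LoadNetwork(network, "CPU");

        // Create a request, fill its input blobs (omitted here), and run inference
        InferenceEngine::InferRequest request = executable.CreateInferRequest();
        request.Infer();

        return 0;
    }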

This Guide provides an overview of the Inference Engine, describes the typical workflow for performing inference on a pre-trained and optimized deep learning model, and introduces a set of sample applications.

NOTES:

  • Before you perform inference with the Inference Engine, your models should be converted to the Inference Engine format using the Model Optimizer or built directly at run time using the nGraph API (see the sketch after these notes). To learn how to use the Model Optimizer, refer to the Model Optimizer Developer Guide. To learn about the pre-trained and optimized models delivered with the OpenVINO™ toolkit, refer to Pre-Trained Models.
  • Intel® System Studio is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ with Intel® System Studio, go to Get Started with Intel® System Studio.
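
As a companion to the run-time model construction option mentioned above, the following C++ sketch builds a trivial function with the nGraph API and loads it without involving the Model Optimizer. The single-ReLU topology, the input shape, and the "CPU" device name are illustrative assumptions; a real application would construct a complete network and fill its input blobs before running inference.

    #include <inference_engine.hpp>
    #include <ngraph/ngraph.hpp>
    #include <memory>

    int main() {
        using namespace ngraph;

        // Describe a trivial function out = ReLU(in) directly in code,
        // without going through the Model Optimizer
        auto input = std::make_shared<op::Parameter>(element::f32, Shape{1, 3, 224, 224});
        auto relu  = std::make_shared<op::Relu>(input);
        auto fn    = std::make_shared<Function>(OutputVector{relu},
                                                ParameterVector{input},
                                                "relu_only");

        // Wrap the nGraph function in a CNNNetwork and load it as usual
        InferenceEngine::CNNNetwork network(fn);
        InferenceEngine::Core core;
        auto executable = core.LoadNetwork(network, "CPU");  // "CPU" is a placeholder device

        return 0;
    }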

Table of Contents

Typical Next Step: Introduction to Inference Engine