Introduction to Intel® Deep Learning Deployment Toolkit

Deployment Challenges

Deploying deep learning networks from the training environment to embedded platforms for inference can be a complex task that introduces a number of technical challenges that must be addressed.

Deployment Workflow

The process assumes that you have a network model trained using one of the supported frameworks. The scheme below illustrates the typical workflow for deploying a trained deep learning model:

Figure: Typical workflow for deploying a trained deep learning model (workflow_steps.png)

The steps are:

  1. Configure Model Optimizer for the framework that was used to train your model.
  2. Run Model Optimizer to produce an optimized Intermediate Representation (IR) of the model based on the trained network topology, weights and biases values, and other optional parameters (see the sketch after this list).
  3. Test the model in the IR format using the Inference Engine in the target environment with the provided Inference Engine sample applications.
  4. Integrate the Inference Engine into your application to deploy the model in the target environment.
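
The conversion in step 2 is driven from the command line. The sketch below shows one way to script it from Python for a frozen TensorFlow model; the mo.py location, the input model name, and the flag values are assumptions that depend on your installation and framework, so treat it as an illustration rather than a prescribed invocation.

```python
# Hypothetical sketch: invoking Model Optimizer on a frozen TensorFlow model.
# The mo.py location and the exact flags depend on your toolkit version and
# installation path -- adjust them to match your environment.
import subprocess

MO_SCRIPT = "/opt/intel/deployment_tools/model_optimizer/mo.py"  # assumed install path

subprocess.run(
    [
        "python3", MO_SCRIPT,
        "--input_model", "frozen_model.pb",   # trained network (TensorFlow example)
        "--input_shape", "[1,224,224,3]",     # optional: fix the input dimensions
        "--data_type", "FP16",                # optional: convert weights to FP16
        "--output_dir", "ir/",                # where the .xml / .bin IR files are written
    ],
    check=True,
)
```

The same approach applies to the other supported frameworks; only the framework-specific options change.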

Model Optimizer

Model Optimizer is a cross-platform command-line tool that facilitates the transition between the training and deployment environments. It performs static model analysis and automatically adjusts deep learning models for optimal execution on end-point target devices.

Model Optimizer is designed to support multiple deep learning frameworks and formats.

When running Model Optimizer, you do not need to consider which target device you will use for inference: the same Model Optimizer output can be deployed on all supported targets.

Model Optimizer Workflow

The process assumes that you have a network model trained using one of the supported frameworks. Model Optimizer loads the trained model, analyzes and adjusts it, and produces Intermediate Representation (IR) files as output.

The resulting IR files can be read, loaded, and inferred with the Inference Engine. The Inference Engine offers a unified API across a number of supported Intel® platforms.
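
As a minimal sketch of that portability, the snippet below loads one IR on two different devices by changing only the device name. It uses the IECore class from the Inference Engine Python API as shipped with recent toolkit releases; the class and method names, the model file names, and the available device plugins are assumptions that may differ in your installation.

```python
# Minimal sketch: one IR, multiple targets, selected only by device name.
# IECore and its methods follow the Inference Engine Python API of recent
# releases; adjust names if your toolkit version differs.
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # IR produced by Model Optimizer

# Deploy the same network to different targets by changing the device name.
for device in ("CPU", "GPU"):
    exec_net = ie.load_network(network=net, device_name=device)
    print("Loaded the network on", device)
```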

Supported Frameworks and Formats

Supported Models

For the list of supported models, refer to the framework- or format-specific pages.

Inference Engine

Inference Engine is a runtime that delivers a unified API for integrating inference with your application logic.

The Inference Engine supports inference of multiple image classification networks, including the AlexNet, GoogLeNet, VGG, and ResNet families of networks; fully convolutional networks such as FCN8, used for image segmentation; and object detection networks such as Faster R-CNN.

For the full list of supported hardware, refer to the Supported Devices section.

The Inference Engine package contains headers, runtime libraries, and sample console applications demonstrating how you can use the Inference Engine in your applications.
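
The sketch below outlines the shape of such an application for an image classification network, using the Inference Engine Python API (IECore). The model file names, the random input tensor, and the assumption of a single NCHW input and a single probability output are placeholders; a real application would preprocess an image exactly as the network expects, as the sample applications demonstrate.

```python
# Minimal classification sketch with the Inference Engine Python API (IECore).
# Paths, blob layout, and preprocessing are assumptions -- a real application
# must resize and normalize the image exactly as the network expects.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="alexnet.xml", weights="alexnet.bin")
input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))

exec_net = ie.load_network(network=net, device_name="CPU")

# Dummy NCHW input; replace with a preprocessed image of the expected shape.
n, c, h, w = net.input_info[input_name].input_data.shape
image = np.random.rand(n, c, h, w).astype(np.float32)

result = exec_net.infer(inputs={input_name: image})
probs = result[output_name].squeeze()
top5 = probs.argsort()[-5:][::-1]
print("Top-5 class indices:", top5)
```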

See Also

Optimization Notice

For complete information about compiler optimizations, see our Optimization Notice.