Model Optimizer Developer Guide

Model Optimizer is a cross-platform command-line tool that facilitates the transition between the training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices.

The Model Optimizer process assumes you have a network model trained using one of the supported deep learning frameworks. The diagram below illustrates the typical workflow for deploying a trained deep learning model:

[Figure: workflow_steps.png — typical workflow for deploying a trained deep learning model]
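For example, the conversion step of this workflow can be scripted as in the minimal sketch below. The mo.py location, input model, and output directory are placeholders to adapt to your installation; --input_model and --output_dir are standard Model Optimizer options.

```python
import subprocess

# Location of the Model Optimizer entry point (install-dependent placeholder).
MO_SCRIPT = "/opt/intel/openvino/deployment_tools/model_optimizer/mo.py"

subprocess.run(
    [
        "python3", MO_SCRIPT,
        "--input_model", "my_model.caffemodel",  # trained model from a supported framework
        "--output_dir", "ir/",                   # directory for the generated .xml/.bin IR files
    ],
    check=True,  # raise CalledProcessError if the conversion fails
)
```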

Model Optimizer produces an Intermediate Representation (IR) of the network, which can be read, loaded, and inferred with the Inference Engine. The Inference Engine offers a unified API across a number of supported Intel® platforms. The Intermediate Representation is a pair of files describing the model:

- .xml: describes the network topology
- .bin: contains the weights and biases binary data
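A minimal sketch of loading an IR pair with the Inference Engine Python API is shown below. It assumes the IECore-based API (class and method names have varied across releases); the file paths and device name are placeholders.

```python
from openvino.inference_engine import IECore

ie = IECore()

# Read the IR pair produced by Model Optimizer (paths are placeholders).
net = ie.read_network(model="ir/my_model.xml", weights="ir/my_model.bin")

# Compile the network for a target device; "CPU" could also be "GPU",
# "MYRIAD", etc., depending on the installed plugins.
exec_net = ie.load_network(network=net, device_name="CPU")

# exec_net.infer(...) then runs inference with the prepared input blobs.
```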

What's New in the Model Optimizer in this Release?

Notice that certain topology-specific layers (such as DetectionOutput, used in SSD* models) are now shipped as source code, which assumes the extensions library is compiled and loaded. The extensions are also required for inference with the pre-trained models.
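As an illustration, the sketch below loads a compiled extensions library before reading an SSD-style network that uses DetectionOutput. The library and model paths are placeholders, and the add_extension call assumes the IECore-based Python API.

```python
from openvino.inference_engine import IECore

ie = IECore()

# Load the compiled extensions library so extension layers such as
# DetectionOutput can be resolved (library path is a placeholder).
ie.add_extension("path/to/libcpu_extension.so", "CPU")

net = ie.read_network(model="ir/ssd_model.xml", weights="ir/ssd_model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")
```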

Table of Contents

Typical Next Step: Introduction to Intel® Deep Learning Deployment Toolkit