Model Optimizer is a cross-platform command-line tool that facilitates the transition between the training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices.
The Model Optimizer process assumes you have a network model trained using a supported deep learning framework. The scheme below illustrates the typical workflow for deploying a trained deep learning model:
Model Optimizer produces an Intermediate Representation (IR) of the network, which can be read, loaded, and inferred with the Inference Engine. The Inference Engine API offers a unified API across a number of supported Intel® platforms. The Intermediate Representation is a pair of files describing the model:
- .xml: describes the network topology.
- .bin: contains the weights and biases binary data.
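For illustration, a minimal conversion run might look like the following sketch; the model file name and output directory are assumed here for the example, not taken from this document:

    # Convert a trained model to IR; model.pb and ir/ are hypothetical example paths.
    python3 mo.py --input_model model.pb --output_dir ir/
    # The output directory then contains model.xml (topology) and model.bin (weights).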
What's New in the Model Optimizer in this Release?
- ONNX*:
- Added support for the following ONNX* operations: Abs, Acos, Asin, Atan, Cast, Ceil, Cos, Cosh, Div, Erf, Floor, HardSigmoid, Log, NonMaxSuppression, OneHot, ReduceMax, ReduceProd, Resize (opset 10), Sign, Sin, Sqrt, Tan, Xor.
- Added support for the following models from the ONNX* model zoo:
- TensorFlow*:
- Added an optimization transformation that detects the Mean Value Normalization pattern for 5D input and replaces it with a single MVN layer.
- Added the ability to read TensorFlow 1.X models when TensorFlow 2.X is installed. TensorFlow 2.X models are not supported.
- Changed the command line used to convert the GNMT model. Refer to the GNMT model conversion article for more information.
- Deprecated the "--tensorflow_subgraph_patterns" and "--tensorflow_operation_patterns" command line parameters. The TensorFlow offload feature will be removed in a future release.
- Added support for the following TensorFlow* operations: Bucketize (CPU only), Cast, Cos, Cosh, ExperimentalSparseWeightedSum (CPU only), Log1p, NonMaxSuppressionV3, NonMaxSuppressionV4, NonMaxSuppressionV5, Sin, Sinh, SparseToDense (CPU only), SparseReshape (removed when input and output shapes are equal), Tan, Tanh.
- Added support for the following TensorFlow* models:
- MXNet*:
- Added support for the following MXNet* operations: UpSampling with bilinear mode, Where, _arange, _contrib_AdaptiveAvgPooling2D, div_scalar, elementwise_sub, exp, expand_dims, greater_scalar, minus_scalar, repeat, slice, slice_like, tile.
- Added support for the following MXNet* models: the YoloV3 model from the GluonCV model zoo.
- Kaldi*:
- The "--remove_output_softmax" command line parameter now triggers removal of final LogSoftmax layer in addition to a pure Softmax layer.
- Added support for the following Kaldi* operations: linearcomponent, logsoftmax.
- Added support of the following Kaldi* models:
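As a sketch of the flag mentioned above (the Kaldi model file name is a hypothetical example):

    # Remove the final (Log)Softmax layer while converting a Kaldi model.
    python3 mo.py --input_model final.nnet --remove_output_softmax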
- Common changes:
- Model Optimizer now generates IR version 10 by default (except for the Kaldi* framework, for which IR version 7 is generated) with significantly changed operation semantics. The "--generate_deprecated_IR_V7" command line parameter can be used to generate the older IR version, as shown in the example after this list. Refer to the documentation for the specification of the new operation set.
- The "--tensorflow_use_custom_operations_config" command line parameter has been renamed to "--transformations_config". The old parameter is deprecated and will be removed in a future release.
- Added the ability to specify the input data type using the "--input" command line parameter, for example, "--input placeholder{i32}[1 300 300 3]" (also shown below). Refer to the documentation for more examples.
- The IR v7 XML file format has been updated: a layer output port now has a "precision" attribute that specifies the data type of the produced tensor.
- Added support for the FusedBatchNorm operation in training mode.
- Added an optimization transformation that removes useless Concat+Split sub-graphs.
- A number of graph transformations were moved from the Model Optimizer to the Inference Engine.
- Fixed NetworkX 2.4+ compatibility issues.
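The following sketch illustrates the command line changes listed above; the model file name is a hypothetical example:

    # Generate the deprecated IR v7 instead of the default IR v10.
    python3 mo.py --input_model model.pb --generate_deprecated_IR_V7
    # Specify the input data type and shape with the extended --input syntax.
    python3 mo.py --input_model model.pb --input "placeholder{i32}[1 300 300 3]"
    # Use the renamed parameter (the configuration file name is also hypothetical).
    python3 mo.py --input_model model.pb --transformations_config custom_transformations.json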
NOTE: Intel® System Studio is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ with Intel® System Studio, go to Get Started with Intel® System Studio.
Typical Next Step: Introduction to Intel® Deep Learning Deployment Toolkit