Model Optimizer is a cross-platform command-line tool that facilitates the transition between training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices.
The Model Optimizer process assumes you have a network model trained using a supported deep learning framework. The typical workflow for deploying a trained deep learning model is to convert it once with the Model Optimizer and then run the converted model with the Inference Engine on the target device.
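For illustration, the conversion step driven from Python might look like the following minimal sketch. The `mo.py` location, the input model file, and the output directory are placeholders rather than values taken from this document:

```python
# Minimal sketch: invoke Model Optimizer on a trained TensorFlow model.
# Paths and file names below are examples, not fixed conventions.
import subprocess

subprocess.run(
    [
        "python3", "mo.py",            # Model Optimizer entry point
        "--input_model", "model.pb",   # trained model exported from the framework
        "--output_dir", "ir",          # model.xml and model.bin are written here
    ],
    check=True,                        # raise CalledProcessError if conversion fails
)
```

The same command is usually run directly from a shell; wrapping it in `subprocess` simply makes the step reproducible from a Python build or test script.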
Model Optimizer produces an Intermediate Representation (IR) of the network, which can be read, loaded, and inferred with the Inference Engine. The Inference Engine API offers a unified API across a number of supported Intel® platforms. The Intermediate Representation is a pair of files describing the model:
* `.xml` - Describes the network topology
* `.bin` - Contains the weights and biases binary data
To get an expanded (uncompressed) version of the weights, use the `--disable_weights_compression` Model Optimizer command-line parameter.
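Once both files exist, loading and running the IR with the Inference Engine Python API can be sketched as follows. The file names and the zero-filled input are placeholders, and attribute names vary slightly across releases (older releases expose the inputs dictionary as `net.inputs` rather than `net.input_info`):

```python
# Minimal sketch: read an IR pair and run one inference on CPU.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # the IR pair
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))                   # first network input
input_shape = net.input_info[input_name].input_data.shape

data = np.zeros(input_shape, dtype=np.float32)            # stand-in input data
results = exec_net.infer(inputs={input_name: data})       # dict: output name -> ndarray
print({name: out.shape for name, out in results.items()})
```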
Notable recent Model Optimizer changes:

* Fused sub-graphs containing the `Erf` operation into a single operation.
* Fused sequences of `Concat` operations into a single `Concat`.
* Fixed IR generation for several operations, including `ReorgYolo`. They became a part of the new `opset2` operation set and are generated with `version="opset2"`. Before this fix, the operations were generated with `version="opset1"` by mistake, although they were not a part of the `opset1` nGraph namespace; the `opset1` specification was fixed accordingly.
* Added generation of `MeanVarianceNormalization` when normalization is performed over spatial dimensions.
* Added support for `Reshape` with input shape values equal to -2, -3, and -4 (illustrated in the sketch after this list).
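The -2, -3, and -4 values follow the MXNet `Reshape` convention (an assumption here, since the item above does not name the framework): -2 copies all remaining input dimensions, -3 merges two consecutive dimensions into one, and -4 splits one dimension into two. A small sketch:

```python
# Minimal sketch of MXNet-style Reshape special values (assumed convention).
import mxnet as mx

x = mx.nd.zeros((2, 3, 4))

# -3 merges the first two dimensions: (2, 3, 4) -> (6, 4)
print(x.reshape((-3, 4)).shape)

# -4 splits dimension 0 into (1, 2); -2 copies the remaining (3, 4)
print(x.reshape((-4, 1, 2, -2)).shape)   # -> (1, 2, 3, 4)
```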
> **NOTE**: Intel® System Studio is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ toolkit with Intel® System Studio, go to Get Started with Intel® System Studio.
Typical Next Step: Introduction to Intel® Deep Learning Deployment Toolkit