Use the mo.py script from the <INSTALL_DIR>/deployment_tools/model_optimizer directory to run the Model Optimizer and convert the model to the Intermediate Representation (IR). The simplest way to convert a model is to run mo.py with a path to the input model file:
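A minimal invocation might look like the following sketch; `INPUT_MODEL` is a placeholder for the path to your model file:

```shell
# Run from the <INSTALL_DIR>/deployment_tools/model_optimizer directory.
# INPUT_MODEL is a placeholder for the actual model file path.
python3 mo.py --input_model INPUT_MODEL
```

The generated IR files (.xml and .bin) are written to the current directory unless an output directory is specified.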
NOTE: Some models require additional arguments to specify conversion parameters, such as --scale, --scale_values, --mean_values, and --mean_file. To learn when you need to use these parameters, refer to Converting a Model Using General Conversion Parameters.
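As an illustration, a model whose inputs expect mean subtraction and scaling could be converted along these lines; the model file name and the mean/scale values below are hypothetical:

```shell
# Hypothetical example: subtract per-channel mean values and divide by a
# scale factor during conversion. The file name and numbers are placeholders.
python3 mo.py --input_model model.caffemodel \
    --mean_values [123.68,116.78,103.94] \
    --scale 58.8
```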
The mo.py script is the universal entry point that can deduce the framework that produced the input model by the standard extension of the model file:

- .caffemodel - Caffe* models
- .pb - TensorFlow* models
- .params - MXNet* models
- .onnx - ONNX* models
- .nnet - Kaldi* models

If the model files do not have standard extensions, you can use the --framework {tf,caffe,kaldi,onnx,mxnet} option to specify the framework type explicitly.
For example, the following commands are equivalent:
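A sketch of such an equivalent pair, using an illustrative TensorFlow model path: with a standard .pb extension the framework is deduced automatically, so passing --framework tf explicitly produces the same result.

```shell
# Framework deduced from the .pb extension (path is illustrative):
python3 mo.py --input_model /user/models/model.pb

# Framework specified explicitly; equivalent to the command above:
python3 mo.py --framework tf --input_model /user/models/model.pb
```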
To adjust the conversion process, you may use the general parameters defined in Converting a Model Using General Conversion Parameters and the framework-specific parameters for: