Legacy Conversion API¶
Note
This part of the documentation describes the legacy approach to model conversion. Starting with OpenVINO 2023.1, a simpler alternative API for model conversion is available: openvino.convert_model and the OpenVINO Model Converter (ovc) CLI tool. Refer to Model preparation for more details. If you are still using openvino.tools.mo.convert_model or the mo CLI tool, you can continue to use this documentation. However, consider checking the transition guide to learn how to migrate from the legacy conversion API to the new one. Depending on the model topology, the new API can be the better option for you.
To convert a model to the OpenVINO model format (ov.Model), you can use one of the following commands:

```python
from openvino.tools.mo import convert_model

ov_model = convert_model(INPUT_MODEL)
```

```shell
mo --input_model INPUT_MODEL
```
If the out-of-the-box conversion (only the input_model parameter is specified) is not successful, use the parameters mentioned below to override input shapes and cut the model:

- input and input_shape - the model conversion API parameters used to override original input shapes for model conversion. For more information about the parameters, refer to the Setting Input Shapes guide.
- input and output - the model conversion API parameters used to define new inputs and outputs of the converted model to cut off unwanted parts (such as unsupported operations and training sub-graphs). For a more detailed description, refer to the Cutting Off Parts of a Model guide.
- mean_values, scale_values, layout - the parameters used to insert additional input pre-processing sub-graphs into the converted model. For more details, see the Embedding Preprocessing Computation article.
- compress_to_fp16 - a compression parameter in the mo command-line tool, which allows generating IR with constants (for example, weights for convolutions and matrix multiplications) compressed to the FP16 data type. For more details, refer to the Compression of a Model to FP16 guide.
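The storage trade-off behind compress_to_fp16 can be illustrated without OpenVINO at all, using Python's built-in half-precision struct format (a minimal sketch of the idea; the actual IR serialization is handled by the conversion tool):

```python
import struct

# compress_to_fp16 stores model constants (e.g. convolution weights) as
# 16-bit floats instead of 32-bit, halving the size of the weights data.
weight = 0.1234567  # a hypothetical single weight value

fp32_bytes = struct.pack("<f", weight)  # 4 bytes per value
fp16_bytes = struct.pack("<e", weight)  # 2 bytes per value
print(len(fp32_bytes), len(fp16_bytes))  # 4 2

# Round-tripping through FP16 introduces a small precision loss:
restored = struct.unpack("<e", fp16_bytes)[0]
print(abs(restored - weight))  # small, but non-zero
```

This is why FP16 compression roughly halves the model size on disk while usually keeping accuracy close to the FP32 original.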
To get the full list of conversion parameters, run the following command:
```python
from openvino.tools.mo import convert_model

ov_model = convert_model(help=True)
```

```shell
mo --help
```
Examples of model conversion parameters¶
Below is a list of separate examples for different frameworks and model conversion parameters:
Launch model conversion for a TensorFlow MobileNet model in the binary protobuf format:
```python
from openvino.tools.mo import convert_model

ov_model = convert_model("MobileNet.pb")
```

```shell
mo --input_model MobileNet.pb
```
Launch model conversion for a TensorFlow BERT model in the SavedModel format with three inputs. Specify input shapes explicitly where the batch size and the sequence length equal 2 and 30 respectively:
```python
from openvino.tools.mo import convert_model

ov_model = convert_model("BERT", input_shape=[[2,30],[2,30],[2,30]])
```

```shell
mo --saved_model_dir BERT --input_shape [2,30],[2,30],[2,30]
```
For more information, refer to the Converting a TensorFlow Model guide.
Launch model conversion for an ONNX OCR model and specify new output explicitly:
```python
from openvino.tools.mo import convert_model

ov_model = convert_model("ocr.onnx", output="probabilities")
```

```shell
mo --input_model ocr.onnx --output probabilities
```
For more information, refer to the Converting an ONNX Model guide.
Note
PyTorch models must be exported to the ONNX format before conversion into IR. More information can be found in Converting a PyTorch Model.
Launch model conversion for a PaddlePaddle UNet model and apply mean-scale normalization to the input:
```python
from openvino.tools.mo import convert_model

ov_model = convert_model("unet.pdmodel", mean_values=[123,117,104], scale=255)
```

```shell
mo --input_model unet.pdmodel --mean_values [123,117,104] --scale 255
```
For more information, refer to the Converting a PaddlePaddle Model guide.
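Numerically, the mean-scale normalization shown above computes (x - mean) / scale per channel before the original network runs. A pure-Python sketch on one RGB pixel (the mean and scale values match the flags in the example; the pixel values are made up):

```python
mean_values = [123, 117, 104]  # per-channel means, as in --mean_values
scale = 255                    # common divisor, as in --scale

pixel = [255, 117, 0]  # hypothetical R, G, B input values

# The inserted pre-processing sub-graph computes (x - mean) / scale:
normalized = [(x - m) / scale for x, m in zip(pixel, mean_values)]
print(normalized)  # [0.517..., 0.0, -0.407...]
```

With convert_model, this computation is embedded into the converted model itself, so the application does not need to normalize inputs at inference time.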
To get conversion recipes for specific TensorFlow, ONNX, and PyTorch models, refer to the Model Conversion Tutorials.
For more information about IR, see Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™.
For more information about the support of neural network models trained with various frameworks, see the OpenVINO Extensibility Mechanism.