[LEGACY] Converting an ONNX Model¶
The code described here has been deprecated! Avoid using it, as it is a legacy solution. It will be kept for some time to ensure backwards compatibility, but you should not use it in contemporary applications.
This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the Converting an ONNX Model article.
ONNX models are supported via the FrontEnd API. You may skip conversion to IR and read models directly with the OpenVINO Runtime API. Refer to the inference example for more details. Using convert_model is still necessary in more complex cases, such as new custom inputs/outputs in model pruning, adding pre-processing, or using Python conversion extensions.
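For instance, reading an ONNX file directly, without producing an IR first, might look like the following sketch (the model file name is a placeholder):

from openvino.runtime import Core

core = Core()
# Read the ONNX model directly; no intermediate IR files are produced.
model = core.read_model("model.onnx")
# Compile for inference, letting OpenVINO select a device automatically.
compiled_model = core.compile_model(model, "AUTO")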
Converting an ONNX Model¶
The model conversion process assumes you have an ONNX model that was directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format.
To convert an ONNX model, run the convert_model() method with the path to the .onnx file:
from openvino.runtime import Core
from openvino.tools.mo import convert_model
ov_model = convert_model("<INPUT_MODEL>.onnx")
compiled_model = Core().compile_model(ov_model, "AUTO")
The convert_model() method returns an ov.Model object that you can optimize, compile, or save to a file for subsequent use.
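Saving the converted model to IR for later reuse might look like this sketch, which assumes the ov_model object from the snippet above (the output file name is a placeholder):

from openvino.runtime import serialize

# Write the model as OpenVINO IR; a matching .bin file is created alongside the .xml.
serialize(ov_model, "model.xml")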
You can also use the mo command-line tool to convert a model to IR. The obtained IR can then be read by read_model() and inferred:
mo --input_model <INPUT_MODEL>.onnx
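Reading the resulting IR back is then a plain read_model() call; a minimal sketch, assuming mo produced <INPUT_MODEL>.xml with a matching .bin file next to it:

from openvino.runtime import Core

core = Core()
# Read the IR produced by mo and compile it for inference.
model = core.read_model("<INPUT_MODEL>.xml")
compiled_model = core.compile_model(model, "AUTO")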
There are no ONNX-specific parameters, so only framework-agnostic parameters are available to convert your model. For details, see the General Conversion Parameters section in the Converting a Model to Intermediate Representation (IR) guide.
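For instance, input_shape is one such framework-agnostic parameter; a sketch of overriding the input shape during conversion (the shape value is only illustrative):

from openvino.tools.mo import convert_model

# input_shape is a general (framework-agnostic) conversion parameter.
# [1, 3, 224, 224] is only an example value, not a requirement.
ov_model = convert_model("<INPUT_MODEL>.onnx", input_shape=[1, 3, 224, 224])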
Supported ONNX Layers¶
For the list of supported standard layers, refer to the Supported Operations page.