[LEGACY] Converting an ONNX Model#

Danger

The code described here is deprecated! Avoid using it, so that you do not build on a legacy solution. It will be kept for some time to ensure backwards compatibility, but you should not use it in contemporary applications.

This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the Converting an ONNX Model article.

Note

ONNX models are supported via the FrontEnd API. You may skip conversion to IR and read models directly with the OpenVINO Runtime API. Refer to the inference example for more details. Using convert_model is still necessary in more complex cases, such as defining new custom inputs/outputs for model pruning, adding pre-processing, or using Python conversion extensions.
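
For instance, the following is a minimal sketch of reading an ONNX file directly with the runtime API, with no prior conversion to IR (the file name model.onnx is a placeholder):

import openvino

core = openvino.Core()
# read_model() accepts ONNX files directly through the ONNX frontend
model = core.read_model("model.onnx")
compiled_model = core.compile_model(model, "AUTO")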

Converting an ONNX Model#

The model conversion process assumes you have an ONNX model that was directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format.

To convert an ONNX model, run the convert_model() method with the path to the <INPUT_MODEL>.onnx file:

import openvino
from openvino.tools.mo import convert_model

core = openvino.Core()
# Convert the ONNX model to an in-memory ov.Model object
ov_model = convert_model("<INPUT_MODEL>.onnx")
# Compile the model for inference, letting OpenVINO select the device
compiled_model = core.compile_model(ov_model, "AUTO")

Important

The convert_model() method returns an ov.Model object that you can optimize, compile, or save to a file for subsequent use.
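
For example, here is a minimal sketch of saving the converted model to IR files for later reuse. It assumes openvino.save_model is available (OpenVINO 2023.1 or later); the output file name model.xml is a placeholder:

import openvino
from openvino.tools.mo import convert_model

ov_model = convert_model("<INPUT_MODEL>.onnx")
# Writes model.xml plus a matching model.bin weights file
openvino.save_model(ov_model, "model.xml")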

You can also use the mo command-line tool to convert a model to IR. The resulting IR can then be read with read_model() and inferred, as shown in the sketch after the command below.

mo --input_model <INPUT_MODEL>.onnx
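
The IR produced by this command can then be loaded and compiled as follows (file names are placeholders):

import openvino

core = openvino.Core()
# The matching <INPUT_MODEL>.bin weights file is found automatically
model = core.read_model("<INPUT_MODEL>.xml")
compiled_model = core.compile_model(model, "AUTO")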

There are no ONNX-specific parameters, so only framework-agnostic parameters are available to convert your model. For details, see the General Conversion Parameters section in the Converting a Model to Intermediate Representation (IR) guide.
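
As an illustration, here is a sketch that passes the framework-agnostic input parameter to override the input name and shape during conversion; the input name "data" and the shape are hypothetical values for your model:

from openvino.tools.mo import convert_model

# Override the model input using the framework-agnostic `input` parameter
ov_model = convert_model("<INPUT_MODEL>.onnx", input=[("data", [1, 3, 224, 224])])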

Supported ONNX Layers#

For the list of supported standard layers, refer to the Supported Operations page.

Additional Resources#

See the Model Conversion Tutorials page for a set of tutorials providing step-by-step instructions for converting specific ONNX models.