Converting an ONNX Model

Introduction to ONNX

ONNX* is a representation format for deep learning models. ONNX allows AI developers to easily transfer models between different frameworks, helping them choose the best combination of tools for their workflow. Today, PyTorch*, Caffe2*, Apache MXNet*, Microsoft Cognitive Toolkit*, and other tools are developing ONNX support.

This page gives instructions on how to convert a model from the ONNX format to the OpenVINO IR format using Model Optimizer. To use Model Optimizer, install OpenVINO Development Tools by following the OpenVINO installation instructions.

ONNX models are directly compatible with OpenVINO Runtime and can be loaded in their native .onnx format using net = ie.read_model("model.onnx"). The benefit of converting ONNX models to the OpenVINO IR format is that it allows them to be easily optimized for target hardware with advanced OpenVINO tools such as NNCF.
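
For instance, a minimal sketch of loading an ONNX model directly with the OpenVINO Runtime Python API might look like the following (the model filename and device name are illustrative):

from openvino.runtime import Core

ie = Core()                                      # create the OpenVINO Runtime core
model = ie.read_model("model.onnx")              # load the ONNX model without prior conversion
compiled_model = ie.compile_model(model, "CPU")  # compile the model for a target device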

Convert an ONNX* Model

The Model Optimizer process assumes you have an ONNX model that was directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format.
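
For example, here is a minimal sketch of exporting a PyTorch model to ONNX (the resnet18 model and the file name are illustrative; it assumes PyTorch and torchvision are installed):

import torch
import torchvision

# Load an example pretrained model and switch it to inference mode.
model = torchvision.models.resnet18(pretrained=True)
model.eval()

# A dummy input tensor matching the model's expected input shape.
dummy_input = torch.randn(1, 3, 224, 224)

# Trace the model and save it in the ONNX format.
torch.onnx.export(model, dummy_input, "resnet18.onnx")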

To convert an ONNX* model, run Model Optimizer with the path to the input model .onnx file:

mo --input_model <INPUT_MODEL>.onnx
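
For example, assuming the model file is named model.onnx (an illustrative filename), the command would be:

mo --input_model model.onnx

Model Optimizer writes the converted model as a pair of files: an .xml file describing the network topology and a .bin file containing the weights.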

There are no ONNX*-specific conversion parameters, so only framework-agnostic parameters are available to convert your model. For details, see the General Conversion Parameters section on the Converting a Model to Intermediate Representation (IR) page.
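
For instance, framework-agnostic options such as --input_shape (to override the model's input shape) and --output_dir (to choose where the IR files are written) can be added to the basic command; the shape and directory below are illustrative:

mo --input_model model.onnx --input_shape [1,3,224,224] --output_dir ./ir_model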

Supported ONNX* Layers

Refer to Supported Framework Layers for the list of supported standard layers.

See Also

This page provided general instructions for converting ONNX models. See the Model Conversion Tutorials page for a set of tutorials that give step-by-step instructions for converting specific ONNX models.