Converting an ONNX* Model

Introduction to ONNX

ONNX* is a representation format for deep learning models. ONNX allows AI developers to easily transfer models between different frameworks, which helps them choose the best combination of tools for their needs. Today, PyTorch*, Caffe2*, Apache MXNet*, Microsoft Cognitive Toolkit*, and other tools are developing ONNX support.

Supported Public ONNX Topologies

Model Name                     Path to Public Models (master branch)
bert_large                     model archive
bvlc_alexnet                   model archive
bvlc_googlenet                 model archive
bvlc_reference_caffenet        model archive
bvlc_reference_rcnn_ilsvrc13   model archive
inception_v1                   model archive
inception_v2                   model archive
resnet50                       model archive
squeezenet                     model archive
densenet121                    model archive
emotion_ferplus                model archive
mnist                          model archive
shufflenet                     model archive
VGG19                          model archive
zfnet512                       model archive
GPT-2                          model archive
YOLOv3                         model archive

The models listed above are built with operation set version 8, except for the GPT-2 model. Models that are upgraded to higher operation set versions may not be supported.
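
If you are not sure which operation set version a downloaded model uses, you can inspect it with the onnx Python package (a minimal sketch; it assumes the package is installed and the model file is named model.onnx):

    import onnx

    # Load the model and print each operator set it imports;
    # an empty domain string means the default ai.onnx domain.
    model = onnx.load("model.onnx")
    for opset in model.opset_import:
        print(opset.domain or "ai.onnx", opset.version)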

Supported PaddlePaddle* Models via ONNX Conversion

Starting from the R5 release, the OpenVINO™ toolkit officially supports public PaddlePaddle* models via ONNX conversion. The list of supported topologies downloadable from PaddleHub is presented below:

Model Name    Command to download the model from PaddleHub
MobileNetV2   hub install mobilenet_v2_imagenet==1.0.1
ResNet18      hub install resnet_v2_18_imagenet==1.0.0
ResNet34      hub install resnet_v2_34_imagenet==1.0.0
ResNet50      hub install resnet_v2_50_imagenet==1.0.1
ResNet101     hub install resnet_v2_101_imagenet==1.0.1
ResNet152     hub install resnet_v2_152_imagenet==1.0.1

NOTE: To convert a model downloaded from PaddleHub, use the paddle2onnx converter.
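
A typical invocation looks like the following (a sketch only: the flag names reflect a recent paddle2onnx release and may vary between versions, and the directory and file names are placeholders):

    paddle2onnx --model_dir ./inference_model \
                --save_file mobilenet_v2.onnx \
                --opset_version 9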

The list of supported topologies from the models v1.5 package:

NOTE: To convert these topologies, first serialize the model by calling the paddle.fluid.io.save_inference_model command, and then use the paddle2onnx converter.
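
For illustration, a minimal serialization sketch using the PaddlePaddle 1.x fluid API is shown below; the trivial one-layer network and the ./inference_model directory are placeholders for your own trained program:

    import paddle.fluid as fluid

    # Placeholder network: replace with your trained program.
    image = fluid.layers.data(name="image", shape=[3, 224, 224], dtype="float32")
    out = fluid.layers.fc(input=image, size=10)

    exe = fluid.Executor(fluid.CPUPlace())
    exe.run(fluid.default_startup_program())

    # Serialize the inference program and parameters to ./inference_model,
    # which can then be passed to the paddle2onnx converter.
    fluid.io.save_inference_model(
        dirname="./inference_model",
        feeded_var_names=["image"],
        target_vars=[out],
        executor=exe)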

Convert an ONNX* Model

The Model Optimizer process assumes you have an ONNX model that was directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format.
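
For example, if your model comes from PyTorch*, you can export it to the ONNX format with torch.onnx.export before running the Model Optimizer (a minimal sketch; the torchvision model, input shape, and opset version are illustrative choices, not requirements):

    import torch
    import torchvision

    # Export a pretrained torchvision ResNet-50 to ONNX.
    model = torchvision.models.resnet50(pretrained=True)
    model.eval()
    dummy_input = torch.randn(1, 3, 224, 224)  # example input shape
    torch.onnx.export(model, dummy_input, "resnet50.onnx", opset_version=9)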

To convert an ONNX* model:

  1. Go to the <INSTALL_DIR>/deployment_tools/model_optimizer directory.
  2. Use the mo.py script to simply convert a model, specifying the path to the input model .onnx file:
    python3 mo.py --input_model <INPUT_MODEL>.onnx

There are no ONNX*-specific parameters, so only framework-agnostic parameters are available to convert your model.
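
For example, you can combine the conversion command with common framework-agnostic parameters such as --input_shape and --output_dir (a sketch; the model name and values shown are placeholders):

    python3 mo.py --input_model resnet50.onnx --input_shape [1,3,224,224] --output_dir <OUTPUT_DIR>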

Supported ONNX* Layers

Refer to Supported Framework Layers for the list of supported standard layers.