Converting an ONNX* Model¶
Introduction to ONNX¶
ONNX* is a representation format for deep learning models. ONNX allows AI developers to easily transfer models between different frameworks and choose the best combination of tools for their task. Today, PyTorch*, Caffe2*, Apache MXNet*, Microsoft Cognitive Toolkit* and other tools are developing ONNX support.
Supported Public ONNX Topologies¶
| Model Name | Path to Public Models master branch |
|---|---|
| bert_large | |
| bvlc_alexnet | |
| bvlc_googlenet | |
| bvlc_reference_caffenet | |
| bvlc_reference_rcnn_ilsvrc13 | |
| inception_v1 | |
| inception_v2 | |
| resnet50 | |
| squeezenet | |
| densenet121 | |
| emotion_ferplus | |
| mnist | |
| shufflenet | |
| VGG19 | |
| zfnet512 | |
| GPT-2 | |
| YOLOv3 | |
The listed models are built with operator set version 8, except the GPT-2 model, which uses version 10. Models upgraded to higher operator set versions may not be supported.
Supported PaddlePaddle* Models via ONNX Conversion¶
Starting from the R5 release, the OpenVINO™ toolkit officially supports public PaddlePaddle* models via ONNX conversion. The list of supported topologies downloadable from PaddleHub is presented below:
| Model Name | Command to download the model from PaddleHub |
|---|---|
Note

To convert a model downloaded from PaddleHub, use the paddle2onnx converter.
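As a sketch of that conversion step, a PaddleHub model that has been saved to an inference directory can typically be converted with the paddle2onnx command-line tool. The directory and file names below are placeholders, and the exact flags may vary between paddle2onnx versions:

```shell
# Hypothetical paths: substitute the inference directory produced when
# the PaddleHub model was serialized.
paddle2onnx --model_dir ./inference_model \
            --save_file model.onnx \
            --opset_version 8
```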
The list of supported topologies from the models v1.5 package:
Note

To convert these topologies, first serialize the model by calling the paddle.fluid.io.save_inference_model (description) command, and then use the paddle2onnx converter.
Convert an ONNX* Model¶
The Model Optimizer process assumes you have an ONNX model that was directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format.
To convert an ONNX* model, run Model Optimizer with the path to the input model .onnx
file and an output directory where you have write permissions:
```shell
cd <INSTALL_DIR>/deployment_tools/model_optimizer/
python3 mo.py --input_model <INPUT_MODEL>.onnx --output_dir <OUTPUT_MODEL_DIR>
```

or, if Model Optimizer was installed as a Python package:

```shell
mo --input_model <INPUT_MODEL>.onnx --output_dir <OUTPUT_MODEL_DIR>
```
There are no ONNX*-specific parameters, so only framework-agnostic parameters are available to convert your model. For details, see the General Conversion Parameters section on the Converting a Model to Intermediate Representation (IR) page.
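As an illustrative sketch of those framework-agnostic parameters, a conversion that overrides the input shape and requests FP16 weights might look like this (the model name, shape, and output path are placeholders):

```shell
# Hypothetical invocation: --input_shape overrides the shape baked into
# the model, and --data_type FP16 produces half-precision IR weights.
mo --input_model model.onnx \
   --input_shape "[1,3,224,224]" \
   --data_type FP16 \
   --output_dir ./ir
```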
Supported ONNX* Layers¶
Refer to Supported Framework Layers for the list of supported standard layers.