Converting a PaddlePaddle Model¶
This page provides general instructions on how to convert a model from the PaddlePaddle format to the OpenVINO IR format using Model Optimizer. The instructions differ depending on the PaddlePaddle model format.
Converting PaddlePaddle Model Inference Format¶
A PaddlePaddle inference model consists of a .pdmodel file (storing the model structure) and a .pdiparams file (storing the model weights). For details on how to export a PaddlePaddle inference model, refer to the Exporting PaddlePaddle Inference Model Chinese guide.
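For reference, a minimal export sketch using paddle.jit.save (the model choice and output path here are illustrative placeholders):

```python
import paddle

# export a dygraph model to the inference format; this produces
# inference_model/resnet50.pdmodel and inference_model/resnet50.pdiparams
model = paddle.vision.models.resnet50()
x = paddle.static.InputSpec([1, 3, 224, 224], 'float32', 'x')
paddle.jit.save(model, "inference_model/resnet50", input_spec=[x])
```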
To convert a PaddlePaddle model, run the mo script and specify the path to the input .pdmodel file:
mo --input_model <INPUT_MODEL>.pdmodel
For example, the following command converts a YOLOv3 PaddlePaddle network to an OpenVINO IR network:
mo --input_model=yolov3.pdmodel --input=image,im_shape,scale_factor --input_shape=[1,3,608,608],[1,2],[1,2] --reverse_input_channels --output=save_infer_model/scale_0.tmp_1,save_infer_model/scale_1.tmp_1
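The same conversion can also be sketched through the MO Python API, which mirrors the CLI parameters (assuming the yolov3.pdmodel file from the command above is available locally):

```python
from openvino.tools.mo import convert_model

# a sketch mirroring the CLI command above via the MO Python API
ov_model = convert_model(
    "yolov3.pdmodel",
    input="image,im_shape,scale_factor",
    input_shape="[1,3,608,608],[1,2],[1,2]",
    reverse_input_channels=True,
    output="save_infer_model/scale_0.tmp_1,save_infer_model/scale_1.tmp_1",
)
```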
Converting PaddlePaddle Model From Memory Using Python API¶
The Model Optimizer (MO) Python API supports passing PaddlePaddle models directly from memory. The following PaddlePaddle model formats are supported:
paddle.hapi.model.Model
paddle.fluid.dygraph.layers.Layer
paddle.fluid.executor.Executor
Converting certain PaddlePaddle models may require setting example_input or example_output. The examples below show how to perform such conversions.
Example of converting a paddle.hapi.model.Model format model:

```python
import paddle
from openvino.tools.mo import convert_model

# create a paddle.hapi.model.Model format model
resnet50 = paddle.vision.models.resnet50()
x = paddle.static.InputSpec([1, 3, 224, 224], 'float32', 'x')
y = paddle.static.InputSpec([1, 1000], 'float32', 'y')
model = paddle.Model(resnet50, x, y)

# convert to OpenVINO IR format
ov_model = convert_model(model)

# optional: serialize OpenVINO IR to *.xml & *.bin
from openvino.runtime import serialize
serialize(ov_model, "ov_model.xml", "ov_model.bin")
```
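Once converted, the in-memory model can be compiled and run with the OpenVINO runtime. A minimal usage sketch (the all-zeros input is a stand-in for real image data):

```python
import numpy as np
from openvino.runtime import Core

# compile the converted model for CPU and run a single inference
core = Core()
compiled_model = core.compile_model(ov_model, "CPU")
request = compiled_model.create_infer_request()
results = request.infer({0: np.zeros((1, 3, 224, 224), dtype=np.float32)})
```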
Example of converting a paddle.fluid.dygraph.layers.Layer format model:

example_input is required while example_output is optional. Both accept the following format: a list with tensors (paddle.Tensor) or InputSpec (paddle.static.input.InputSpec).

```python
import paddle
from openvino.tools.mo import convert_model

# create a paddle.fluid.dygraph.layers.Layer format model
model = paddle.vision.models.resnet50()
x = paddle.rand([1, 3, 224, 224])

# convert to OpenVINO IR format
ov_model = convert_model(model, example_input=[x])
```
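As noted above, example_input also accepts InputSpec. A sketch of that variant for the same model, declaring the input shape and type instead of passing a concrete tensor:

```python
import paddle
from openvino.tools.mo import convert_model

# the InputSpec variant: describe the input instead of a real tensor
model = paddle.vision.models.resnet50()
x_spec = paddle.static.InputSpec([1, 3, 224, 224], 'float32', 'x')
ov_model = convert_model(model, example_input=[x_spec])
```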
Example of converting a paddle.fluid.executor.Executor format model:

example_input and example_output are required. Both accept the following format: a list or tuple with variables (paddle.static.data).

```python
import paddle
from openvino.tools.mo import convert_model

paddle.enable_static()

# create a paddle.fluid.executor.Executor format model
x = paddle.static.data(name="x", shape=[1, 3, 224])
relu = paddle.nn.ReLU()
sigmoid = paddle.nn.Sigmoid()
y = sigmoid(relu(x))

exe = paddle.static.Executor(paddle.CPUPlace())
exe.run(paddle.static.default_startup_program())

# convert to OpenVINO IR format
ov_model = convert_model(exe, example_input=[x], example_output=[y])
```
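As in the first example, the resulting ov_model can then be serialized to IR files:

```python
# optional: serialize OpenVINO IR to *.xml & *.bin
from openvino.runtime import serialize
serialize(ov_model, "ov_model.xml", "ov_model.bin")
```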
Supported PaddlePaddle Layers¶
For the list of supported standard layers, refer to the Supported Framework Layers page.
Officially Supported PaddlePaddle Models¶
The following PaddlePaddle models have been officially validated and confirmed to work (as of OpenVINO 2022.1):
| Model Name | Model Type | Description |
|---|---|---|
| ppocr-det | optical character recognition | |
| ppocr-rec | optical character recognition | |
| ResNet-50 | classification | Models are exported from PaddleClas. Refer to getting_started_en.md. |
| MobileNet v2 | classification | Models are exported from PaddleClas. Refer to getting_started_en.md. |
| MobileNet v3 | classification | Models are exported from PaddleClas. Refer to getting_started_en.md. |
| BiSeNet v2 | semantic segmentation | Models are exported from PaddleSeg. Refer to model_export.md. |
| DeepLab v3 plus | semantic segmentation | Models are exported from PaddleSeg. Refer to model_export.md. |
| Fast-SCNN | semantic segmentation | Models are exported from PaddleSeg. Refer to model_export.md. |
| OCRNET | semantic segmentation | Models are exported from PaddleSeg. Refer to model_export.md. |
| Yolo v3 | detection | Models are exported from PaddleDetection. Refer to EXPORT_MODEL.md. |
| ppyolo | detection | Models are exported from PaddleDetection. Refer to EXPORT_MODEL.md. |
| MobileNetv3-SSD | detection | Models are exported from PaddleDetection. Refer to EXPORT_MODEL.md. |
| U-Net | semantic segmentation | Models are exported from PaddleSeg. Refer to model_export.md. |
| BERT | language representation | |
Frequently Asked Questions (FAQ)¶
When Model Optimizer is unable to run to completion due to typographical errors, incorrectly used options, or other issues, it provides explanatory messages. They describe the potential cause of the problem and give a link to the Model Optimizer FAQ, which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections in the Model Optimizer Developer Guide to help you understand what went wrong.
Additional Resources¶
See the Model Conversion Tutorials page for a set of tutorials providing step-by-step instructions for converting specific PaddlePaddle models.