Converting a PaddlePaddle Model
This page provides general instructions on how to convert a model from the PaddlePaddle format to the OpenVINO IR format using the OpenVINO model conversion API. The instructions differ depending on the PaddlePaddle model format.
Note
A PaddlePaddle model serialized in a file can be loaded with the openvino.Core.read_model or openvino.Core.compile_model methods of the OpenVINO Runtime API without preparing OpenVINO IR first. Refer to the inference example for more details. Using openvino.convert_model is still recommended if model load latency matters for the inference application.
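A minimal sketch of loading a serialized PaddlePaddle model directly with the runtime API; the model file name and device name below are placeholders:

import openvino as ov

core = ov.Core()

# read the PaddlePaddle model without producing OpenVINO IR on disk
model = core.read_model("your_model_file.pdmodel")

# or compile it in one step for a chosen device
compiled_model = core.compile_model("your_model_file.pdmodel", "CPU")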
Converting PaddlePaddle Model Files
A PaddlePaddle inference model consists of a .pdmodel file (storing the model structure) and a .pdiparams file (storing the model weights). For details on how to export a PaddlePaddle inference model, refer to the Exporting PaddlePaddle Inference Model Chinese guide.
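For reference, such a model pair can typically be produced with paddle.jit.save; this is only a minimal sketch, and the model, output path, and input specification below are illustrative placeholders rather than part of the linked guide:

import paddle

# a dygraph model to export, used here only as an example
layer = paddle.vision.models.resnet50()

# writes inference/model.pdmodel and inference/model.pdiparams
paddle.jit.save(
    layer,
    "inference/model",
    input_spec=[paddle.static.InputSpec([1, 3, 224, 224], 'float32', 'x')],
)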
To convert a PaddlePaddle model, use ovc or openvino.convert_model and specify the path to the input .pdmodel file:
import openvino as ov
ov.convert_model('your_model_file.pdmodel')
ovc your_model_file.pdmodel
For example, the following commands convert a YOLOv3 PaddlePaddle model to an OpenVINO IR model:
import openvino as ov
ov.convert_model('yolov3.pdmodel')
ovc yolov3.pdmodel
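The object returned by openvino.convert_model can be serialized to OpenVINO IR files with openvino.save_model. A minimal sketch reusing the YOLOv3 example; the output file name is arbitrary:

import openvino as ov

# convert the model and serialize it as OpenVINO IR (.xml plus .bin)
ov_model = ov.convert_model('yolov3.pdmodel')
ov.save_model(ov_model, 'yolov3.xml')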
Converting PaddlePaddle Python Model
The model conversion API supports passing PaddlePaddle models directly from Python, without saving them to files in the user code.
The following PaddlePaddle model object types are supported:
- paddle.hapi.model.Model
- paddle.fluid.dygraph.layers.Layer
- paddle.fluid.executor.Executor
Some PaddlePaddle models may require setting example_input or output for conversion as shown in the examples below:
Example of converting a paddle.hapi.model.Model format model:

import paddle
import openvino as ov

# create a paddle.hapi.model.Model format model
resnet50 = paddle.vision.models.resnet50()
x = paddle.static.InputSpec([1,3,224,224], 'float32', 'x')
y = paddle.static.InputSpec([1,1000], 'float32', 'y')
model = paddle.Model(resnet50, x, y)

# convert to OpenVINO IR format
ov_model = ov.convert_model(model)
ov.save_model(ov_model, "resnet50.xml")
Example of converting a paddle.fluid.dygraph.layers.Layer format model:

example_input is required while output is optional. example_input accepts the following formats:

- list with tensor (paddle.Tensor) or InputSpec (paddle.static.input.InputSpec)

import paddle
import openvino as ov

# create a paddle.fluid.dygraph.layers.Layer format model
model = paddle.vision.models.resnet50()
x = paddle.rand([1,3,224,224])

# convert to OpenVINO IR format
ov_model = ov.convert_model(model, example_input=[x])
Example of converting a paddle.fluid.executor.Executor format model:

example_input and output are required and accept the following formats:

- list or tuple with variable (paddle.static.data)

import paddle
import openvino as ov

paddle.enable_static()

# create a paddle.fluid.executor.Executor format model
x = paddle.static.data(name="x", shape=[1,3,224])
y = paddle.static.data(name="y", shape=[1,3,224])
relu = paddle.nn.ReLU()
sigmoid = paddle.nn.Sigmoid()
y = sigmoid(relu(x))

exe = paddle.static.Executor(paddle.CPUPlace())
exe.run(paddle.static.default_startup_program())

# convert to OpenVINO IR format
ov_model = ov.convert_model(exe, example_input=[x], output=[y])
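Whichever of these forms the model comes from, the converted ov_model can then be compiled and run with the OpenVINO runtime. A minimal sketch, assuming the ov_model and the [1,3,224] input shape from the Executor example above and a CPU device:

import numpy as np
import openvino as ov

core = ov.Core()
compiled_model = core.compile_model(ov_model, "CPU")

# run inference on random data matching the example input shape
input_data = np.random.rand(1, 3, 224).astype(np.float32)
results = compiled_model([input_data])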
Supported PaddlePaddle Layers
For the list of supported standard layers, refer to the Supported Operations page.
Officially Supported PaddlePaddle Models
The following PaddlePaddle models have been officially validated and confirmed to work (as of OpenVINO 2022.1):
| Model Name | Model Type | Description |
|---|---|---|
| ppocr-det | optical character recognition | |
| ppocr-rec | optical character recognition | |
| ResNet-50 | classification | Models are exported from PaddleClas. Refer to getting_started_en.md. |
| MobileNet v2 | classification | Models are exported from PaddleClas. Refer to getting_started_en.md. |
| MobileNet v3 | classification | Models are exported from PaddleClas. Refer to getting_started_en.md. |
| BiSeNet v2 | semantic segmentation | Models are exported from PaddleSeg. Refer to model_export.md. |
| DeepLab v3 plus | semantic segmentation | Models are exported from PaddleSeg. Refer to model_export.md. |
| Fast-SCNN | semantic segmentation | Models are exported from PaddleSeg. Refer to model_export.md. |
| OCRNET | semantic segmentation | Models are exported from PaddleSeg. Refer to model_export.md. |
| Yolo v3 | detection | Models are exported from PaddleDetection. Refer to EXPORT_MODEL.md. |
| ppyolo | detection | Models are exported from PaddleDetection. Refer to EXPORT_MODEL.md. |
| MobileNetv3-SSD | detection | Models are exported from PaddleDetection. Refer to EXPORT_MODEL.md. |
| U-Net | semantic segmentation | Models are exported from PaddleSeg. Refer to model_export.md. |
| BERT | language representation | |
Additional Resources
Check out more examples of model conversion in interactive Python tutorials.