Converting a PaddlePaddle Model
To convert a PaddlePaddle model, use the mo script and specify the path to the input .pdmodel model file:

mo --input_model <INPUT_MODEL>.pdmodel
For example, this command converts a Yolo v3 PaddlePaddle network to the OpenVINO IR format:
mo --input_model=yolov3.pdmodel --input=image,im_shape,scale_factor --input_shape=[1,3,608,608],[1,2],[1,2] --reverse_input_channels --output=save_infer_model/scale_0.tmp_1,save_infer_model/scale_1.tmp_1
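After conversion, the resulting IR can be loaded with the OpenVINO Runtime Python API. The snippet below is a minimal sketch, assuming the openvino Python package (2022.1 or later) is installed and that the yolov3.xml and yolov3.bin files produced by the command above are in the working directory; OpenVINO Runtime can also read the .pdmodel file directly, without prior conversion.

from openvino.runtime import Core

core = Core()

# Load the IR produced by Model Optimizer (yolov3.xml + yolov3.bin) ...
model = core.read_model("yolov3.xml")
# ... or, alternatively, read the original PaddlePaddle model directly:
# model = core.read_model("yolov3.pdmodel")

# Compile for a target device and inspect the input names.
compiled_model = core.compile_model(model, "CPU")
print([inp.get_any_name() for inp in compiled_model.inputs])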
Supported PaddlePaddle Layers
For the list of supported standard layers, refer to the Supported Framework Layers page.
Officially Supported PaddlePaddle Models
The following PaddlePaddle models have been officially validated and confirmed to work (as of OpenVINO 2022.1):
Model Name | Model Type | Description
---|---|---
ppocr-det | optical character recognition |
ppocr-rec | optical character recognition |
ResNet-50 | classification | Models are exported from PaddleClas. Refer to getting_started_en.md.
MobileNet v2 | classification | Models are exported from PaddleClas. Refer to getting_started_en.md.
MobileNet v3 | classification | Models are exported from PaddleClas. Refer to getting_started_en.md.
BiSeNet v2 | semantic segmentation | Models are exported from PaddleSeg. Refer to model_export.md.
DeepLab v3 plus | semantic segmentation | Models are exported from PaddleSeg. Refer to model_export.md.
Fast-SCNN | semantic segmentation | Models are exported from PaddleSeg. Refer to model_export.md.
OCRNET | semantic segmentation | Models are exported from PaddleSeg. Refer to model_export.md.
Yolo v3 | detection | Models are exported from PaddleDetection. Refer to EXPORT_MODEL.md.
ppyolo | detection | Models are exported from PaddleDetection. Refer to EXPORT_MODEL.md.
MobileNetv3-SSD | detection | Models are exported from PaddleDetection. Refer to EXPORT_MODEL.md.
U-Net | semantic segmentation | Models are exported from PaddleSeg. Refer to model_export.md.
BERT | language representation |
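The suites referenced in the table (PaddleClas, PaddleSeg, PaddleDetection) provide their own export scripts, which are the recommended way to produce the .pdmodel file. Purely as an illustration, the sketch below shows the generic PaddlePaddle export step such scripts perform; it assumes the paddlepaddle package is installed and uses a paddle.vision ResNet-50 as a stand-in for a trained model.

import paddle
from paddle.static import InputSpec

# Stand-in for a trained network; in practice, use the export script of the
# corresponding suite (PaddleClas, PaddleSeg, or PaddleDetection).
model = paddle.vision.models.resnet50(pretrained=False)
model.eval()

# paddle.jit.save traces the model and writes inference.pdmodel / inference.pdiparams;
# pass the resulting .pdmodel file to mo via --input_model.
paddle.jit.save(
    model,
    path="inference/inference",
    input_spec=[InputSpec(shape=[1, 3, 224, 224], dtype="float32", name="image")],
)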
Frequently Asked Questions (FAQ)
When Model Optimizer is unable to run to completion due to typographical errors, incorrectly used options, or other issues, it provides explanatory messages. They describe the potential cause of the problem and give a link to the Model Optimizer FAQ, which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections in the Model Optimizer Developer Guide to help you understand what went wrong.
Additional Resources
See the Model Conversion Tutorials page for a set of tutorials providing step-by-step instructions for converting specific PaddlePaddle models.