Supported Model Formats¶
In the Python API, these options are provided as three separate methods: read_model(), compile_model(), and convert_model().
The convert_model() method enables you to perform additional adjustments to the model, such as setting shapes, changing model input types or layouts, cutting parts of the model, freezing inputs, etc. For a detailed description of the conversion process, see the model conversion guide. Note that for PyTorch models, the Python API is the only conversion option. For TensorFlow, additional considerations may apply; see TensorFlow Frontend Capabilities and Limitations.
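For example, here is a minimal sketch of one such adjustment, overriding the input shape at conversion time. The file name is hypothetical, and the single-shape form of the input argument assumes a model with exactly one input:

import openvino as ov

# Hypothetical ONNX file; the input argument overrides the shape stored in the model
ov_model = ov.convert_model("model.onnx", input=[1, 3, 224, 224])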
Here are code examples of how to use these methods with different model formats:
PyTorch¶
The convert_model() method. This is the only method applicable to PyTorch models.
List of supported formats:
Python objects:
torch.nn.Module
torch.jit.ScriptModule
torch.jit.ScriptFunction
import torchvision
from openvino import convert_model, Core

core = Core()
model = torchvision.models.resnet50(weights='DEFAULT')
ov_model = convert_model(model)
compiled_model = core.compile_model(ov_model, "AUTO")
For more details on conversion, refer to the guide and an example tutorial on this topic.
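Some PyTorch models can only be converted when the frontend traces them with a concrete input. A minimal sketch of passing one via example_input (the input shape here is an assumption matching ResNet-50):

import torch
import torchvision
from openvino import convert_model

model = torchvision.models.resnet50(weights='DEFAULT')
# example_input gives the PyTorch frontend a concrete tensor to trace with
ov_model = convert_model(model, example_input=torch.rand(1, 3, 224, 224))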
TensorFlow¶
The convert_model() method. When you use the convert_model() method, you have more control and can specify additional adjustments for the resulting ov.Model. The read_model() and compile_model() methods are easier to use, but they do not have such capabilities. With an ov.Model you can choose to optimize it, compile and run inference on it, or serialize it into a file for subsequent use (see the sketch after this block).
List of supported formats:
Files:
SavedModel - <SAVED_MODEL_DIRECTORY> or <INPUT_MODEL>.pb
Checkpoint - <INFERENCE_GRAPH>.pb or <INFERENCE_GRAPH>.pbtxt
MetaGraph - <INPUT_META_GRAPH>.meta
Python objects:
tf.keras.Model
tf.keras.layers.Layer
tf.Module
tf.compat.v1.Graph
tf.compat.v1.GraphDef
tf.function
tf.compat.v1.Session
tf.train.Checkpoint
ov_model = convert_model("saved_model.pb") compiled_model = core.compile_model(ov_model, "AUTO")
For more details on conversion, refer to the guide and an example tutorial on this topic.
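As a sketch of the serialization path mentioned above (file names are hypothetical), an ov.Model can be saved to OpenVINO IR once and then compiled from the IR in later runs:

from openvino import convert_model, save_model, Core

ov_model = convert_model("saved_model.pb")
# save_model writes model.xml plus a model.bin weights file next to it
save_model(ov_model, "model.xml")
compiled_model = Core().compile_model("model.xml", "AUTO")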
The read_model() and compile_model() methods (Python).
List of supported formats:
Files:
SavedModel - <SAVED_MODEL_DIRECTORY> or <INPUT_MODEL>.pb
Checkpoint - <INFERENCE_GRAPH>.pb or <INFERENCE_GRAPH>.pbtxt
MetaGraph - <INPUT_META_GRAPH>.meta
from openvino import Core

core = Core()
ov_model = core.read_model("saved_model.pb")
compiled_model = core.compile_model(ov_model, "AUTO")
For a guide on how to run inference, see how to Integrate OpenVINO™ with Your Application. For TensorFlow format, see TensorFlow Frontend Capabilities and Limitations.
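For context, a minimal inference sketch, assuming a single-input model and input data prepared as a NumPy array of matching shape and type:

import numpy as np
from openvino import Core

core = Core()
compiled_model = core.compile_model("saved_model.pb", "AUTO")
input_data = np.zeros(list(compiled_model.input(0).shape), dtype=np.float32)
# A compiled model is callable; it returns a dict-like mapping of outputs
results = compiled_model(input_data)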
The compile_model() method (C++).
List of supported formats:
Files:
SavedModel - <SAVED_MODEL_DIRECTORY> or <INPUT_MODEL>.pb
Checkpoint - <INFERENCE_GRAPH>.pb or <INFERENCE_GRAPH>.pbtxt
MetaGraph - <INPUT_META_GRAPH>.meta
ov::CompiledModel compiled_model = core.compile_model("saved_model.pb", "AUTO");
For a guide on how to run inference, see how to Integrate OpenVINO™ with Your Application.
The compile_model() method (C).
List of supported formats:
Files:
SavedModel - <SAVED_MODEL_DIRECTORY> or <INPUT_MODEL>.pb
Checkpoint - <INFERENCE_GRAPH>.pb or <INFERENCE_GRAPH>.pbtxt
MetaGraph - <INPUT_META_GRAPH>.meta
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "saved_model.pb", "AUTO", 0, &compiled_model);
For a guide on how to run inference, see how to Integrate OpenVINO™ with Your Application.
The mo command-line tool (CLI). You can use mo to convert a model to IR. The obtained IR can then be read by read_model() and inferred.
mo --input_model <INPUT_MODEL>.pb
For details on the conversion, refer to the article.
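mo writes an .xml topology file and a matching .bin weights file; a minimal sketch of loading that IR afterwards (the output name model.xml is an assumption):

from openvino import Core

core = Core()
# read_model picks up the matching .bin weights file automatically
ov_model = core.read_model("model.xml")
compiled_model = core.compile_model(ov_model, "AUTO")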
TensorFlow Lite¶
The convert_model() method. When you use the convert_model() method, you have more control and can specify additional adjustments for the resulting ov.Model. The read_model() and compile_model() methods are easier to use, but they do not have such capabilities. With an ov.Model you can choose to optimize it, compile and run inference on it, or serialize it into a file for subsequent use.
List of supported formats:
Files:
<INPUT_MODEL>.tflite
ov_model = convert_model("<INPUT_MODEL>.tflite") compiled_model = core.compile_model(ov_model, "AUTO")
For more details on conversion, refer to the guide and an example tutorial on this topic.
The read_model() method (Python).
List of supported formats:
Files:
<INPUT_MODEL>.tflite
from openvino import Core

core = Core()
ov_model = core.read_model("<INPUT_MODEL>.tflite")
compiled_model = core.compile_model(ov_model, "AUTO")
The compile_model() method (Python).
List of supported formats:
Files:
<INPUT_MODEL>.tflite
compiled_model = core.compile_model("<INPUT_MODEL>.tflite", "AUTO")
For a guide on how to run inference, see how to Integrate OpenVINO™ with Your Application.
The compile_model() method (C++).
List of supported formats:
Files:
<INPUT_MODEL>.tflite
ov::CompiledModel compiled_model = core.compile_model("<INPUT_MODEL>.tflite", "AUTO");
For a guide on how to run inference, see how to Integrate OpenVINO™ with Your Application.
The compile_model() method (C).
List of supported formats:
Files:
<INPUT_MODEL>.tflite
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "<INPUT_MODEL>.tflite", "AUTO", 0, &compiled_model);
For a guide on how to run inference, see how to Integrate OpenVINO™ with Your Application.
The mo command-line tool (CLI). You can use mo to convert a model to IR. The obtained IR can then be read by read_model() and inferred.
List of supported formats:
Files:
<INPUT_MODEL>.tflite
mo --input_model <INPUT_MODEL>.tflite
For details on the conversion, refer to the article.
ONNX¶
The convert_model() method. When you use the convert_model() method, you have more control and can specify additional adjustments for the resulting ov.Model. The read_model() and compile_model() methods are easier to use, but they do not have such capabilities. With an ov.Model you can choose to optimize it, compile and run inference on it, or serialize it into a file for subsequent use.
List of supported formats:
Files:
<INPUT_MODEL>.onnx
ov_model = convert_model("<INPUT_MODEL>.onnx") compiled_model = core.compile_model(ov_model, "AUTO")
For more details on conversion, refer to the guide and an example tutorial on this topic.
The read_model() method (Python).
List of supported formats:
Files:
<INPUT_MODEL>.onnx
from openvino import Core

core = Core()
ov_model = core.read_model("<INPUT_MODEL>.onnx")
compiled_model = core.compile_model(ov_model, "AUTO")
The compile_model() method (Python).
List of supported formats:
Files:
<INPUT_MODEL>.onnx
compiled_model = core.compile_model("<INPUT_MODEL>.onnx", "AUTO")
For a guide on how to run inference, see how to Integrate OpenVINO™ with Your Application.
The compile_model() method (C++).
List of supported formats:
Files:
<INPUT_MODEL>.onnx
ov::CompiledModel compiled_model = core.compile_model("<INPUT_MODEL>.onnx", "AUTO");
For a guide on how to run inference, see how to Integrate OpenVINO™ with Your Application.
The compile_model() method (C).
List of supported formats:
Files:
<INPUT_MODEL>.onnx
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "<INPUT_MODEL>.onnx", "AUTO", 0, &compiled_model);
For a guide on how to run inference, see how to Integrate OpenVINO™ with Your Application.
The mo command-line tool (CLI). You can use mo to convert a model to IR. The obtained IR can then be read by read_model() and inferred.
List of supported formats:
Files:
<INPUT_MODEL>.onnx
mo --input_model <INPUT_MODEL>.onnx
For details on the conversion, refer to the article.
PaddlePaddle¶
The convert_model() method. When you use the convert_model() method, you have more control and can specify additional adjustments for the resulting ov.Model. The read_model() and compile_model() methods are easier to use, but they do not have such capabilities. With an ov.Model you can choose to optimize it, compile and run inference on it, or serialize it into a file for subsequent use.
List of supported formats:
Files:
<INPUT_MODEL>.pdmodel
Python objects:
paddle.hapi.model.Model
paddle.fluid.dygraph.layers.Layer
paddle.fluid.executor.Executor
ov_model = convert_model("<INPUT_MODEL>.pdmodel") compiled_model = core.compile_model(ov_model, "AUTO")
For more details on conversion, refer to the guide and an example tutorial on this topic.
The read_model() method (Python).
List of supported formats:
Files:
<INPUT_MODEL>.pdmodel
from openvino import Core

core = Core()
ov_model = core.read_model("<INPUT_MODEL>.pdmodel")
compiled_model = core.compile_model(ov_model, "AUTO")
The compile_model() method (Python).
List of supported formats:
Files:
<INPUT_MODEL>.pdmodel
compiled_model = core.compile_model("<INPUT_MODEL>.pdmodel", "AUTO")
For a guide on how to run inference, see how to Integrate OpenVINO™ with Your Application.
The compile_model() method (C++).
List of supported formats:
Files:
<INPUT_MODEL>.pdmodel
ov::CompiledModel compiled_model = core.compile_model("<INPUT_MODEL>.pdmodel", "AUTO");
For a guide on how to run inference, see how to Integrate OpenVINO™ with Your Application.
The compile_model() method (C).
List of supported formats:
Files:
<INPUT_MODEL>.pdmodel
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "<INPUT_MODEL>.pdmodel", "AUTO", 0, &compiled_model);
For a guide on how to run inference, see how to Integrate OpenVINO™ with Your Application.
The mo command-line tool (CLI). You can use mo to convert a model to IR. The obtained IR can then be read by read_model() and inferred.
List of supported formats:
Files:
<INPUT_MODEL>.pdmodel
mo --input_model <INPUT_MODEL>.pdmodel
For details on the conversion, refer to the article.
To choose the best workflow for your application, read the Model Preparation section. Refer to the list of all supported conversion options in Conversion Parameters.