Converting a TensorFlow Lite Model¶
To convert a TensorFlow Lite model, use the
mo script and specify the path to the input
.tflite model file:
mo --input_model <INPUT_MODEL>.tflite
TensorFlow Lite models are supported via the FrontEnd API. You may skip conversion to IR and read models directly with the OpenVINO Runtime API. Refer to the inference example for more details. Using
convert_model is still necessary in more complex cases, such as specifying new custom inputs/outputs for model pruning, adding pre-processing, or using Python conversion extensions.
The convert_model() method returns an
ov.Model object that you can optimize, compile, or save to a file for subsequent use.
Supported TensorFlow Lite Layers¶
For the list of supported standard layers, refer to the Supported Operations page.