Converting a TensorFlow Lite Model

To convert a TensorFlow Lite model, use the mo script and specify the path to the input .tflite model file:

mo --input_model <INPUT_MODEL>.tflite
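
For Python workflows, the same conversion can be done programmatically. The snippet below is a minimal sketch, assuming an OpenVINO release where convert_model is exposed from openvino.tools.mo; the file name model.tflite is a placeholder.

    from openvino.tools.mo import convert_model

    # Convert a TensorFlow Lite model into an in-memory OpenVINO model (ov.Model).
    # "model.tflite" is a placeholder path used for illustration.
    ov_model = convert_model("model.tflite")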

TensorFlow Lite models are supported via the FrontEnd API. You may skip conversion to IR and read such models directly with the OpenVINO Runtime API. Refer to the inference example for more details. Using convert_model is still necessary in more complex cases, such as cutting a model to new custom inputs/outputs (model pruning), adding pre-processing, or using Python conversion extensions.
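
As a minimal sketch of reading a .tflite model without prior conversion to IR, the snippet below uses the OpenVINO Runtime Python API; the file name model.tflite and the CPU device are placeholders chosen for illustration.

    from openvino.runtime import Core

    core = Core()
    # Read the TensorFlow Lite model directly, without converting it to IR first.
    model = core.read_model("model.tflite")
    # Compile the model for a target device, e.g. CPU, and run inference on the result.
    compiled_model = core.compile_model(model, "CPU")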

Important

The convert_model() method returns an ov.Model object that you can optimize, compile, or serialize to a file for later use.
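
As an illustration of that note, the sketch below compiles the converted model and serializes it to IR files. It assumes the ov_model obtained from convert_model above and the serialize helper from openvino.runtime; all paths and the device name are placeholders.

    from openvino.runtime import Core, serialize

    # Compile the in-memory ov.Model for inference on a target device.
    compiled_model = Core().compile_model(ov_model, "CPU")
    # Serialize the model to IR (.xml and .bin) for subsequent use.
    serialize(ov_model, "model.xml")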

Supported TensorFlow Lite Layers

For the list of supported standard layers, refer to the Supported Operations page.

Supported TensorFlow Lite Models

More than eighty percent of public TensorFlow Lite models from open sources such as TensorFlow Hub and MediaPipe are supported. Unsupported models usually contain custom TensorFlow Lite operations.