Converting a TensorFlow Lite Model

To convert a TensorFlow Lite model, run model conversion with the path to the .tflite model file:

import openvino as ov
ov.convert_model('your_model_file.tflite')

Alternatively, use the ovc command-line tool:

ovc your_model_file.tflite


A TensorFlow Lite model file can be loaded directly by the openvino.Core.read_model or openvino.Core.compile_model methods of the OpenVINO Runtime API, without preparing an OpenVINO IR first. Refer to the inference example for more details. Using openvino.convert_model is still recommended if model load latency matters for the inference application.
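The direct-loading workflow described above can be sketched as follows. This is a minimal illustration; the model file name and device string are placeholders, and a real .tflite file must exist at that path:

```python
# Load a TensorFlow Lite model directly with the OpenVINO Runtime API,
# skipping the explicit conversion step to OpenVINO IR.
import openvino as ov

core = ov.Core()

# Option 1: read the model first, then compile it for a target device.
model = core.read_model("your_model_file.tflite")   # placeholder path
compiled_model = core.compile_model(model, "CPU")

# Option 2: compile directly from the file in a single step.
# compiled_model = core.compile_model("your_model_file.tflite", "CPU")
```

Either option produces a compiled model ready for inference; the one-step compile_model call is convenient, while read_model lets you inspect or reshape the model before compiling.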

Supported TensorFlow Lite Layers

For the list of supported standard layers, refer to the Supported Operations page.

Supported TensorFlow Lite Models

More than eighty percent of public TensorFlow Lite models from the open sources TensorFlow Hub and MediaPipe are supported. Unsupported models usually contain custom TensorFlow Lite operations.