Convert a TensorFlow Lite Model to OpenVINO™#

This Jupyter notebook can be launched online to open an interactive environment in a browser window. You can also install it locally. Choose one of the following options:

Google Colab · GitHub

TensorFlow Lite, often referred to as TFLite, is an open source library developed for deploying machine learning models to edge devices.

This short tutorial shows how to convert a TensorFlow Lite EfficientNet-Lite-B0 image classification model to OpenVINO Intermediate Representation (OpenVINO IR) format, using Model Converter. After creating the OpenVINO IR, load the model into OpenVINO Runtime and run inference on a sample image.

Table of contents:

- Preparation
  - Install requirements
  - Imports
  - Download TFLite model
- Convert a Model to OpenVINO IR Format
- Load model using OpenVINO TensorFlow Lite Frontend
- Run OpenVINO model inference
  - Select inference device
- Estimate Model Performance

Installation Instructions#

This is a self-contained example that relies solely on its own code.

We recommend running the notebook in a virtual environment. You only need a Jupyter server to start. For details, please refer to the Installation Guide.

Preparation#

Install requirements#

%pip install -q "openvino>=2023.1.0"
%pip install -q opencv-python requests tqdm kagglehub Pillow

# Fetch `notebook_utils` module
import requests

r = requests.get(
    url="https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/latest/utils/notebook_utils.py",
)

open("notebook_utils.py", "w").write(r.text)

# Read more about telemetry collection at https://github.com/openvinotoolkit/openvino_notebooks?tab=readme-ov-file#-telemetry
from notebook_utils import collect_telemetry

collect_telemetry("tflite-to-openvino.ipynb")

Imports#

from pathlib import Path
import numpy as np
from PIL import Image
import openvino as ov

from notebook_utils import download_file, load_image, device_widget

Download TFLite model#

import kagglehub

model_dir = kagglehub.model_download("tensorflow/efficientnet/tfLite/lite0-fp32")
tflite_model_path = Path(model_dir) / "2.tflite"

ov_model_path = tflite_model_path.with_suffix(".xml")

Convert a Model to OpenVINO IR Format#

To convert the TFLite model to OpenVINO IR, the model conversion Python API can be used. The ov.convert_model function accepts the path to the TFLite model and returns an OpenVINO Model instance that represents this model. The obtained model is ready to use: it can be loaded on a device with ov.compile_model, or saved to disk with ov.save_model to reduce loading time for subsequent runs. By default, model weights are compressed to FP16 during serialization by ov.save_model. For more information about model conversion, see this page. For TensorFlow Lite model support, refer to this tutorial.

ov_model = ov.convert_model(tflite_model_path)
ov.save_model(ov_model, ov_model_path)
print(f"Model {tflite_model_path} successfully converted and saved to {ov_model_path}")
Model model/efficientnet_lite0_fp32_2.tflite successfully converted and saved to model/efficientnet_lite0_fp32_2.xml
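
As noted above, ov.save_model compresses weights to FP16 by default. If the original FP32 precision should be preserved, or if saving to disk is not needed at all, the serialization step can be adjusted. The cell below is a minimal sketch of both options, reusing ov_model and tflite_model_path from above; the *_fp32.xml path is just an illustrative name, not part of the original notebook.

# Optional sketch: keep FP32 weights when saving the IR (FP16 compression is on by default)
ov_fp32_model_path = tflite_model_path.with_name(tflite_model_path.stem + "_fp32.xml")  # illustrative path
ov.save_model(ov_model, ov_fp32_model_path, compress_to_fp16=False)

# The converted model can also be compiled directly, without saving it to disk first
compiled_from_memory = ov.compile_model(ov_model)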

Load model using OpenVINO TensorFlow Lite Frontend#

TensorFlow Lite models are supported via the FrontEnd API. You may skip conversion to IR and read models directly with the OpenVINO Runtime API. For more examples of reading supported formats via the Frontend API, please see this tutorial.

core = ov.Core()

ov_model = core.read_model(tflite_model_path)
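
Reading the model explicitly is not strictly required either: core.compile_model also accepts a model path, so the .tflite file can be compiled in a single step. The cell below is a small sketch of this shortcut, using the core and tflite_model_path objects defined above.

# Optional sketch: compile the TFLite file directly, skipping the explicit read_model call
compiled_tflite_model = core.compile_model(tflite_model_path, "AUTO")
print(compiled_tflite_model.inputs)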

Run OpenVINO model inference#

We can find information about model input preprocessing in its description on TensorFlow Hub.

image = load_image("coco_bricks.png", "https://storage.openvinotoolkit.org/repositories/openvino_notebooks/data/data/image/coco_bricks.png")
# load_image reads the image in BGR format; the [:, :, ::-1] slice converts it to RGB
image = Image.fromarray(image[:, :, ::-1])
resized_image = image.resize((224, 224))
# Scale pixel values to roughly [-1, 1] and add a batch dimension
input_tensor = np.expand_dims((np.array(resized_image).astype(np.float32) - 127) / 128, 0)
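
To double-check that this preprocessing matches what the network expects, the model input ports can be inspected. The cell below is a minimal sketch that uses the ov_model read earlier.

# Optional sketch: print the model input names and shapes to confirm the expected input layout
for model_input in ov_model.inputs:
    print(model_input.any_name, model_input.shape)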

Select inference device#

Select a device from the dropdown list to run inference with OpenVINO.

device = device_widget()

device
Dropdown(description='Device:', index=1, options=('CPU', 'AUTO'), value='AUTO')
compiled_model = core.compile_model(ov_model, device.value)
predicted_scores = compiled_model(input_tensor)[0]
imagenet_classes_file_path = download_file("https://storage.openvinotoolkit.org/repositories/openvino_notebooks/data/data/datasets/imagenet/imagenet_2012.txt")
imagenet_classes = open(imagenet_classes_file_path).read().splitlines()

top1_predicted_cls_id = np.argmax(predicted_scores)
top1_predicted_score = predicted_scores[0][top1_predicted_cls_id]
predicted_label = imagenet_classes[top1_predicted_cls_id]

display(image.resize((640, 512)))
print(f"Predicted label: {predicted_label} with probability {top1_predicted_score :2f}")
'imagenet_2012.txt' already exists.
(Output image: the sample photo displayed at 640×512.)
Predicted label: n02109047 Great Dane with probability 0.715318
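
Only the top-1 class is shown above; a top-5 view can be more informative for ambiguous images. The cell below is a small sketch that reuses predicted_scores and imagenet_classes from the previous cells.

# Optional sketch: show the five highest-scoring ImageNet classes
top5_ids = np.argsort(predicted_scores[0])[-5:][::-1]
for cls_id in top5_ids:
    print(f"{imagenet_classes[cls_id]}: {predicted_scores[0][cls_id]:.4f}")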

Estimate Model Performance#

Benchmark Tool is used to measure the inference performance of the model on CPU and GPU.

NOTE: For more accurate performance, it is recommended to run benchmark_app in a terminal/command prompt after closing other applications. Run benchmark_app -m model.xml -d CPU to benchmark async inference on CPU for one minute. Change CPU to GPU to benchmark on GPU. Run benchmark_app --help to see an overview of all command-line options.

print(f"Benchmark model inference on {device.value}")
!benchmark_app -m $ov_model_path -d $device.value -t 15
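
benchmark_app accepts many more options than shown here; for example, a performance hint can switch the run between latency- and throughput-oriented configurations. The command below is a sketch of such a run with the same model and device.

# Optional sketch: benchmark the same model with an explicit latency hint
!benchmark_app -m $ov_model_path -d $device.value -hint latency -t 15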