Convert a TensorFlow Model to OpenVINO

This tutorial is also available as a Jupyter notebook that can be cloned directly from GitHub. See the installation guide for instructions to run this tutorial locally on Windows, Linux or macOS. To run without installing anything, click the launch binder button.

This short tutorial shows how to convert a TensorFlow MobileNetV3 image classification model to OpenVINO’s Intermediate Representation (IR) format using the Model Optimizer tool. After creating the IR, we load the model in OpenVINO’s Inference Engine and perform inference with a sample image.

Imports

import time
from pathlib import Path

import cv2
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Markdown
from openvino.inference_engine import IECore

Settings

# The paths of the source and converted models
model_path = Path("model/v3-small_224_1.0_float.pb")
ir_path = model_path.with_suffix(".xml")

Convert the Model to OpenVINO IR Format

Convert TensorFlow Model to OpenVINO IR Format

Call the OpenVINO Model Optimizer tool to convert the TensorFlow model to OpenVINO IR with FP16 precision. The converted model is saved to the model directory, next to the source model. The mean values are embedded in the model with --mean_values, and the input is scaled by the standard deviation with --scale_values. With these options, it is not necessary to normalize input data before propagating it through the network. The original model expects input images in RGB format, and so does the converted model. If you want the converted model to work with BGR images, use the --reverse_input_channels option. See the Model Optimizer Developer Guide for more information about Model Optimizer, including a description of the command-line options. Check the model documentation for information about the model, including input shape, expected color order, and mean values.
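
As a minimal sketch of what these options do (illustration only, not part of the conversion), the preprocessing that Model Optimizer embeds with a mean and scale of 127.5 is equivalent to the following normalization, which maps pixel values from [0, 255] to [-1, 1]:

# Illustration: the normalization that --mean_values and --scale_values
# embed in the converted model.
pixels = np.array([0.0, 127.5, 255.0])
print((pixels - 127.5) / 127.5)  # [-1.  0.  1.]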

We first construct the command for Model Optimizer, and then execute this command in the notebook by prepending it with a !. There may be some errors or warnings in the output. Model conversion was successful if the last lines of the output include [ SUCCESS ] Generated IR version 10 model.

# Construct the command for Model Optimizer
mo_command = f"""mo
                 --input_model "{model_path}"
                 --input_shape "[1,224,224,3]"
                 --mean_values="[127.5,127.5,127.5]"
                 --scale_values="[127.5]"
                 --data_type FP16
                 --output_dir "{model_path.parent}"
                 """
mo_command = " ".join(mo_command.split())
print("Model Optimizer command to convert TensorFlow to OpenVINO:")
display(Markdown(f"`{mo_command}`"))
Model Optimizer command to convert TensorFlow to OpenVINO:

mo --input_model "model/v3-small_224_1.0_float.pb" --input_shape "[1,224,224,3]" --mean_values="[127.5,127.5,127.5]" --scale_values="[127.5]" --data_type FP16 --output_dir "model"

# Run Model Optimizer if the IR model file does not exist
if not ir_path.exists():
    print("Exporting TensorFlow model to IR... This may take a few minutes.")
    ! $mo_command
else:
    print(f"IR model {ir_path} already exists.")
Exporting TensorFlow model to IR... This may take a few minutes.
Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:  /opt/home/k8sworker/cibuilds/ov-notebook/OVNotebookOps-80/.workspace/scm/ov-notebook/notebooks/101-tensorflow-to-openvino/model/v3-small_224_1.0_float.pb
    - Path for generated IR:    /opt/home/k8sworker/cibuilds/ov-notebook/OVNotebookOps-80/.workspace/scm/ov-notebook/notebooks/101-tensorflow-to-openvino/model
    - IR output name:   v3-small_224_1.0_float
    - Log level:    ERROR
    - Batch:    Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:    Not specified, inherited from the model
    - Input shapes:     [1,224,224,3]
    - Mean values:  [127.5,127.5,127.5]
    - Scale values:     [127.5]
    - Scale factor:     Not specified
    - Precision of IR:  FP16
    - Enable fusing:    True
    - Enable grouped convolutions fusing:   True
    - Move mean values to preprocess section:   None
    - Reverse input channels:   False
TensorFlow specific parameters:
    - Input model in text protobuf format:  False
    - Path to model dump for TensorBoard:   None
    - List of shared libraries with TensorFlow custom layers implementation:    None
    - Update the configuration file with input/output node names:   None
    - Use configuration file used to generate the model with Object Detection API:  None
    - Use the config file:  None
    - Inference Engine found in:    /opt/home/k8sworker/cibuilds/ov-notebook/OVNotebookOps-80/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino
Inference Engine version:   2021.4.2-3976-0943ed67223-refs/pull/539/head
Model Optimizer version:    2021.4.2-3976-0943ed67223-refs/pull/539/head
2022-02-18 22:19:29.629858: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
/opt/home/k8sworker/cibuilds/ov-notebook/OVNotebookOps-80/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py:22: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
/opt/home/k8sworker/cibuilds/ov-notebook/OVNotebookOps-80/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/numpy/lib/function_base.py:792: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
  return array(a, order=order, subok=subok, copy=True)
[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /opt/home/k8sworker/cibuilds/ov-notebook/OVNotebookOps-80/.workspace/scm/ov-notebook/notebooks/101-tensorflow-to-openvino/model/v3-small_224_1.0_float.xml
[ SUCCESS ] BIN file: /opt/home/k8sworker/cibuilds/ov-notebook/OVNotebookOps-80/.workspace/scm/ov-notebook/notebooks/101-tensorflow-to-openvino/model/v3-small_224_1.0_float.bin
[ SUCCESS ] Total execution time: 11.07 seconds.
[ SUCCESS ] Memory consumed: 382 MB.
It's been a while, check for a new version of Intel(R) Distribution of OpenVINO(TM) toolkit here https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html?cid=other&source=prod&campid=ww_2022_bu_IOTG_OpenVINO-2022-1&content=upg_all&medium=organic or on the GitHub*

Test Inference on the Converted Model

Load the Model

ie = IECore()
net = ie.read_network(model=ir_path, weights=ir_path.with_suffix(".bin"))
exec_net = ie.load_network(network=net, device_name="CPU")
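
The network is loaded on CPU here. As a quick sketch, the available_devices property of IECore lists all inference devices that OpenVINO detects on the current machine, in case you want to load the network on different hardware:

# List the inference devices that OpenVINO detects on this machine.
# "CPU" is always present; other entries depend on available hardware.
print(ie.available_devices)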

Get Model Information

input_key = list(exec_net.input_info)[0]
output_key = list(exec_net.outputs.keys())[0]
network_input_shape = exec_net.input_info[input_key].tensor_desc.dims
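
As a quick check (using only the values retrieved above), printing the keys and the input shape shows the NCHW layout that the preprocessing below relies on:

# Inspect the network's input and output names, and the input shape.
# After conversion, the shape is in NCHW layout: batch, channels,
# height, width.
print(f"Input key: {input_key}, output key: {output_key}")
print(f"Network input shape: {network_input_shape}")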

Load an Image

Load an image, resize it, and convert it to the input shape of the network.

# The MobileNet network expects images in RGB format
image = cv2.cvtColor(cv2.imread(filename="data/coco.jpg"), code=cv2.COLOR_BGR2RGB)

# Resize image to network input image shape
resized_image = cv2.resize(src=image, dsize=(224, 224))

# Transpose the image from HWC to the NCHW network input shape. Input
# normalization is not needed here: the mean and scale values were
# embedded in the model by Model Optimizer.
input_image = np.expand_dims(np.transpose(resized_image, (2, 0, 1)), 0)
plt.imshow(image);
(Cell output: the sample image from data/coco.jpg is displayed.)

Do Inference

result = exec_net.infer(inputs={input_key: input_image})[output_key]
result_index = np.argmax(result)
# Convert the inference result to a class name.
imagenet_classes = open("utils/imagenet_2012.txt").read().splitlines()

# The model description states that for this model, class 0 is background,
# so we add background at the beginning of imagenet_classes
imagenet_classes = ['background'] + imagenet_classes

imagenet_classes[result_index]
'n02099267 flat-coated retriever'
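
The cell above shows only the highest-scoring class. As a minimal sketch using the same result array, the top three predictions can be listed as well; the printed scores are the network's raw output values:

# Show the three highest-scoring classes. np.argsort orders the class
# indices by score; the last three entries are the top predictions.
scores = result.flatten()
for i in np.argsort(scores)[-3:][::-1]:
    print(f"{imagenet_classes[i]}: {scores[i]:.4f}")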

Timing

Measure the time it takes to do inference on a thousand images. This gives an indication of performance. For more accurate benchmarking, use the OpenVINO benchmark tool (an example command is shown at the end of this section). Note that many optimizations are possible to further improve the performance.

num_images = 1000
start = time.perf_counter()
for _ in range(num_images):
    exec_net.infer(inputs={input_key: input_image})
end = time.perf_counter()
time_ir = end - start
print(
    f"IR model in Inference Engine/CPU: {time_ir/num_images:.4f} "
    f"seconds per image, FPS: {num_images/time_ir:.2f}"
)
IR model in Inference Engine/CPU: 0.0011 seconds per image, FPS: 870.35
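
As mentioned above, the OpenVINO benchmark tool (benchmark_app) gives more accurate measurements. A sketch of running it from the notebook, assuming it is installed together with the OpenVINO development tools (the -t option sets the duration in seconds; available options can vary between OpenVINO versions):

# Benchmark the IR model on CPU for 15 seconds.
! benchmark_app -m $ir_path -d CPU -t 15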