This Jupyter notebook can be launched on-line, opening an interactive environment in a browser window, or run locally after installation.
This notebook shows how to convert a MobileNetV3 model from PaddleHub, pre-trained on the ImageNet dataset, to OpenVINO IR. It also shows how to perform classification inference on a sample image using OpenVINO Runtime and compares the results of the PaddlePaddle model with the IR model.
Note: you may need to restart the kernel to use updated packages.

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
paddleclas 2.6.0 requires easydict, which is not installed.
paddleclas 2.6.0 requires gast==0.3.3, but you have gast 0.4.0 which is incompatible.
paddleclas 2.6.0 requires opencv-python<=4.6.0.66, but you have opencv-python 4.10.0.84 which is incompatible.
Set IMAGE_FILENAME to the filename of an image to use. Set
MODEL_NAME to the PaddlePaddle model to download from PaddleHub.
MODEL_NAME will also be the base name for the IR model. The notebook
is tested with the
MobileNetV3_large_x1_0
model. Other models may use different preprocessing methods and therefore require some modification to get the same results on the original and converted models.
First of all, we need to download and unpack model files. The first time
you run this notebook, the PaddlePaddle model is downloaded from
PaddleHub. This may take a while.
import tarfile
from pathlib import Path

# download_file is the helper used throughout the OpenVINO notebooks
from notebook_utils import download_file

# Download the image from the openvino_notebooks storage
img = download_file(
    "https://storage.openvinotoolkit.org/repositories/openvino_notebooks/data/data/image/coco_close.png",
    directory="data",
)
IMAGE_FILENAME = img.as_posix()

MODEL_NAME = "MobileNetV3_large_x1_0"
MODEL_DIR = Path("model")
if not MODEL_DIR.exists():
    MODEL_DIR.mkdir()
MODEL_URL = "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/{}_infer.tar".format(MODEL_NAME)
download_file(MODEL_URL, directory=MODEL_DIR)

file = tarfile.open(MODEL_DIR / "{}_infer.tar".format(MODEL_NAME))
# extractall() returns None on success
res = file.extractall(MODEL_DIR)
if not res:
    print(f'Model Extracted to "./{MODEL_DIR}".')
else:
    print("Error Extracting the model. Please check the network.")
classifier.predict() takes an image file name, reads the image, preprocesses the input, then returns the class labels and scores of the image. Preprocessing the image is done behind the scenes. The classification model returns an array with floating point values for each of the 1000 ImageNet classes. The higher the value, the more confident the network is that the class at that index of the output array is the correct class for the image.
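As a usage sketch (assuming classifier is a paddleclas.PaddleClas instance created with model_name=MODEL_NAME, as used later in this notebook), the top predictions can be read like this:

# Hedged sketch: `classifier` is assumed to be a paddleclas.PaddleClas instance
result = next(classifier.predict(IMAGE_FILENAME))  # predict() returns a generator
print(result[0]["label_names"])  # top-k class names
print(result[0]["scores"])       # corresponding softmax scores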
To see PaddlePaddle’s implementation for the classification function and
for loading and preprocessing data, uncomment the next two cells.
# classifier??
# classifier.get_config()
The classifier.get_config() call shows the preprocessing configuration for the model. It should show that images are normalized, resized and cropped, and that the BGR image is converted to RGB before propagating it through the network. In the next cell, we get the classifier.predictor.preprocess_ops property, which returns the list of preprocessing operations, so that inference on the OpenVINO IR model can use the same preprocessing.
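A minimal sketch of a process_image() helper built on that property, assuming each entry in preprocess_ops is a callable that takes and returns an image:

# Sketch, assuming classifier.predictor.preprocess_ops is a list of callables
preprocess_ops = classifier.predictor.preprocess_ops

def process_image(image):
    # Apply the same preprocessing chain that PaddleClas uses internally
    for op in preprocess_ops:
        image = op(image)
    return image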
It is useful to show the output of the process_image() function, to
see the effect of cropping and resizing. Because of the normalization,
the colors will look strange, and matplotlib will warn about
clipping values.
pil_image = Image.open(IMAGE_FILENAME)
processed_image = process_image(np.array(pil_image))
print(f"Processed image shape: {processed_image.shape}")
# Processed image is in (C,H,W) format, convert to (H,W,C) to show the image
plt.imshow(np.transpose(processed_image, (1, 2, 0)))
To decode the labels predicted by the model into class names, we need a mapping between them. The model config contains information about class_id_map_file, which stores such a mapping. The code below shows how to parse the mapping into a dictionary to use with the OpenVINO model.
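A sketch of that parsing, assuming the config stores the file path under the PaddleClas PostProcess/Topk keys and each line has the format "<class_id> <class_name>":

# Sketch: the exact config keys are an assumption based on PaddleClas conventions
class_id_map_file = classifier.get_config()["PostProcess"]["Topk"]["class_id_map_file"]

class_id_map = {}
with open(class_id_map_file, "r") as fin:
    for line in fin:
        class_id, _, class_name = line.strip().partition(" ")
        class_id_map[int(class_id)] = class_name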
Call the OpenVINO Model Conversion API to convert the PaddlePaddle model to OpenVINO IR with FP32 precision. The ov.convert_model function accepts a path to the PaddlePaddle model and returns an OpenVINO Model instance that represents this model. The obtained model is ready to be compiled for a device with ov.compile_model or saved to disk with ov.save_model. See the Model Conversion Guide for more information about the Model Conversion API.
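A conversion sketch (the inference.pdmodel path is an assumption based on the layout of the extracted archive; adjust it if your directory differs):

import openvino as ov

model_xml = Path(f"{MODEL_NAME}.xml")
if not model_xml.exists():
    # The .pdmodel path below is assumed from the extracted archive layout
    ov_model = ov.convert_model(MODEL_DIR / f"{MODEL_NAME}_infer" / "inference.pdmodel")
    ov.save_model(ov_model, model_xml)
else:
    print(f"{model_xml} already exists.")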
Load the IR model, get model information, load the image, do inference, convert the inference result into meaningful output, and show it. See the OpenVINO Runtime API Notebook for more information.
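The device used below comes from a selection widget; a minimal version, assuming ipywidgets is available, is:

import ipywidgets as widgets

core = ov.Core()
device = widgets.Dropdown(
    options=core.available_devices + ["AUTO"],
    value="AUTO",
    description="Device:",
    disabled=False,
)
device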
# Load OpenVINO Runtime and OpenVINO IR model
core = ov.Core()
model = core.read_model(model_xml)
compiled_model = core.compile_model(model=model, device_name=device.value)

# Get model output
output_layer = compiled_model.output(0)

# Read, show, and preprocess input image
# See the "Show Inference on PaddlePaddle Model" section for source of process_image
image = Image.open(IMAGE_FILENAME)
plt.imshow(image)
input_image = process_image(np.array(image))[None,]

# Do inference
ov_result = compiled_model([input_image])[output_layer][0]

# find the top three values
top_indices = np.argsort(ov_result)[-3:][::-1]
top_scores = ov_result[top_indices]

# Convert the inference results to class names, using the same labels as the PaddlePaddle classifier
for index, softmax_probability in zip(top_indices, top_scores):
    print(f"{class_id_map[index]}, {softmax_probability:.5f}")
Measure the time it takes to do inference on fifty images and compare the results. The timing information gives an indication of performance.
For a fair comparison, we include the time it takes to process the
image. For more accurate benchmarking, use the OpenVINO benchmark
tool.
Note that many optimizations are possible to improve the performance.
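For reference, a typical benchmark_app invocation from a notebook cell might look like this (the model path and the 15-second run time are assumptions):

!benchmark_app -m MobileNetV3_large_x1_0.xml -d CPU -t 15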
num_images = 50

image = Image.open(fp=IMAGE_FILENAME)
import openvino.properties as props

# Show device information
core = ov.Core()
devices = core.available_devices

for device_name in devices:
    device_full_name = core.get_property(device_name, props.device.full_name)
    print(f"{device_name}: {device_full_name}")
CPU: Intel(R) Core(TM) i9-10920X CPU @ 3.50GHz
# Show inference speed on PaddlePaddle model
start = time.perf_counter()
for _ in range(num_images):
    result = next(classifier.predict(np.array(image)))
end = time.perf_counter()
time_ir = end - start
print(f"PaddlePaddle model on CPU: {time_ir/num_images:.4f} " f"seconds per image, FPS: {num_images/time_ir:.2f}\n")
print("PaddlePaddle result:")
class_names = result[0]["label_names"]
scores = result[0]["scores"]
for class_name, softmax_probability in zip(class_names, scores):
    print(f"{class_name}, {softmax_probability:.5f}")
plt.imshow(image);
# Show inference speed on OpenVINO IR model
compiled_model = core.compile_model(model=model, device_name=device.value)
output_layer = compiled_model.output(0)

start = time.perf_counter()
input_image = process_image(np.array(image))[None,]
for _ in range(num_images):
    ie_result = compiled_model([input_image])[output_layer][0]
    top_indices = np.argsort(ie_result)[-5:][::-1]
    top_softmax = ie_result[top_indices]
end = time.perf_counter()
time_ir = end - start

print(f"OpenVINO IR model in OpenVINO Runtime ({device.value}): {time_ir/num_images:.4f} " f"seconds per image, FPS: {num_images/time_ir:.2f}")
print()
print("OpenVINO result:")
for index, softmax_probability in zip(top_indices, top_softmax):
    print(f"{class_id_map[index]}, {softmax_probability:.5f}")
plt.imshow(image);