Live Inference and Benchmark CT-scan Data with OpenVINO™

This tutorial is also available as a Jupyter notebook that can be cloned directly from GitHub. See the installation guide for instructions to run this tutorial locally on Windows, Linux or macOS. To run without installing anything, click the launch binder button.


This tutorial is part of a series on how to train, optimize, quantize, and show live inference on a medical segmentation model. The goal is to accelerate inference on a kidney segmentation model. The UNet model is trained from scratch, and the data is from KiTS19.

This tutorial shows how to benchmark the performance of the model and how to show live inference with the async API and the MULTI plugin in OpenVINO.

This notebook needs a quantized OpenVINO IR model and images from the KiTS19 dataset, converted to 2D images. (To learn how the model is quantized, see the Convert and Quantize a UNet Model and Show Live Inference tutorial.)

This notebook provides a pre-trained model, trained for 20 epochs with the full KiTS19 frames dataset, which has an F1 score of 0.9 on the validation set. The training code is available in the PyTorch MONAI Training notebook.

For demonstration purposes, this tutorial will download one converted CT scan to use for inference.


import os
import sys
import zipfile
from pathlib import Path

import numpy as np
from monai.transforms import LoadImage
from openvino.inference_engine import IECore

from models.custom_segmentation import SegmentationModel
from notebook_utils import benchmark_model, download_file, show_live_inference


To use the quantized pre-trained model, set MODEL_PATH to "pretrained_model/quantized_unet_kits19.xml". To use the FP16 model instead, set it to "pretrained_model/unet_kits19.xml", as shown in the commented line in the cell below. To use a model that you trained or optimized yourself, adjust the model path.

# The path of the IR model .xml file. The .bin weights file is expected in the same directory.
MODEL_PATH = "pretrained_model/quantized_unet_kits19.xml"
# Uncomment the next line to use the FP16 model instead of the quantized model.
# MODEL_PATH = "pretrained_model/unet_kits19.xml"
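An OpenVINO IR model consists of an .xml file with the network topology and a .bin file with the weights, which is expected next to the .xml file. As a quick sanity check, you can verify that both files exist. This is a minimal sketch, assuming the MODEL_PATH set above:

# Sanity check: an IR model is an .xml (topology) plus a .bin (weights) file.
ir_xml = Path(MODEL_PATH)
ir_bin = ir_xml.with_suffix(".bin")
assert ir_xml.exists(), f"Model file not found: {ir_xml}"
assert ir_bin.exists(), f"Weights file not found: {ir_bin}"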

Benchmark Model Performance

To measure the inference performance of the IR model, use Benchmark Tool, the inference performance measurement tool in OpenVINO. Benchmark Tool is a command-line application that can be run in the notebook with ! benchmark_app or %sx benchmark_app commands.

This tutorial uses a wrapper function from Notebook Utils. It prints the benchmark_app command with the chosen parameters.

NOTE: For the most accurate performance estimation, it is recommended to run benchmark_app in a terminal/command prompt after closing other applications. Run benchmark_app --help to see all command-line options.
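For example, the benchmark in the next cell is equivalent to running the command below directly in a notebook cell (the flags match the command that the wrapper prints in the output further down):

! benchmark_app -m $MODEL_PATH -d CPU -t 15 -api async -b 1 -cdir model_cache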

ie = IECore()
# By default, benchmark on MULTI:CPU,GPU if a GPU is available, otherwise on CPU.
device = "MULTI:CPU,GPU" if "GPU" in ie.available_devices else "CPU"
# Uncomment one of the options below to benchmark on other devices.
# device = "GPU"
# device = "CPU"
# device = "AUTO"
# Benchmark model
benchmark_model(model_path=MODEL_PATH, device=device, seconds=15)

Benchmark quantized_unet_kits19.xml with CPU for 15 seconds with async inference

Benchmark command: benchmark_app -m pretrained_model/quantized_unet_kits19.xml -d CPU -t 15 -api async -b 1 -cdir model_cache

Traceback (most recent call last):
  File "/opt/home/k8sworker/cibuilds/ov-notebook/OVNotebookOps-189/.workspace/scm/ov-notebook/.venv/bin/benchmark_app", line 5, in <module>
    from openvino.tools.benchmark.main import main
  File "/opt/home/k8sworker/cibuilds/ov-notebook/OVNotebookOps-189/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/tools/benchmark/main.py", line 8, in <module>
    from openvino.runtime import Dimension
  File "/opt/home/k8sworker/cibuilds/ov-notebook/OVNotebookOps-189/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/runtime/__init__.py", line 18, in <module>
    from openvino.pyopenvino import Dimension
ImportError: Check 'versions_compatible' failed at /home/jenkins/agent/workspace/private-ci/ie/build-linux-ubuntu18/b/repos/openvino/src/bindings/python/src/pyopenvino/pyopenvino.cpp:85:
OpenVINO Python version (2022.1.0-7019-cdb9bec7210-releases/2022/1) mismatches with OpenVINO Runtime library version (2022.2.0-7557-72505b1d82f). It can happen if you have 2 or more different versions of OpenVINO installed in system. Please ensure that environment variables (e.g. PATH, PYTHONPATH) are set correctly so that OpenVINO Runtime and Python libraries point to same release.

Device: Intel(R) Core(TM) i9-10920X CPU @ 3.50GHz
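The traceback above comes from mismatched OpenVINO versions in the environment where this notebook was executed. If you see this error, you can check which versions Python finds; a minimal sketch with the inference engine package used in this notebook:

from openvino.inference_engine import IECore, get_version

# Version of the OpenVINO Python bindings.
print(get_version())
# Version of the runtime plugin for the CPU device; the two should match.
print(IECore().get_versions("CPU"))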

Download and Prepare Data

Download one CT scan from the validation set for live inference.

This tutorial reuses the KitsDataset class that is also used in the training and quantization notebook.

The data is expected in BASEDIR. The BASEDIR directory should contain the case_00000 to case_00299 subdirectories. If the data for the case specified above does not already exist, it will be downloaded and extracted in the next cell.

# Directory that contains the CT scan data. This directory should contain subdirectories
# case_00XXX, where XXX is between 000 and 299.
BASEDIR = Path("kits19_frames_1")
# The CT scan case number. For example: 16 for data from the case_00016 directory.
# Currently only 117 is supported.
CASE = 117

case_path = BASEDIR / f"case_{CASE:05d}"

if not case_path.exists():
    # Download the prepared frames for this case from the OpenVINO storage server.
    filename = download_file(
        f"https://storage.openvinotoolkit.org/data/test_data/openvino_notebooks/kits19/case_{CASE:05d}.zip"
    )
    with zipfile.ZipFile(filename, "r") as zip_ref:
        # Extract the converted 2D frames into BASEDIR.
        zip_ref.extractall(path=BASEDIR)
    os.remove(filename)  # remove zipfile
    print(f"Downloaded and extracted data for case_{CASE:05d}")
else:
    print(f"Data for case_{CASE:05d} exists")

0%|          | 0.00/5.48M [00:00<?, ?B/s]
Downloaded and extracted data for case_00117

Show Live Inference

To show live inference on the model in the notebook, use the asynchronous processing feature of OpenVINO Runtime.

If you use a GPU device, with device="GPU" or device="MULTI:CPU,GPU", to do inference on an integrated graphics card, model loading will be slow the first time you run this code. The model will be cached, so model loading will be faster on subsequent runs. For more information on OpenVINO Runtime, including model caching, refer to the OpenVINO API tutorial.
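Caching is controlled through the CACHE_DIR configuration key. A minimal sketch of enabling model caching on GPU with the IECore API, assuming a model_cache directory (the same directory that the benchmark command above passes with -cdir):

ie = IECore()
if "GPU" in ie.available_devices:
    # Compiled models are stored in, and reloaded from, this directory.
    ie.set_config({"CACHE_DIR": "model_cache"}, device_name="GPU")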

The show_live_inference function from Notebook Utils is used to show live inference. This function uses AsyncPipeline and Model API from Open Model Zoo to perform asynchronous inference. After inference on the specified CT scan has completed, the total time and throughput (fps), including preprocessing and displaying, will be printed.
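At its core, asynchronous inference means starting an inference request and doing other work, such as loading and preprocessing the next frame, while the request runs. The sketch below illustrates the idea with the Inference Engine API directly; it is not the Open Model Zoo AsyncPipeline implementation, and the zero-filled array stands in for a preprocessed CT frame:

net = ie.read_network(MODEL_PATH)
exec_net = ie.load_network(network=net, device_name="CPU", num_requests=2)
input_name = next(iter(net.input_info))
input_shape = net.input_info[input_name].tensor_desc.dims

request = exec_net.requests[0]
# Start inference without blocking; the dummy frame stands in for real input.
request.async_infer({input_name: np.zeros(input_shape, dtype=np.float32)})
# ... load and preprocess the next frame here while inference runs ...
request.wait()
result = next(iter(request.output_blobs.values())).buffer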

Load Model and List of Image Files

Load the segmentation model to OpenVINO Runtime with SegmentationModel, based on the Model API from Open Model Zoo. This model implementation includes preprocessing and postprocessing for the model. For SegmentationModel, this includes the code to create an overlay of the segmentation mask on the original image/frame. Uncomment the next cell to see the implementation; a standalone sketch of the overlay step is also shown after the cell below.

# SegmentationModel??
ie = IECore()
segmentation_model = SegmentationModel(
    ie=ie, model_path=Path(MODEL_PATH), sigmoid=True, rotate_and_flip=True
)
image_paths = sorted(case_path.glob("imaging_frames/*jpg"))

print(f"{case_path.name}, {len(image_paths)} images")
case_00117, 69 images
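The overlay that SegmentationModel creates can be pictured with a small standalone sketch. This is a hypothetical helper for illustration, not the Model API implementation; it blends a binary mask in red onto a grayscale frame:

import cv2

def overlay_mask(frame, mask, alpha=0.4):
    # frame: 2D uint8 grayscale image; mask: 2D array with nonzero values at kidney pixels.
    frame_rgb = cv2.cvtColor(frame, cv2.COLOR_GRAY2RGB)
    colored = np.zeros_like(frame_rgb)
    colored[mask > 0] = (255, 0, 0)
    # Weighted blend: keep most of the frame, tint the mask region.
    return cv2.addWeighted(frame_rgb, 1 - alpha, colored, alpha, 0)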

Show Inference

In the next cell, run the show_live_inference function, which loads segmentation_model to the specified device (using caching for faster model loading on GPU devices), loads the images, performs inference, and displays the results on the frames in real time.

Use the reader=LoadImage() function to read the images in the same way as in the training tutorial.
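To see what the reader returns, you can load a single frame first; a quick check, assuming the frames downloaded for case_00117 above:

# LoadImage with image_only=True returns just the image array, without metadata.
sample = LoadImage(image_only=True, dtype=np.uint8)(str(image_paths[0]))
print(sample.shape, sample.dtype)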

# Possible options for device include "CPU", "GPU", "AUTO", "MULTI:CPU,GPU".
device = "MULTI:CPU,GPU" if "GPU" in ie.available_devices else "CPU"
reader = LoadImage(image_only=True, dtype=np.uint8)

show_live_inference(
    ie=ie, image_paths=image_paths, model=segmentation_model, device=device, reader=reader
)
Loaded model to CPU in 0.22 seconds.
Total time for 68 frames: 1.94 seconds, fps:35.51