Live Object Detection with OpenVINO™¶
This tutorial is also available as a Jupyter notebook that can be cloned directly from GitHub. See the installation guide for instructions to run this tutorial locally on Windows, Linux or macOS.
This notebook demonstrates live object detection with OpenVINO, using the SSDLite MobileNetV2 model from Open Model Zoo. The final part of this notebook shows live inference results from a webcam. Additionally, you can upload a video file.
NOTE: To use this notebook with a webcam, you need to run the notebook on a computer with a webcam. If you run the notebook on a server, the webcam will not work. However, you can still do inference on a video.
Imports¶
import collections
import os
import sys
import time
import cv2
import numpy as np
from IPython import display
from openvino.runtime import Core
sys.path.append("../utils")
import notebook_utils as utils
The Model¶
Download the Model¶
Use omz_downloader, which is a command-line tool from the openvino-dev package. It automatically creates a directory structure and downloads the selected model. This step is skipped if the model is already downloaded. The selected model comes from the public directory, which means it must be converted into OpenVINO Intermediate Representation (OpenVINO IR).
If you want to download another model (ssdlite_mobilenet_v2, ssd_mobilenet_v1_coco, ssd_mobilenet_v2_coco, ssd_resnet50_v1_fpn_coco, ssd_mobilenet_v1_fpn_coco), replace the name of the model in the code below.
NOTE: Using a model outside the list can require different pre- and post-processing.
# A directory where the model will be downloaded.
base_model_dir = "model"
# The name of the model from Open Model Zoo
model_name = "ssdlite_mobilenet_v2"
download_command = f"omz_downloader " \
                   f"--name {model_name} " \
                   f"--output_dir {base_model_dir} " \
                   f"--cache_dir {base_model_dir}"
! $download_command
################|| Downloading ssdlite_mobilenet_v2 ||################
========== Downloading model/public/ssdlite_mobilenet_v2/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz
========== Unpacking model/public/ssdlite_mobilenet_v2/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz
Convert the Model¶
The pre-trained model is in TensorFlow format. To use it with OpenVINO, convert it to OpenVINO IR format. Use Model Converter (omz_converter), which is another command-line tool from the openvino-dev package. If you do not specify a precision, the model will be converted to all available precisions (FP32 and FP16 in this case). Each conversion should take up to 2 minutes. If the model has already been converted, this step is skipped.
NOTE: Each model may have different precisions available.
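One way to check which precisions a model offers is the omz_info_dumper tool from the same openvino-dev package (a minimal sketch; the JSON output layout may vary between releases):
# Optional: print model metadata, including the available precisions.
info_command = f"omz_info_dumper --name {model_name}"
! $info_command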
precision = "FP16"
# The output path for the conversion.
converted_model_path = f"model/public/{model_name}/{precision}/{model_name}.xml"
if not os.path.exists(converted_model_path):
    convert_command = f"omz_converter " \
                      f"--name {model_name} " \
                      f"--download_dir {base_model_dir} " \
                      f"--precisions {precision}"
    ! $convert_command
========== Converting ssdlite_mobilenet_v2 to IR (FP16)
Conversion command: /opt/home/k8sworker/cibuilds/ov-notebook/OVNotebookOps-275/.workspace/scm/ov-notebook/.venv/bin/python -- /opt/home/k8sworker/cibuilds/ov-notebook/OVNotebookOps-275/.workspace/scm/ov-notebook/.venv/bin/mo --framework=tf --data_type=FP16 --output_dir=model/public/ssdlite_mobilenet_v2/FP16 --model_name=ssdlite_mobilenet_v2 --input=image_tensor --reverse_input_channels --output=detection_scores,detection_boxes,num_detections --transformations_config=/opt/home/k8sworker/cibuilds/ov-notebook/OVNotebookOps-275/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/tools/mo/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config=model/public/ssdlite_mobilenet_v2/ssdlite_mobilenet_v2_coco_2018_05_09/pipeline.config --input_model=model/public/ssdlite_mobilenet_v2/ssdlite_mobilenet_v2_coco_2018_05_09/frozen_inference_graph.pb '--layout=image_tensor(NHWC)' '--input_shape=[1, 300, 300, 3]'
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /opt/home/k8sworker/cibuilds/ov-notebook/OVNotebookOps-275/.workspace/scm/ov-notebook/notebooks/401-object-detection-webcam/model/public/ssdlite_mobilenet_v2/ssdlite_mobilenet_v2_coco_2018_05_09/frozen_inference_graph.pb
- Path for generated IR: /opt/home/k8sworker/cibuilds/ov-notebook/OVNotebookOps-275/.workspace/scm/ov-notebook/notebooks/401-object-detection-webcam/model/public/ssdlite_mobilenet_v2/FP16
- IR output name: ssdlite_mobilenet_v2
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: image_tensor
- Output layers: detection_scores,detection_boxes,num_detections
- Input shapes: [1, 300, 300, 3]
- Source layout: Not specified
- Target layout: Not specified
- Layout: image_tensor(NHWC)
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP16
- Enable fusing: True
- User transformations: Not specified
- Reverse input channels: True
- Enable IR generation for fixed input shape: False
- Use the transformations config file: /opt/home/k8sworker/cibuilds/ov-notebook/OVNotebookOps-275/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/tools/mo/front/tf/ssd_v2_support.json
Advanced parameters:
- Force the usage of legacy Frontend of Model Optimizer for model conversion into IR: False
- Force the usage of new Frontend of Model Optimizer for model conversion into IR: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: /opt/home/k8sworker/cibuilds/ov-notebook/OVNotebookOps-275/.workspace/scm/ov-notebook/notebooks/401-object-detection-webcam/model/public/ssdlite_mobilenet_v2/ssdlite_mobilenet_v2_coco_2018_05_09/pipeline.config
- Use the config file: None
OpenVINO runtime found in: /opt/home/k8sworker/cibuilds/ov-notebook/OVNotebookOps-275/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino
OpenVINO runtime version: 2022.2.0-7713-af16ea1d79a-releases/2022/2
Model Optimizer version: 2022.2.0-7713-af16ea1d79a-releases/2022/2
[ WARNING ] The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /opt/home/k8sworker/cibuilds/ov-notebook/OVNotebookOps-275/.workspace/scm/ov-notebook/notebooks/401-object-detection-webcam/model/public/ssdlite_mobilenet_v2/FP16/ssdlite_mobilenet_v2.xml
[ SUCCESS ] BIN file: /opt/home/k8sworker/cibuilds/ov-notebook/OVNotebookOps-275/.workspace/scm/ov-notebook/notebooks/401-object-detection-webcam/model/public/ssdlite_mobilenet_v2/FP16/ssdlite_mobilenet_v2.bin
[ SUCCESS ] Total execution time: 34.02 seconds.
[ SUCCESS ] Memory consumed: 543 MB.
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai
Load the Model¶
Downloaded models are located in a fixed directory structure, which indicates a vendor (intel or public), the name of the model, and a precision.
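For the model used in this tutorial, that structure looks as follows (FP16 precision, matching the paths in the conversion log above):
model/public/ssdlite_mobilenet_v2/FP16/ssdlite_mobilenet_v2.xml
model/public/ssdlite_mobilenet_v2/FP16/ssdlite_mobilenet_v2.bin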
Only a few lines of code are required to run the model. First, initialize OpenVINO Runtime. Then, read the network architecture and model weights from the .xml and .bin files and compile them for the desired device. If you choose GPU, you need to wait for a while, as the startup time is much longer than in the case of CPU.
You can also let OpenVINO decide which hardware offers the best performance. In that case, just use AUTO. Keep in mind that in most cases the best hardware is GPU (better performance, but longer startup time).
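Before compiling, you can check which devices OpenVINO Runtime detects on your machine (a minimal sketch; the returned names depend on your hardware):
# Print the devices OpenVINO Runtime can use, for example ['CPU'] or ['CPU', 'GPU'].
print(Core().available_devices)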
# Initialize OpenVINO Runtime.
ie_core = Core()
# Read the network and corresponding weights from a file.
model = ie_core.read_model(model=converted_model_path)
# Compile the model for CPU (you can manually choose CPU, GPU, MYRIAD, and so on)
# or let the engine choose the best available device (AUTO).
compiled_model = ie_core.compile_model(model=model, device_name="CPU")
# Get the input and output nodes.
input_layer = compiled_model.input(0)
output_layer = compiled_model.output(0)
# Get the input size.
height, width = list(input_layer.shape)[1:3]
The input and output layers carry the names of the input node and output node, respectively. In the case of SSDLite MobileNetV2, there is one input and one output.
input_layer.any_name, output_layer.any_name
('image_tensor', 'detection_boxes')
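You can also inspect the layer shapes directly. For this model, they should match the shapes reported in the conversion log above:
# Print the input and output shapes of the compiled model.
print(list(input_layer.shape))   # [1, 300, 300, 3] - an NHWC image batch
print(list(output_layer.shape))  # [1, 1, 100, 7] - up to 100 detections per image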
Processing¶
Process Results¶
First, list all available classes and create colors for them. Then, in the post-processing stage, transform boxes with normalized coordinates [0, 1] into boxes with pixel coordinates [0, image_size_in_px]. Afterward, use non-maximum suppression to reject overlapping detections and those below the probability threshold (0.6 by default in the function below). Finally, draw boxes and labels inside them.
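As a toy illustration of how cv2.dnn.NMSBoxes behaves (hypothetical boxes, not part of the pipeline below), two heavily overlapping candidates collapse to the single higher-scoring one:
# Two nearly identical boxes plus one separate box, given as (x, y, w, h).
candidate_boxes = [(10, 10, 100, 100), (12, 12, 100, 100), (200, 200, 50, 50)]
candidate_scores = [0.9, 0.8, 0.7]
keep = cv2.dnn.NMSBoxes(
    bboxes=candidate_boxes, scores=candidate_scores,
    score_threshold=0.6, nms_threshold=0.6,
)
print(keep.flatten())  # [0 2] - the second box is suppressed by the first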
# https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/
classes = [
"background", "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train",
"truck", "boat", "traffic light", "fire hydrant", "street sign", "stop sign",
"parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant",
"bear", "zebra", "giraffe", "hat", "backpack", "umbrella", "shoe", "eye glasses",
"handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite",
"baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle",
"plate", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple",
"sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair",
"couch", "potted plant", "bed", "mirror", "dining table", "window", "desk", "toilet",
"door", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven",
"toaster", "sink", "refrigerator", "blender", "book", "clock", "vase", "scissors",
"teddy bear", "hair drier", "toothbrush", "hair brush"
]
# Colors for the classes above (Rainbow Color Map).
colors = cv2.applyColorMap(
    src=np.arange(0, 255, 255 / len(classes), dtype=np.float32).astype(np.uint8),
    colormap=cv2.COLORMAP_RAINBOW,
).squeeze()
def process_results(frame, results, thresh=0.6):
    # The size of the original frame.
    h, w = frame.shape[:2]
    # The 'results' variable is a [1, 1, 100, 7] tensor.
    results = results.squeeze()
    boxes = []
    labels = []
    scores = []
    for _, label, score, xmin, ymin, xmax, ymax in results:
        # Create a box with pixel coordinates from the box with normalized coordinates [0,1].
        boxes.append(
            tuple(map(int, (xmin * w, ymin * h, (xmax - xmin) * w, (ymax - ymin) * h)))
        )
        labels.append(int(label))
        scores.append(float(score))

    # Apply non-maximum suppression to get rid of many overlapping entities.
    # See https://paperswithcode.com/method/non-maximum-suppression
    # This algorithm returns indices of objects to keep.
    indices = cv2.dnn.NMSBoxes(
        bboxes=boxes, scores=scores, score_threshold=thresh, nms_threshold=0.6
    )

    # If there are no boxes.
    if len(indices) == 0:
        return []

    # Filter detected objects.
    return [(labels[idx], scores[idx], boxes[idx]) for idx in indices.flatten()]
def draw_boxes(frame, boxes):
    for label, score, box in boxes:
        # Choose a color for the label.
        color = tuple(map(int, colors[label]))
        # Draw a box.
        x2 = box[0] + box[2]
        y2 = box[1] + box[3]
        cv2.rectangle(img=frame, pt1=box[:2], pt2=(x2, y2), color=color, thickness=3)

        # Draw a label name inside the box.
        cv2.putText(
            img=frame,
            text=f"{classes[label]} {score:.2f}",
            org=(box[0] + 10, box[1] + 30),
            fontFace=cv2.FONT_HERSHEY_COMPLEX,
            fontScale=frame.shape[1] / 1000,
            color=color,
            thickness=1,
            lineType=cv2.LINE_AA,
        )

    return frame
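To see the two helpers working together, here is a small sanity check with fabricated network output (hypothetical values; each row follows the [image_id, label, score, xmin, ymin, xmax, ymax] layout of this model's output):
# Fabricate a [1, 1, 100, 7] result tensor containing a single confident "person".
fake_results = np.zeros((1, 1, 100, 7), dtype=np.float32)
fake_results[0, 0, 0] = [0, 1, 0.95, 0.1, 0.1, 0.4, 0.5]
fake_frame = np.zeros((480, 640, 3), dtype=np.uint8)

detections = process_results(frame=fake_frame, results=fake_results)
print(detections)  # roughly [(1, 0.95, (64, 48, 192, 192))]
# Draw the single detection onto the black frame.
fake_frame = draw_boxes(frame=fake_frame, boxes=detections)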
Main Processing Function¶
Run object detection on the specified source, either a webcam or a video file.
# Main processing function to run object detection.
def run_object_detection(source=0, flip=False, use_popup=False, skip_first_frames=0):
    player = None
    try:
        # Create a video player to play with target fps.
        player = utils.VideoPlayer(
            source=source, flip=flip, fps=30, skip_first_frames=skip_first_frames
        )
        # Start capturing.
        player.start()
        if use_popup:
            title = "Press ESC to Exit"
            cv2.namedWindow(
                winname=title, flags=cv2.WINDOW_GUI_NORMAL | cv2.WINDOW_AUTOSIZE
            )

        processing_times = collections.deque()

        while True:
            # Grab the frame.
            frame = player.next()
            if frame is None:
                print("Source ended")
                break
            # If the frame is larger than full HD, reduce size to improve the performance.
            scale = 1280 / max(frame.shape)
            if scale < 1:
                frame = cv2.resize(
                    src=frame,
                    dsize=None,
                    fx=scale,
                    fy=scale,
                    interpolation=cv2.INTER_AREA,
                )

            # Resize the image and change dims to fit neural network input.
            input_img = cv2.resize(
                src=frame, dsize=(width, height), interpolation=cv2.INTER_AREA
            )
            # Create a batch of images (size = 1).
            input_img = input_img[np.newaxis, ...]

            # Measure processing time.
            start_time = time.time()
            # Get the results.
            results = compiled_model([input_img])[output_layer]
            stop_time = time.time()
            # Get detections from network results.
            boxes = process_results(frame=frame, results=results)

            # Draw boxes on a frame.
            frame = draw_boxes(frame=frame, boxes=boxes)

            processing_times.append(stop_time - start_time)
            # Use processing times from the last 200 frames.
            if len(processing_times) > 200:
                processing_times.popleft()

            _, f_width = frame.shape[:2]
            # Mean processing time [ms].
            processing_time = np.mean(processing_times) * 1000
            fps = 1000 / processing_time
            cv2.putText(
                img=frame,
                text=f"Inference time: {processing_time:.1f}ms ({fps:.1f} FPS)",
                org=(20, 40),
                fontFace=cv2.FONT_HERSHEY_COMPLEX,
                fontScale=f_width / 1000,
                color=(0, 0, 255),
                thickness=1,
                lineType=cv2.LINE_AA,
            )

            # Use this workaround if there is flickering.
            if use_popup:
                cv2.imshow(winname=title, mat=frame)
                key = cv2.waitKey(1)
                # escape = 27
                if key == 27:
                    break
            else:
                # Encode numpy array to jpg.
                _, encoded_img = cv2.imencode(
                    ext=".jpg", img=frame, params=[cv2.IMWRITE_JPEG_QUALITY, 100]
                )
                # Create an IPython image.
                i = display.Image(data=encoded_img)
                # Display the image in this notebook.
                display.clear_output(wait=True)
                display.display(i)
    # ctrl-c
    except KeyboardInterrupt:
        print("Interrupted")
    # Any different error.
    except RuntimeError as e:
        print(e)
    finally:
        if player is not None:
            # Stop capturing.
            player.stop()
        if use_popup:
            cv2.destroyAllWindows()
Run¶
Run Live Object Detection¶
Use a webcam as the video input. By default, the primary webcam is set with source=0. If you have multiple webcams, each one will be assigned a consecutive number starting at 0. Set flip=True when using a front-facing camera. Some web browsers, especially Mozilla Firefox, may cause flickering. If you experience flickering, set use_popup=True.
NOTE: To use this notebook with a webcam, you need to run the notebook on a computer with a webcam. If you run the notebook on a server (for example, Binder), the webcam will not work. Popup mode may not work if you run this notebook on a remote computer (for example, Binder).
Run the object detection:
run_object_detection(source=0, flip=True, use_popup=False)
Cannot open camera 0
Run Object Detection on a Video File¶
If you do not have a webcam, you can still run this demo with a video file. Any format supported by OpenCV will work.
video_file = "../201-vision-monodepth/data/Coco Walking in Berkeley.mp4"
run_object_detection(source=video_file, flip=False, use_popup=False)

Source ended