Person Tracking with OpenVINO™

This tutorial is also available as a Jupyter notebook that can be cloned directly from GitHub. See the installation guide for instructions to run this tutorial locally on Windows, Linux or macOS.


This notebook demonstrates live person tracking with OpenVINO: it reads frames from an input video sequence, detects the people in each frame, uniquely identifies each of them, and tracks all of them until they leave the frame. We use the Deep SORT algorithm to perform object tracking, an extension of SORT (Simple Online and Realtime Tracking).

Detection vs Tracking

  • In object detection, we detect an object in a frame, put a bounding box or a mask around it, and classify the object. Note that the job of the detector ends here: it processes each frame independently and identifies numerous objects in that particular frame.

  • An object tracker, on the other hand, needs to track a particular object across the entire video. If the detector detects 3 cars in a frame, the object tracker has to identify the 3 separate detections and track them across the subsequent frames (with the help of a unique ID).


Deep SORT can be defined as a tracking algorithm that tracks objects based not only on the velocity and motion of an object but also on its appearance. It is made of 3 key components, which are as follows:

  1. Detection

    This is the first step in the tracking module. In this step, an object detector detects the objects in the frame that are to be tracked. These detections are then passed on to the next step.

  2. Prediction

    In this step, we use the Kalman filter¹ framework to predict a target bounding box for each tracked object in the next frame.

  3. Data association and update

    We now have to match the target bounding box with the detected bounding box, and update track identities.

    The cost used for the first matching step is a combination of the Mahalanobis and the cosine distances. The Mahalanobis distance incorporates motion information, while the cosine distance measures the appearance similarity between two objects. The cosine distance helps the model recover identities after long-term occlusion, when motion estimation alone fails. Combining these two simple cues makes the tracker considerably more robust and accurate.

    The second matching step computes the intersection-over-union (IOU) distance between each detection and all predicted bounding boxes from the existing targets. The assignment is solved optimally using the Hungarian algorithm². If the IOU between a detection and a target is less than a certain threshold value called IOU_min, that assignment is rejected. This technique helps resolve occlusions and maintain consistent IDs (a minimal sketch of this matching step follows the references below).

¹ R. Kalman, “A New Approach to Linear Filtering and Prediction Problems”, Journal of Basic Engineering, vol. 82, no. Series D, pp. 35-45, 1960. ↩

² H. W. Kuhn, “The Hungarian method for the assignment problem”, Naval Research Logistics Quarterly, vol. 2, pp. 83-97, 1955. ↩
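
To make the IOU matching step concrete, below is a minimal illustrative sketch, not the implementation used by deepsort_utils: it builds a 1 - IOU cost matrix between predicted track boxes and current detections, solves the assignment with the Hungarian algorithm via scipy.optimize.linear_sum_assignment, and rejects matches whose IOU falls below IOU_min. The helper names iou and iou_match are hypothetical.

import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(box_a, box_b):
    # Boxes are [x1, y1, x2, y2]; compute intersection-over-union.
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def iou_match(predicted, detected, iou_min=0.3):
    # Cost is 1 - IOU, so the optimal assignment maximizes total overlap.
    cost = np.array([[1 - iou(p, d) for d in detected] for p in predicted])
    rows, cols = linear_sum_assignment(cost)
    # Reject assignments whose IOU falls below the IOU_min threshold.
    return [(r, c) for r, c in zip(rows, cols) if 1 - cost[r, c] >= iou_min]

# Example: two predicted track boxes matched against two detections.
tracks = [[0, 0, 10, 10], [20, 20, 30, 30]]
detections = [[21, 19, 31, 31], [1, 0, 11, 10]]
print(iou_match(tracks, detections))  # [(0, 1), (1, 0)]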


import collections
import sys
import time

import numpy as np
import cv2
from IPython import display
import matplotlib.pyplot as plt
from openvino.runtime import Core

import notebook_utils as utils

from deepsort_utils.tracker import Tracker
from deepsort_utils.nn_matching import NearestNeighborDistanceMetric
from deepsort_utils.detection import Detection, compute_color_for_labels, xywh_to_xyxy, xywh_to_tlwh, tlwh_to_xyxy

Download the Model

We use pre-trained models from OpenVINO’s Open Model Zoo to start our test.

Use omz_downloader, which is a command-line tool from the openvino-dev package. It automatically creates a directory structure and downloads the selected model. This step is skipped if the model is already downloaded. The selected models come from the intel directory of Open Model Zoo, so they are already provided in OpenVINO Intermediate Representation (OpenVINO IR) format and need no conversion.

NOTE: Using a model outside the list can require different pre- and post-processing.

In this case, the person detection model is deployed to detect people in each frame of the video, and the reidentification model is used to output an embedding vector for each detected person, so that pairs of person images can be matched by the cosine distance.

If you want to download another model (person-detection-xxx from the Object Detection Models list, person-reidentification-retail-xxx from the Reidentification Models list), replace the name of the model in the code below.

# A directory where the model will be downloaded.
base_model_dir = "model"
precision = "FP16"
# The name of the model from Open Model Zoo
detection_model_name = "person-detection-0202"

download_command = f"omz_downloader " \
                   f"--name {detection_model_name} " \
                   f"--precisions {precision} " \
                   f"--output_dir {base_model_dir} " \
                   f"--cache_dir {base_model_dir}"
! $download_command

detection_model_path = f"model/intel/{detection_model_name}/{precision}/{detection_model_name}.xml"

reidentification_model_name = "person-reidentification-retail-0287"

download_command = f"omz_downloader " \
                   f"--name {reidentification_model_name} " \
                   f"--precisions {precision} " \
                   f"--output_dir {base_model_dir} " \
                   f"--cache_dir {base_model_dir}"
! $download_command

reidentification_model_path = f"model/intel/{reidentification_model_name}/{precision}/{reidentification_model_name}.xml"
################|| Downloading person-detection-0202 ||################

========== Downloading model/intel/person-detection-0202/FP16/person-detection-0202.xml

========== Downloading model/intel/person-detection-0202/FP16/person-detection-0202.bin

################|| Downloading person-reidentification-retail-0287 ||################

========== Downloading model/intel/person-reidentification-retail-0287/person-reidentification-retail-0267.onnx

========== Downloading model/intel/person-reidentification-retail-0287/FP16/person-reidentification-retail-0287.xml

========== Downloading model/intel/person-reidentification-retail-0287/FP16/person-reidentification-retail-0287.bin

Load model

Define a common class for model loading and predicting.

There are 4 main steps for OpenVINO model initialization, and they need to run only once before the inference loop:

  1. Initialize OpenVINO Runtime.
  2. Read the network from *.bin and *.xml files (weights and architecture).
  3. Compile the model for the device.
  4. Get input and output names of nodes.

In this case, we can put them all in a class constructor function.

To let OpenVINO automatically select the best device for inference, just use AUTO. In most cases, the best device to use is GPU (better performance, but slightly longer startup time).
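
If you are curious which devices are available on your machine, the optional snippet below prints them; any of the listed names, or AUTO, can be passed as the device argument of the Model class defined next.

from openvino.runtime import Core

# List inference devices visible to OpenVINO Runtime, e.g. ['CPU', 'GPU'].
print(Core().available_devices)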

ie_core = Core()

class Model:
    """
    This class represents an OpenVINO model object.
    """

    def __init__(self, model_path, batchsize=1, device="AUTO"):
        """
        Initialize the model object

        model_path: path of inference model
        batchsize: batch size of input data
        device: device used to run inference
        """
        self.model = ie_core.read_model(model=model_path)
        self.input_layer = self.model.input(0)
        self.input_shape = self.input_layer.shape
        self.height = self.input_shape[2]
        self.width = self.input_shape[3]

        for layer in self.model.inputs:
            input_shape = layer.partial_shape
            input_shape[0] = batchsize
            self.model.reshape({layer: input_shape})
        self.compiled_model = ie_core.compile_model(model=self.model, device_name=device)
        self.output_layer = self.compiled_model.output(0)

    def predict(self, input):
        """
        Run inference

        input: array of input data
        """
        result = self.compiled_model(input)[self.output_layer]
        return result

detector = Model(detection_model_path)
# Since the number of detected objects is uncertain, the input batch size of the reid model should be dynamic.
extractor = Model(reidentification_model_path, -1)
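
As an optional sanity check, assuming the Model class above, you can verify that reshaping with batchsize=-1 made the reid model's batch dimension dynamic; the first dimension of the partial shape should print as dynamic.

# The first dimension should be dynamic after reshaping with batchsize=-1.
print(extractor.model.input(0).partial_shape)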

Data Processing

Data processing includes data preprocessing and postprocessing:

  • The data preprocessing function changes the layout and shape of the input data to match the requirements of the network input format.
  • The data postprocessing function extracts the useful information from the network’s raw output and visualizes it.

def preprocess(frame, height, width):
    """
    Preprocess a single image

    frame: input frame
    height: height of model input data
    width: width of model input data
    """
    resized_image = cv2.resize(frame, (width, height))
    resized_image = resized_image.transpose((2, 0, 1))
    input_image = np.expand_dims(resized_image, axis=0).astype(np.float32)
    return input_image

def batch_preprocess(img_crops, height, width):
    """
    Preprocess batched images

    img_crops: batched input images
    height: height of model input data
    width: width of model input data
    """
    img_batch = np.concatenate([
        preprocess(img, height, width)
        for img in img_crops
    ], axis=0)
    return img_batch

def process_results(h, w, results, thresh=0.5):
    """
    Postprocess detection results

    h, w: original height and width of input image
    results: raw detection network output
    thresh: threshold for low confidence filtering
    """
    # The 'results' variable is a [1, 1, N, 7] tensor.
    detections = results.reshape(-1, 7)
    boxes = []
    labels = []
    scores = []
    for i, detection in enumerate(detections):
        _, label, score, xmin, ymin, xmax, ymax = detection
        # Filter detected objects.
        if score > thresh:
            # Create a box with pixels coordinates from the box with normalized coordinates [0,1].
            boxes.append(
                [(xmin + xmax) / 2 * w, (ymin + ymax) / 2 * h, (xmax - xmin) * w, (ymax - ymin) * h]
            )
            labels.append(int(label))
            scores.append(float(score))

    if len(boxes) == 0:
        boxes = np.array([]).reshape(0, 4)
        scores = np.array([])
        labels = np.array([])
    return np.array(boxes), np.array(scores), np.array(labels)

def draw_boxes(img, bbox, identities=None):
    """
    Draw bounding boxes in original image

    img: original image
    bbox: coordinates of bounding boxes
    identities: identities IDs
    """
    for i, box in enumerate(bbox):
        x1, y1, x2, y2 = [int(i) for i in box]
        # box text and bar
        id = int(identities[i]) if identities is not None else 0
        color = compute_color_for_labels(id)
        label = '{}{:d}'.format("", id)
        t_size = cv2.getTextSize(label, cv2.FONT_HERSHEY_PLAIN, 2, 2)[0]
        cv2.rectangle(img, (x1, y1), (x2, y2), color, 3)
        # Filled bar behind the ID text.
        cv2.rectangle(
            img, (x1, y1), (x1 + t_size[0] + 3, y1 + t_size[1] + 4), color, -1)
        cv2.putText(
            img,
            label,
            (x1, y1 + t_size[1] + 4),
            cv2.FONT_HERSHEY_PLAIN,
            2,
            [255, 255, 255],
            2)
    return img

def cosin_metric(x1, x2):
    """
    Calculate the cosine similarity of two vectors

    x1, x2: input vectors
    """
    return np.dot(x1, x2) / (np.linalg.norm(x1) * np.linalg.norm(x2))
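
As a quick sanity check of the metric: identical vectors should yield a similarity of 1.0 and orthogonal vectors 0.0.

# Identical vectors -> 1.0; orthogonal vectors -> 0.0.
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
print(cosin_metric(a, a), cosin_metric(a, b))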

Test person reidentification model

The reidentification network outputs a blob with the (1, 256) shape named reid_embedding which can be compared with other descriptors using the cosine distance.

Visualize data

image1 = cv2.cvtColor(cv2.imread("../data/image/person_1_1.png"), cv2.COLOR_BGR2RGB)
image2 = cv2.cvtColor(cv2.imread("../data/image/person_1_2.png"), cv2.COLOR_BGR2RGB)
image3 = cv2.cvtColor(cv2.imread("../data/image/person_2_1.png"), cv2.COLOR_BGR2RGB)

# Define titles with images.
data = {"Person 1": image1, "Person 2": image2, "Person 3": image3}

# Create a subplot to visualize images.
fig, axs = plt.subplots(1, len(data.items()), figsize=(5, 5))

# Fill the subplot.
for ax, (name, image) in zip(axs, data.items()):
    ax.axis('off')
    ax.set_title(name)
    ax.imshow(image)

# Display an image.
plt.show()

Compare two people

# Metric parameters
MAX_COSINE_DISTANCE = 0.6  # threshold of matching object
input_data = [image2, image3]
img_batch = batch_preprocess(input_data, extractor.height, extractor.width)
features = extractor.predict(img_batch)
sim = cosin_metric(features[0], features[1])
if sim >= 1 - MAX_COSINE_DISTANCE:
    print(f'Same person (confidence: {sim})')
else:
    print(f'Different person (confidence: {sim})')
Different person (confidence: 0.0272662453353405)

Main Processing Function

Run person tracking on the specified source, either a webcam or a video file.

# Main processing function to run person tracking.
def run_person_tracking(source=0, flip=False, use_popup=False, skip_first_frames=0):
    """
    Main function to run the person tracking:
    1. Create a video player to play with target fps (utils.VideoPlayer).
    2. Prepare a set of frames for person tracking.
    3. Run AI inference for person tracking.
    4. Visualize the results.

    Parameters:
        source: The webcam number to feed the video stream with primary webcam set to "0", or the video path.
        flip: To be used by VideoPlayer function for flipping capture image.
        use_popup: False for showing encoded frames over this notebook, True for creating a popup window.
        skip_first_frames: Number of frames to skip at the beginning of the video.
    """
    player = None
    try:
        # Create a video player to play with target fps.
        player = utils.VideoPlayer(
            source=source, flip=flip, fps=30, skip_first_frames=skip_first_frames
        )
        # Start capturing.
        player.start()
        if use_popup:
            title = "Press ESC to Exit"
            cv2.namedWindow(
                winname=title, flags=cv2.WINDOW_GUI_NORMAL | cv2.WINDOW_AUTOSIZE
            )

        processing_times = collections.deque()
        while True:
            # Grab the frame.
            frame = player.next()
            if frame is None:
                print("Source ended")
                break
            # If the frame is larger than full HD, reduce size to improve the performance.
            if frame.shape[1] > 1920:
                frame = cv2.resize(
                    src=frame,
                    dsize=None,
                    fx=1920 / frame.shape[1],
                    fy=1920 / frame.shape[1],
                    interpolation=cv2.INTER_AREA,
                )

            # Resize the image and change dims to fit neural network input.
            h, w = frame.shape[:2]
            input_image = preprocess(frame, detector.height, detector.width)

            # Measure processing time.
            start_time = time.time()
            # Get the results.
            output = detector.predict(input_image)
            stop_time = time.time()
            processing_times.append(stop_time - start_time)
            if len(processing_times) > 200:
                processing_times.popleft()

            _, f_width = frame.shape[:2]
            # Mean processing time [ms].
            processing_time = np.mean(processing_times) * 1000
            fps = 1000 / processing_time

            # Get poses from detection results.
            bbox_xywh, score, label = process_results(h, w, results=output)

            img_crops = []
            for box in bbox_xywh:
                x1, y1, x2, y2 = xywh_to_xyxy(box, h, w)
                img = frame[y1:y2, x1:x2]
                img_crops.append(img)

            # Get reidentification feature of each person.
            if img_crops:
                # preprocess
                img_batch = batch_preprocess(img_crops, extractor.height, extractor.width)
                features = extractor.predict(img_batch)
            else:
                features = np.array([])

            # Wrap the detection and reidentification results together.
            bbox_tlwh = xywh_to_tlwh(bbox_xywh)
            detections = [
                Detection(bbox_tlwh[i], features[i])
                for i in range(features.shape[0])
            ]

            # Predict the position of each tracking target.
            tracker.predict()

            # Update the tracker with the current detections.
            tracker.update(detections)

            # Update bbox identities.
            outputs = []
            for track in tracker.tracks:
                if not track.is_confirmed() or track.time_since_update > 1:
                    continue
                box = track.to_tlwh()
                x1, y1, x2, y2 = tlwh_to_xyxy(box, h, w)
                track_id = track.track_id
                outputs.append(np.array([x1, y1, x2, y2, track_id], dtype=np.int32))
            if len(outputs) > 0:
                outputs = np.stack(outputs, axis=0)

            # Draw boxes for visualization.
            if len(outputs) > 0:
                bbox_tlwh = []
                bbox_xyxy = outputs[:, :4]
                identities = outputs[:, -1]
                frame = draw_boxes(frame, bbox_xyxy, identities)

            cv2.putText(
                img=frame,
                text=f"Inference time: {processing_time:.1f}ms ({fps:.1f} FPS)",
                org=(20, 40),
                fontFace=cv2.FONT_HERSHEY_COMPLEX,
                fontScale=f_width / 1000,
                color=(0, 0, 255),
                thickness=1,
                lineType=cv2.LINE_AA,
            )

            if use_popup:
                cv2.imshow(winname=title, mat=frame)
                key = cv2.waitKey(1)
                # escape = 27
                if key == 27:
                    break
            else:
                # Encode numpy array to jpg.
                _, encoded_img = cv2.imencode(
                    ext=".jpg", img=frame, params=[cv2.IMWRITE_JPEG_QUALITY, 100]
                )
                # Create an IPython image.
                i = display.Image(data=encoded_img)
                # Display the image in this notebook.
                display.clear_output(wait=True)
                display.display(i)

    # ctrl-c
    except KeyboardInterrupt:
        print("Interrupted")
    # any different error
    except RuntimeError as e:
        print(e)
    finally:
        if player is not None:
            # Stop capturing.
            player.stop()
        if use_popup:
            cv2.destroyAllWindows()

Initialize tracker

Before running a new tracking task, we have to reinitialize a Tracker object.

NN_BUDGET = 100
MAX_COSINE_DISTANCE = 0.6  # threshold of matching object
metric = NearestNeighborDistanceMetric(
    "cosine", MAX_COSINE_DISTANCE, NN_BUDGET
)
tracker = Tracker(
    metric,
    max_iou_distance=0.7,
    max_age=70,
    n_init=3
)

Run Live Person Tracking

Use a webcam as the video input. By default, the primary webcam is set with source=0. If you have multiple webcams, each one will be assigned a consecutive number starting at 0. Set flip=True when using a front-facing camera. Some web browsers, especially Mozilla Firefox, may cause flickering. If you experience flickering, set use_popup=True.

run_person_tracking(source=0, flip=True, use_popup=False)
Cannot open camera 0
[ WARN:0@8.882] global cap_v4l.cpp:982 open VIDEOIO(V4L2:/dev/video0): can't open camera by index
[ERROR:0@8.883] global obsensor_uvc_stream_channel.cpp:156 getStreamChannelGroup Camera index out of range

Run Person Tracking on a Video File

If you do not have a webcam, you can still run this demo with a video file. Any format supported by OpenCV will work.

video_file = "../data/video/people.mp4"
run_person_tracking(source=video_file, flip=False, use_popup=False)
Source ended