Instance Segmentation Python* Demo

This demo shows how to run Instance Segmentation models from Detectron or maskrcnn-benchmark using OpenVINO™. These models should be obtained through OpenVINO™ Training Extensions (OTE).

NOTE: Only batch size of 1 is supported.

How It Works

The demo application expects an instance segmentation model in the Intermediate Representation (IR) format with the following constraints:

  • Two inputs: im_data for input image and im_info for meta-information about the image (actual height, width and scale).
  • At least four outputs including:
    • boxes with absolute bounding box coordinates of detected objects in the input image
    • scores with confidence scores for all bounding boxes
    • classes with object class IDs for all bounding boxes
    • raw_masks with fixed-size segmentation heat maps for all classes of all bounding boxes
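
Before running the demo, you can check that a converted model meets these constraints. The following is a minimal sketch assuming the openvino.inference_engine Python API (2021.x) that this demo is built on; the model file names are only examples:

from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network('instance-segmentation-security-0050.xml',
                      'instance-segmentation-security-0050.bin')

# Two inputs: the image itself and its meta-information.
assert 'im_data' in net.input_info and 'im_info' in net.input_info

# At least the four outputs the demo relies on.
for name in ('boxes', 'scores', 'classes', 'raw_masks'):
    assert name in net.outputs, 'missing output: ' + name

n, c, h, w = net.input_info['im_data'].input_data.shape
print('im_data expects batch=%d, channels=%d, size=%dx%d' % (n, c, h, w))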

As input, the demo application takes:

  • a path to a single image file, a video file, or a numeric ID of a web camera, specified with the -i command-line argument

The demo workflow is the following:

  1. The demo application reads image/video frames one by one and resizes them to fit the input image blob of the network (im_data); see the preprocessing sketch after this list.
  2. The im_info input blob passes the resulting resolution and scale of the pre-processed image to the network, which then performs inference.
  3. The demo visualizes the resulting instance segmentation masks. Certain command-line options affect the visualization:
    • If you specify the --show_boxes and --show_scores arguments, bounding boxes and confidence scores are also shown.
    • By default, tracking is used to show an object instance with the same color throughout the whole video. It assumes a more or less static scene: instances in two consecutive frames are considered part of the same track if the intersection over union (IoU) of their masks exceeds the 0.5 threshold (see the IoU sketch after this list). To disable tracking, specify the --no_track argument.
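
A minimal preprocessing sketch for steps 1 and 2, assuming an OpenCV/NumPy pipeline (the preprocess helper below is illustrative, not the demo's exact code):

import cv2
import numpy as np

def preprocess(frame, target_h, target_w, keep_aspect_ratio=True):
    # Scale the frame so it fits into the (target_h, target_w) im_data blob.
    if keep_aspect_ratio:
        scale_x = scale_y = min(target_h / frame.shape[0], target_w / frame.shape[1])
    else:
        scale_y, scale_x = target_h / frame.shape[0], target_w / frame.shape[1]
    resized = cv2.resize(frame, None, fx=scale_x, fy=scale_y)
    # Pad with zeros to the exact blob size.
    im_data = np.zeros((target_h, target_w, 3), dtype=frame.dtype)
    im_data[:resized.shape[0], :resized.shape[1]] = resized
    # im_info carries the actual height, width and scale of the resized image.
    im_info = np.array([resized.shape[0], resized.shape[1], scale_x], dtype=np.float32)
    # HWC -> NCHW with a batch dimension of 1 (the only supported batch size).
    return im_data.transpose(2, 0, 1)[np.newaxis], im_info[np.newaxis]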
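
The tracking criterion from the last bullet reduces to a mask IoU test; here is a plain NumPy sketch (not the demo's actual tracker):

import numpy as np

def mask_iou(mask_a, mask_b):
    # Boolean masks of the same shape: IoU = |A & B| / |A | B|.
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return intersection / union if union > 0 else 0.0

# Instances in consecutive frames continue the same track when
# mask_iou(previous_mask, current_mask) > 0.5.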

NOTE: By default, Open Model Zoo demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with the --reverse_input_channels argument specified. For more information about the argument, refer to the When to Reverse Input Channels section of Converting a Model Using General Conversion Parameters.
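
For example, assuming the Model Optimizer entry point mo.py is available in your environment, reconversion might look like this (the exact arguments depend on your source framework and model):

python3 mo.py --input_model <path_to_model> --reverse_input_channels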

Running

Run the application with the -h option to see the following usage message:

usage: instance_segmentation_demo.py [-h] -m "<path>" --labels "<path>"
                                     -i "<path>" [-d "<device>"]
                                     [-l "<absolute_path>"] [--delay "<num>"]
                                     [-pt "<num>"] [--no_keep_aspect_ratio]
                                     [--no_track] [--show_scores]
                                     [--show_boxes] [-pc] [-r] [--no_show]
                                     [-u UTILIZATION_MONITORS]

Options:
  -h, --help            Show this help message and exit.
  -m "<path>", --model "<path>"
                        Required. Path to an .xml file with a trained model.
  --labels "<path>"     Required. Path to a text file with class labels.
  -i "<path>"           Required. Path to an image, a video file, or a
                        numeric camera ID.
  -d "<device>", --device "<device>"
                        Optional. Specify the target device to infer on: CPU,
                        GPU, FPGA, HDDL or MYRIAD. The demo will look for a
                        suitable plugin for the specified device (CPU by
                        default).
  -l "<absolute_path>", --cpu_extension "<absolute_path>"
                        Required for CPU custom layers. Absolute path to a
                        shared library with the kernels implementation.
  --delay "<num>"       Optional. Interval in milliseconds of waiting for a
                        key to be pressed.
  -pt "<num>", --prob_threshold "<num>"
                        Optional. Probability threshold for filtering
                        detections.
  --no_keep_aspect_ratio
                        Optional. Force image resize not to keep aspect
                        ratio.
  --no_track            Optional. Disable tracking.
  --show_scores         Optional. Show detection scores.
  --show_boxes          Optional. Show bounding boxes.
  -pc, --perf_counts    Optional. Report performance counters.
  -r, --raw_output_message
                        Optional. Output inference results as raw values.
  --no_show             Optional. Don't show output.
  -u UTILIZATION_MONITORS, --utilization_monitors UTILIZATION_MONITORS
                        Optional. List of monitors to show initially.

Running the application with an empty list of options yields the short version of the usage message and an error message.

To run the demo, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO Model Downloader or go to https://download.01.org/opencv/.
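
For example, the model used in the command below can be fetched with the downloader script (here <omz_dir> is a placeholder for your Open Model Zoo checkout):

python3 <omz_dir>/tools/downloader/downloader.py --name instance-segmentation-security-0050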

NOTE: Before running the demo with a trained model, make sure the model is converted to the Inference Engine format (*.xml + *.bin) using the Model Optimizer tool.

To run the demo, please provide paths to the model in the IR format, to a file with class labels, and to an input image, video file, or camera ID:

python3 instance_segmentation_demo/instance_segmentation_demo.py \
-m <path_to_model>/instance-segmentation-security-0050.xml \
--labels instance_segmentation_demo/coco_labels.txt \
--no_keep_aspect_ratio \
-i 0 \
--delay 1

Demo Output

The application uses OpenCV to display resulting instance segmentation masks and current inference performance.
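
For illustration, overlaying a single instance mask on a frame with OpenCV can be done as in the following sketch (a simplified stand-in for the demo's visualizer, with a hypothetical draw_mask helper):

import cv2

def draw_mask(frame, mask, color=(0, 255, 0), alpha=0.5):
    # Paint the mask area in a solid color, then blend it with the frame.
    overlay = frame.copy()
    overlay[mask > 0] = color
    return cv2.addWeighted(overlay, alpha, frame, 1 - alpha, 0)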

See Also