yolo-v2-tf#

Use Case and High-Level Description#

YOLO v2 is a real-time object detection model implemented with Keras* (see the original repository referenced in the Conversion section below) and converted to the TensorFlow* framework. This model was pre-trained on the Common Objects in Context (COCO) dataset with 80 classes.

Conversion#

  1. Download or clone the original repository (the conversion was tested on commit d38c3d8).

  2. Use the following commands to get the original model (named yolov2 in the repository) and convert it to Keras* format (see details in the README.md file in the official repository):

    1. Download YOLO v2 weights:

      wget -O weights/yolov2.weights https://pjreddie.com/media/files/yolov2.weights
      
    2. Convert model weights to Keras*:

      python tools/model_converter/convert.py cfg/yolov2.cfg weights/yolov2.weights weights/yolov2.h5
      
  3. Convert the model to protobuf (a quick way to inspect the result is sketched below):

    python tools/model_converter/keras_to_tensorflow.py --input_model weights/yolov2.h5 --output_model=weights/yolo-v2.pb
    
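If you want to sanity-check the resulting weights/yolo-v2.pb graph, a minimal sketch along the following lines can list its input and output nodes (it assumes TensorFlow is installed; the node-listing heuristic is only an illustration and is not part of the official conversion flow):

```python
# Minimal sketch: inspect the frozen graph produced above (assumption: TensorFlow 2.x
# with the v1 compatibility API; the path matches the conversion command above).
import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with open("weights/yolo-v2.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Placeholders are the graph inputs; nodes that nothing else consumes are candidate outputs.
consumed = {inp.split(":")[0].lstrip("^") for node in graph_def.node for inp in node.input}
inputs = [n.name for n in graph_def.node if n.op == "Placeholder"]
outputs = [n.name for n in graph_def.node if n.name not in consumed and n.op != "Const"]
print("inputs:", inputs)
print("outputs:", outputs)
```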

Specification#

| Metric           | Value     |
| ---------------- | --------- |
| Type             | Detection |
| GFLOPs           | 63.03     |
| MParams          | 50.95     |
| Source framework | Keras*    |

Accuracy#

Accuracy metrics were obtained on the Common Objects in Context (COCO) validation dataset for the converted model.

| Metric   | Value  |
| -------- | ------ |
| mAP      | 53.15% |
| COCO mAP | 56.5%  |

Input#

Original model#

Image, name - image_input, shape - 1, 608, 608, 3, format is B, H, W, C, where:

  • B - batch size

  • H - height

  • W - width

  • C - channel

Channel order is RGB. Scale value - 255.
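Below is a minimal preprocessing sketch for the original model, assuming OpenCV and NumPy are available (the 608x608 size, RGB channel order, and division by 255 follow the description above; the function name is illustrative):

```python
# Minimal preprocessing sketch for the original model (assumptions: OpenCV, NumPy).
import cv2
import numpy as np

def preprocess_original(image_path):
    bgr = cv2.imread(image_path)                    # OpenCV reads images as BGR
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)      # the original model expects RGB
    resized = cv2.resize(rgb, (608, 608))
    scaled = resized.astype(np.float32) / 255.0     # scale value - 255
    return np.expand_dims(scaled, axis=0)           # shape 1, 608, 608, 3
```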

Converted model#

Image, name - image_input, shape - 1, 608, 608, 3, format is B, H, W, C, where:

  • B - batch size

  • H - height

  • W - width

  • C - channel

Channel order is BGR.
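For the converted model, a minimal inference sketch with the OpenVINO™ Runtime Python API could look as follows; the IR path is an assumed Model Converter output layout, the input stays in BGR order, and no manual scaling is applied because the scale is handled during conversion:

```python
# Minimal inference sketch for the converted model (assumptions: OpenVINO Runtime,
# OpenCV, NumPy; the IR path below is an assumed omz_converter layout, adjust as needed).
import cv2
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("public/yolo-v2-tf/FP32/yolo-v2-tf.xml")  # assumed path
compiled = core.compile_model(model, "CPU")

bgr = cv2.resize(cv2.imread("input.jpg"), (608, 608)).astype(np.float32)
result = compiled(np.expand_dims(bgr, axis=0))       # input shape 1, 608, 608, 3, BGR
output = result[compiled.output(0)]                  # detection summary info
```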

Output#

Original model#

The array of detection summary info, name - conv2d_22/BiasAdd, shape - 1, 19, 19, 425, format is B, Cx, Cy, N*85, where:

  • B - batch size

  • Cx, Cy - cell index

  • N - number of detection boxes for cell

Detection box has format [x, y, h, w, box_score, class_no_1, …, class_no_80], where:

  • (x, y) - raw coordinates of box center, apply sigmoid function to get coordinates relative to the cell

  • h, w - raw height and width of box, apply exponential function and multiply by corresponding anchors to get height and width values relative to the cell

  • box_score - confidence of detection box, apply sigmoid function to get confidence in [0, 1] range

  • class_no_1, …, class_no_80 - probability distribution over the classes in logits format, apply softmax function and multiply by obtained confidence value to get confidence of each class.

The model was trained on the Common Objects in Context (COCO) dataset version with 80 categories of objects. The mapping to class names is provided in the <omz_dir>/data/dataset_classes/coco_80cl.txt file. The anchor values are 0.57273,0.677385, 1.87446,2.06253, 3.33843,5.47434, 7.88282,3.52778, 9.77052,9.16828.
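As an illustration of the decoding rules above, a minimal NumPy sketch for a single cell could look like this (the anchor pairs are assumed to be in (width, height) order, and the indexing follows the B, Cx, Cy, N*85 layout stated above):

```python
# Minimal decoding sketch for the raw conv2d_22/BiasAdd output of shape 1, 19, 19, 425
# (5 boxes x 85 values per cell). Assumption: anchor pairs are (width, height).
import numpy as np

ANCHORS = [(0.57273, 0.677385), (1.87446, 2.06253), (3.33843, 5.47434),
           (7.88282, 3.52778), (9.77052, 9.16828)]

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def decode_cell(out, cx, cy, n):
    """Decode detection box n of cell (cx, cy)."""
    box = out[0, cx, cy, n * 85:(n + 1) * 85]
    x, y = sigmoid(box[0]), sigmoid(box[1])              # box center, relative to the cell
    h = np.exp(box[2]) * ANCHORS[n][1]                   # height, relative to the cell
    w = np.exp(box[3]) * ANCHORS[n][0]                   # width, relative to the cell
    box_score = sigmoid(box[4])                          # objectness in [0, 1]
    logits = box[5:]
    class_probs = np.exp(logits - np.max(logits))        # softmax over the 80 classes
    class_probs /= class_probs.sum()
    return x, y, h, w, box_score * class_probs           # per-class confidences
```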

Converted model#

The array of detection summary info, name - conv2d_22/BiasAdd/YoloRegion, shape - 1, 153425, which can be reshaped to 1, 425, 19, 19 with format B, N*85, Cx, Cy, where:

  • B - batch size

  • N - number of detection boxes for cell

  • Cx, Cy - cell index

Detection box has format [x, y, h, w, box_score, class_no_1, …, class_no_80], where:

  • (x, y) - coordinates of box center relative to the cell

  • h, w - raw height and width of box, apply exponential function and multiply with corresponding anchors to get height and width values relative to the cell

  • box_score - confidence of detection box in [0, 1] range

  • class_no_1, …, class_no_80 - probability distribution over the classes in the [0, 1] range, multiply by confidence value to get confidence of each class

The model was trained on the Common Objects in Context (COCO) dataset version with 80 categories of objects. The mapping to class names is provided in the <omz_dir>/data/dataset_classes/coco_80cl.txt file. The anchor values are 0.57273,0.677385, 1.87446,2.06253, 3.33843,5.47434, 7.88282,3.52778, 9.77052,9.16828.
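A corresponding sketch for the converted model output, where only the height and width still need the exponential and anchor multiplication (same assumptions about anchor ordering as above; the indexing follows the B, N*85, Cx, Cy layout):

```python
# Minimal decoding sketch for the converted-model output: the flat 1 x 153425 tensor
# is reshaped to 1, 425, 19, 19; activations are already applied except for h and w.
import numpy as np

ANCHORS = [(0.57273, 0.677385), (1.87446, 2.06253), (3.33843, 5.47434),
           (7.88282, 3.52778), (9.77052, 9.16828)]

def decode_converted_cell(flat_out, cx, cy, n):
    """Decode detection box n of cell (cx, cy) from the flat output."""
    out = flat_out.reshape(1, 425, 19, 19)               # B, N*85, Cx, Cy
    box = out[0, n * 85:(n + 1) * 85, cx, cy]
    x, y = box[0], box[1]                                # already relative to the cell
    h = np.exp(box[2]) * ANCHORS[n][1]                   # anchor pair assumed (width, height)
    w = np.exp(box[3]) * ANCHORS[n][0]
    box_score = box[4]                                   # already in [0, 1]
    return x, y, h, w, box_score * box[5:]               # per-class confidences
```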

Download a Model and Convert it into OpenVINO™ IR Format#

You can download models and, if necessary, convert them into OpenVINO™ IR format using the Model Downloader and other automation tools, as shown in the examples below.

An example of using the Model Downloader:

omz_downloader --name <model_name>

An example of using the Model Converter:

omz_converter --name <model_name>

Demo usage#

The model can be used in the following demos provided by the Open Model Zoo to show its capabilities: