# yolo-v4-tiny-tf

## Use Case and High-Level Description

YOLO v4 Tiny is a real-time object detection model based on the "YOLOv4: Optimal Speed and Accuracy of Object Detection" paper. It was implemented in the Keras* framework and converted to the TensorFlow* framework. For details, see the repository. This model was pre-trained on the Common Objects in Context (COCO) dataset with 80 classes.

| Metric           | Value     |
| ---------------- | --------- |
| Type             | Detection |
| GFLOPs           | 6.9289    |
| MParams          | 6.0535    |
| Source framework | Keras*    |

## Accuracy

Accuracy metrics were obtained on the Common Objects in Context (COCO) validation dataset for the converted model.

| Metric                   | Value |
| ------------------------ | ----- |
| mAP                      | 0.403 |
| COCO mAP (0.5)           | 0.463 |
| COCO mAP (0.5:0.05:0.95) | 0.226 |

## Input

### Original model

Image, name - `input_1`, shape - `1, 416, 416, 3`, format is `B, H, W, C`, where:

- `B` - batch size
- `H` - height
- `W` - width
- `C` - channel

Channel order is `RGB`. Scale value - 255.
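As a minimal sketch, the preprocessing above (divide by the scale value 255 and add a batch dimension) can be written in NumPy; the function name is illustrative, and resizing the image to 416×416 RGB is assumed to happen beforehand:

```python
import numpy as np

def preprocess(image_rgb):
    """Prepare a 416x416 RGB uint8 image for the original model.

    Returns a float32 tensor of shape (1, 416, 416, 3) with values
    scaled to [0, 1] (scale value 255, B, H, W, C layout).
    """
    assert image_rgb.shape == (416, 416, 3)
    x = image_rgb.astype(np.float32) / 255.0   # apply scale value 255
    return np.expand_dims(x, axis=0)           # add batch dimension B
```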

### Converted model

Image, name - `input_1`, shape - `1, 3, 416, 416`, format is `B, C, H, W`, where:

- `B` - batch size
- `C` - channel
- `H` - height
- `W` - width

Channel order is `BGR`.

## Output

### Original model

1. The array of detection summary info, name - `conv2d_20/BiasAdd`, shape - `1, 26, 26, 255`. The anchor values are `23,27, 37,58, 81,82`.

2. The array of detection summary info, name - `conv2d_17/BiasAdd`, shape - `1, 13, 13, 255`. The anchor values are `81,82, 135,169, 344,319`.

For each case the format is `B, Cx, Cy, N*85`, where:

- `B` - batch size
- `Cx`, `Cy` - cell index
- `N` - number of detection boxes per cell

Each detection box has the format [`x`, `y`, `h`, `w`, `box_score`, `class_no_1`, …, `class_no_80`], where:

- (`x`, `y`) - raw coordinates of the box center; apply the sigmoid function to get coordinates relative to the cell
- `h`, `w` - raw height and width of the box; apply the exponential function and multiply by the corresponding anchors to get absolute height and width values
- `box_score` - confidence of the detection box; apply the sigmoid function to get confidence in the [0, 1] range
- `class_no_1`, …, `class_no_80` - probability distribution over the classes in logits format; apply the sigmoid function and multiply by the obtained box confidence to get the confidence of each class

The model was trained on the Common Objects in Context (COCO) dataset version with 80 categories of objects. Mapping to class names is provided in the `<omz_dir>/data/dataset_classes/coco_80cl.txt` file.
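The decoding steps above can be sketched for a single raw 85-element box vector. This is an illustrative helper, not part of the model package: `decode_cell` is a made-up name, and grid-offset and image-scale bookkeeping is left to the caller.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def decode_cell(raw_box, anchor_w, anchor_h):
    """Decode one raw [x, y, h, w, box_score, class_0..class_79] vector.

    Applies the transforms described in the text: sigmoid on the center
    coordinates and the box score, exp * anchor on height/width, and
    sigmoid * box confidence on the class logits.
    """
    x = sigmoid(raw_box[0])                  # center x, relative to the cell
    y = sigmoid(raw_box[1])                  # center y, relative to the cell
    h = np.exp(raw_box[2]) * anchor_h        # absolute height
    w = np.exp(raw_box[3]) * anchor_w        # absolute width
    conf = sigmoid(raw_box[4])               # box confidence in [0, 1]
    class_conf = sigmoid(raw_box[5:]) * conf # per-class confidences
    return x, y, h, w, conf, class_conf
```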

### Converted model

1. The array of detection summary info, name - `conv2d_20/BiasAdd/Add`, shape - `1, 255, 26, 26`. The anchor values are `23,27, 37,58, 81,82`.

2. The array of detection summary info, name - `conv2d_17/BiasAdd/Add`, shape - `1, 255, 13, 13`. The anchor values are `81,82, 135,169, 344,319`.

For each case the format is `B, N*85, Cx, Cy`, where:

- `B` - batch size
- `N` - number of detection boxes per cell
- `Cx`, `Cy` - cell index

Each detection box has the format [`x`, `y`, `h`, `w`, `box_score`, `class_no_1`, …, `class_no_80`], where:

- (`x`, `y`) - raw coordinates of the box center; apply the sigmoid function to get coordinates relative to the cell
- `h`, `w` - raw height and width of the box; apply the exponential function and multiply by the corresponding anchors to get absolute height and width values
- `box_score` - confidence of the detection box; apply the sigmoid function to get confidence in the [0, 1] range
- `class_no_1`, …, `class_no_80` - probability distribution over the classes in logits format; apply the sigmoid function and multiply by the obtained box confidence to get the confidence of each class

The model was trained on the Common Objects in Context (COCO) dataset version with 80 categories of objects. Mapping to class names is provided in the `<omz_dir>/data/dataset_classes/coco_80cl.txt` file.
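As an illustrative sketch, the channel-first output described above can be rearranged so that each detection box's 85 attributes are contiguous before decoding; the function name and the use of NumPy are assumptions, not part of the model package:

```python
import numpy as np

def split_boxes_nchw(output, num_boxes=3, num_attrs=85):
    """Rearrange a converted-model output of shape (B, N*85, Cx, Cy)
    into (Cx, Cy, N, 85), so each box's attribute vector is contiguous.

    Assumes batch size 1 and the B, N*85, Cx, Cy layout from the text.
    """
    b, c, cx, cy = output.shape
    assert b == 1 and c == num_boxes * num_attrs
    # split the channel axis into (box index, attribute index)
    boxes = output.reshape(b, num_boxes, num_attrs, cx, cy)
    # move the spatial cell indices first, attributes last
    return np.transpose(boxes[0], (2, 3, 0, 1))
```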

## Download a Model and Convert

You can download the model and, if necessary, convert it with the Model Downloader tools:

```sh
python3 <omz_dir>/tools/downloader/downloader.py --name <model_name>
python3 <omz_dir>/tools/downloader/converter.py --name <model_name>
```