YOLO v4 is a real-time object detection model based on the "YOLOv4: Optimal Speed and Accuracy of Object Detection" paper. It was implemented in the Keras* framework and converted to the TensorFlow* framework. For details, see the repository. The model was pretrained on the COCO* dataset with 80 classes.
## Specification

Metric | Value |
---|---|
Type | Detection |
GFLOPs | 128.608 |
MParams | 64.33 |
Source framework | Keras* |
## Accuracy

Accuracy metrics are obtained on the COCO* validation dataset for the converted model.

Metric | Value |
---|---|
mAP | 71.17% |
COCO* mAP (0.5) | 75.02% |
COCO* mAP (0.5:0.05:0.95) | 49.2% |
## Input

### Original model

Image, name - `input_1`, shape - `1, 608, 608, 3`, format is `B, H, W, C`, where:

- `B` - batch size
- `H` - height
- `W` - width
- `C` - channel

Channel order is `RGB`. Scale value - 255.
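The preprocessing above can be sketched in NumPy (a minimal example; the image is assumed to be already resized to 608x608, and the function name is illustrative):

```python
import numpy as np

def preprocess_original(image_rgb):
    """Prepare a 608x608 RGB uint8 image for the original model.

    Applies the scale value of 255 and adds the batch dimension,
    yielding a blob of shape (1, 608, 608, 3) in B, H, W, C layout.
    """
    blob = image_rgb.astype(np.float32) / 255.0  # scale value - 255
    return blob[np.newaxis, ...]                 # add batch dim -> B, H, W, C

# A synthetic RGB image stands in for a real, already-resized photo.
image = np.random.randint(0, 256, (608, 608, 3), dtype=np.uint8)
blob = preprocess_original(image)
print(blob.shape)  # (1, 608, 608, 3)
```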
### Converted model

Image, name - `input_1`, shape - `1, 3, 608, 608`, format is `B, C, H, W`, where:

- `B` - batch size
- `C` - channel
- `H` - height
- `W` - width

Channel order is `BGR`.
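Since the converted model takes `B, C, H, W` input with `BGR` channel order, an RGB `H, W, C` image needs a channel flip and a transpose. A sketch (the function name is illustrative; no scale value is specified for the converted model here, so scaling is assumed to be handled elsewhere):

```python
import numpy as np

def preprocess_converted(image_rgb):
    """Rearrange a 608x608 RGB uint8 image into the converted model's layout."""
    bgr = image_rgb[..., ::-1]                      # RGB -> BGR channel order
    chw = np.transpose(bgr, (2, 0, 1))              # H, W, C -> C, H, W
    return chw[np.newaxis, ...].astype(np.float32)  # -> 1, 3, 608, 608

image = np.random.randint(0, 256, (608, 608, 3), dtype=np.uint8)
blob = preprocess_converted(image)
print(blob.shape)  # (1, 3, 608, 608)
```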
## Output

### Original model

1. The blob `conv2d_93/BiasAdd`, shape - `1, 76, 76, 255`. The anchor values are `12,16, 19,36, 40,28`.
2. The blob `conv2d_101/BiasAdd`, shape - `1, 38, 38, 255`. The anchor values are `36,75, 76,55, 72,146`.
3. The blob `conv2d_109/BiasAdd`, shape - `1, 19, 19, 255`. The anchor values are `142,110, 192,243, 459,401`.

For each case, the format is `B, Cx, Cy, N*85`, where:

- `B` - batch size
- `Cx`, `Cy` - cell index
- `N` - number of detection boxes per cell

Detection box has format [`x`, `y`, `h`, `w`, `box_score`, `class_no_1`, ..., `class_no_80`], where:

- (`x`, `y`) - raw coordinates of the box center; apply the sigmoid function to get coordinates relative to the cell
- `h`, `w` - raw height and width of the box; apply the exponential function and multiply by the corresponding anchors to get absolute height and width values
- `box_score` - confidence of the detection box; apply the sigmoid function to get confidence in the [0, 1] range
- `class_no_1`, ..., `class_no_80` - probability distribution over the classes in logits format; apply the sigmoid function and multiply by the obtained confidence value to get the confidence of each class
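The decoding steps above can be sketched for one cell of the 76x76 output. This is a hedged example: the anchor pairs are assumed to be (width, height), and dividing the cell-relative center by the grid size to get image-relative coordinates is one common convention, not spelled out in the spec:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def decode_cell(raw, anchors, cx, cy, grid):
    """Decode the N detection boxes of one grid cell.

    raw     -- the (N*85,) slice taken at output[b, cx, cy, :]
    anchors -- N assumed (width, height) pairs, e.g. [(12, 16), (19, 36), (40, 28)]
    """
    boxes = []
    for i, (aw, ah) in enumerate(anchors):
        # Box layout: [x, y, h, w, box_score, class_no_1, ..., class_no_80]
        x, y, h, w, score = raw[i * 85 : i * 85 + 5]
        logits = raw[i * 85 + 5 : i * 85 + 85]
        bx = (cx + sigmoid(x)) / grid     # center x, relative to the image (assumed convention)
        by = (cy + sigmoid(y)) / grid     # center y, relative to the image (assumed convention)
        bh = np.exp(h) * ah               # absolute height via the anchor
        bw = np.exp(w) * aw               # absolute width via the anchor
        conf = sigmoid(score)             # box confidence in [0, 1]
        classes = sigmoid(logits) * conf  # per-class confidence
        boxes.append((bx, by, bw, bh, conf, classes))
    return boxes

# Random logits stand in for a real (N*85,) slice of the 76x76 output.
raw = np.random.randn(3 * 85).astype(np.float32)
boxes = decode_cell(raw, [(12, 16), (19, 36), (40, 28)], cx=0, cy=0, grid=76)
print(len(boxes))  # 3
```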
### Converted model

1. The blob `conv2d_93/BiasAdd/Add`, shape - `1, 255, 76, 76`. The anchor values are `12,16, 19,36, 40,28`.
2. The blob `conv2d_101/BiasAdd/Add`, shape - `1, 255, 38, 38`. The anchor values are `36,75, 76,55, 72,146`.
3. The blob `conv2d_109/BiasAdd/Add`, shape - `1, 255, 19, 19`. The anchor values are `142,110, 192,243, 459,401`.

For each case, the format is `B, N*85, Cx, Cy`, where:

- `B` - batch size
- `N` - number of detection boxes per cell
- `Cx`, `Cy` - cell index

Detection box has format [`x`, `y`, `h`, `w`, `box_score`, `class_no_1`, ..., `class_no_80`], where:

- (`x`, `y`) - raw coordinates of the box center; apply the sigmoid function to get coordinates relative to the cell
- `h`, `w` - raw height and width of the box; apply the exponential function and multiply by the corresponding anchors to get absolute height and width values
- `box_score` - confidence of the detection box; apply the sigmoid function to get confidence in the [0, 1] range
- `class_no_1`, ..., `class_no_80` - probability distribution over the classes in logits format; apply the sigmoid function and multiply by the obtained confidence value to get the confidence of each class

The original model is distributed under the following license: