horizontal-text-detection-0001

Use Case and High-Level Description

Text detector based on the FCOS architecture with a MobileNetV2-like backbone, intended for indoor/outdoor scenes with mostly horizontal text.

The key benefit of this model compared to the base model is its smaller size and faster performance.

Example

Specification

  Metric                                                            Value
  F-measure (harmonic mean of precision and recall on ICDAR2013)    88.45%
  GFlops                                                            7.78
  MParams                                                           2.26
  Source framework                                                  PyTorch*
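
For reference, a minimal sketch (not part of the model card itself) of how the F-measure reported above combines precision and recall; the 88.45% figure comes from the ICDAR2013 evaluation, not from this snippet:

```python
def f_measure(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall, as reported in the table above."""
    return 2 * precision * recall / (precision + recall)
```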

Inputs

Image, name: image, shape: 1, 3, 704, 704 in the format 1, C, H, W, where:

  • C - number of channels

  • H - image height

  • W - image width

Expected color order - BGR.
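
For illustration, a minimal preprocessing sketch, assuming OpenCV and NumPy are available and that the source frame is resized directly to the 704 x 704 network resolution (any further normalization is assumed to be embedded in the converted model):

```python
import cv2
import numpy as np

def preprocess(image_path: str) -> np.ndarray:
    """Convert an image file into a 1, 3, 704, 704 BGR blob in N, C, H, W layout."""
    frame = cv2.imread(image_path)           # OpenCV already loads images in BGR order
    resized = cv2.resize(frame, (704, 704))  # match the network input H, W
    chw = resized.transpose(2, 0, 1)         # HWC -> CHW
    return np.expand_dims(chw, axis=0)       # add the batch dimension -> 1, C, H, W
```

A direct resize is the simplest option; a letterbox-style resize that preserves the aspect ratio is a common alternative.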

Outputs

  1. The boxes output is a blob with the shape 100, 5 in the format N, 5, where N is the number of detected bounding boxes. For each detection, the description has the format [x_min, y_min, x_max, y_max, conf], where:

    • (x_min, y_min) - coordinates of the top left bounding box corner

    • (x_max, y_max) - coordinates of the bottom right bounding box corner

    • conf - confidence for the predicted class

  2. The labels output is a blob with the shape 100 in the format N, where N is the number of detected bounding boxes. In the case of text detection, the label is equal to 0 for each detected box (a small post-processing sketch follows this list).
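
A hedged post-processing sketch, assuming the two output blobs have already been retrieved from an inference result as NumPy arrays named boxes (shape 100, 5) and labels (shape 100), and that low-confidence entries are padding to be discarded; the 0.4 threshold is an arbitrary choice, not a documented default:

```python
import numpy as np

def filter_detections(boxes: np.ndarray, labels: np.ndarray, conf_threshold: float = 0.4):
    """Keep detections whose confidence exceeds the (assumed) threshold."""
    results = []
    for (x_min, y_min, x_max, y_max, conf), label in zip(boxes, labels):
        if conf < conf_threshold:
            continue  # skip padding entries and weak detections
        results.append({
            "bbox": (int(x_min), int(y_min), int(x_max), int(y_max)),
            "confidence": float(conf),
            "label": int(label),  # always 0 for this text-detection model
        })
    return results
```

If the input image was resized for inference, remember to scale the returned coordinates back to the original image dimensions before drawing them.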

Training Pipeline

The OpenVINO Training Extensions provide a training pipeline that allows you to fine-tune the model on a custom dataset.

Demo usage

The model can be used in the following demos provided by the Open Model Zoo to show its capabilities: