colorization-siggraph

Use Case and High-Level Description

The colorization-siggraph model is one of the colorization group of models designed for real-time user-guided image colorization. The model was trained on the ImageNet dataset with synthetically generated user interactions. For details about this family of models, check out the repository.

The model takes the L-channel of a LAB image as input (user points and a binary mask can be supplied as optional inputs) and predicts the A- and B-channels of the LAB image.

Specification

| Metric           | Value        |
|------------------|--------------|
| Type             | Colorization |
| GFLOPs           | 150.5441     |
| MParams          | 34.0511      |
| Source framework | PyTorch*     |

Accuracy

The accuracy metrics were calculated between images generated by the model and real validation images from the ImageNet dataset. Results are obtained on a subset of 2000 images.

| Metric | Value    |
|--------|----------|
| PSNR   | 27.73 dB |
| SSIM   | 0.92     |
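As an illustration of how the PSNR figure above is obtained, here is a minimal NumPy sketch of the metric's standard definition (the exact evaluation script is not shown here, so treat the details as an assumption):

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio between two images of the same shape."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

# Toy example: a flat image vs. the same image shifted by 16 gray levels.
ref = np.zeros((256, 256, 3), dtype=np.uint8)
test = np.full_like(ref, 16)
print(round(psnr(ref, test), 2))  # -> 24.05
```

Higher PSNR means the colorized result is numerically closer to the ground-truth validation image.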

Metrics can also be calculated using the VGG16 Caffe model with colorization as a preprocessing step. The results below are obtained on the validation images from the ImageNet dataset.

With the rgb -> gray -> colorization preprocessing pipeline, the received values are:

| Metric         | Value with preprocessing | Value without preprocessing |
|----------------|--------------------------|-----------------------------|
| Accuracy top-1 | 58.25%                   | 70.96%                      |
| Accuracy top-5 | 81.78%                   | 89.88%                      |
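The rgb -> gray step of that pipeline can be sketched as a weighted luma conversion. The BT.601 weights below are one common convention, not necessarily the exact conversion used to produce the table:

```python
import numpy as np

# rgb -> gray using ITU-R BT.601 luma weights (an assumption; the
# evaluation pipeline's exact conversion is not specified here).
def rgb_to_gray(rgb):
    weights = np.array([0.299, 0.587, 0.114])  # weights sum to 1.0
    return rgb.astype(np.float64) @ weights

# Usage: a 2x2 all-white RGB patch maps to an all-255 grayscale patch.
white = np.full((2, 2, 3), 255.0)
gray = rgb_to_gray(white)  # shape (2, 2)
```

The grayscale image is then re-colorized by the model before being fed to the VGG16 classifier.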

Input

  1. Image, name - data_l, shape - 1, 1, 256, 256, format is B, C, H, W, where:

    • B - batch size

    • C - channel

    • H - height

    • W - width

L-channel of the LAB image.

  2. Image, name - user_ab, shape - 1, 2, 256, 256, format is B, C, H, W, where:

    • B - batch size

    • C - channel

    • H - height

    • W - width

Channel order is AB channels of the LAB image. This input carries the user-provided color points.

  3. Mask, name - user_map, shape - 1, 1, 256, 256, format is B, C, H, W, where:

    • B - batch size

    • C - number of flags per pixel

    • H - height

    • W - width

    This input is a binary mask indicating which points were provided by the user; it differentiates unspecified points from user-specified gray points with (a,b) = 0. If a point (pixel) was specified, the flag is equal to 1.

Note

You don’t need to specify all 3 inputs to use the model. If you don’t want to use local user hints (user points), you can provide only the data_l input. In this case, the remaining inputs (user_ab and user_map) must be filled with zeros.
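Following the note above, preparing the three input blobs without user hints reduces to zero-filling user_ab and user_map. A minimal NumPy sketch (the synthetic L-channel gradient stands in for a real RGB-to-LAB conversion, e.g. with OpenCV or scikit-image):

```python
import numpy as np

H = W = 256  # input resolution from the spec above

# In practice data_l comes from converting the input frame to LAB,
# e.g. cv2.cvtColor(img, cv2.COLOR_BGR2LAB); a gradient stands in here.
l_channel = np.tile(np.linspace(0.0, 100.0, W, dtype=np.float32), (H, 1))
data_l = l_channel[np.newaxis, np.newaxis]           # shape (1, 1, 256, 256)

# No local user hints: per the note, user_ab and user_map are all zeros.
user_ab = np.zeros((1, 2, H, W), dtype=np.float32)   # shape (1, 2, 256, 256)
user_map = np.zeros((1, 1, H, W), dtype=np.float32)  # shape (1, 1, 256, 256)

# Feed dictionary keyed by the input names listed above.
inputs = {"data_l": data_l, "user_ab": user_ab, "user_map": user_map}
```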

Output

Image, name - color_ab, shape - 1, 2, 256, 256, format is B, C, H, W, where:

  • B - batch size

  • C - channel

  • H - height

  • W - width

Channel order is AB channels of the LAB image.
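To reassemble a full LAB image from this output, stack the input L-channel with the predicted AB channels along the channel axis. A sketch with a zero-filled stand-in for color_ab (the final LAB-to-RGB conversion is left to a library such as OpenCV):

```python
import numpy as np

# Stand-ins: the network's real output (color_ab) and its L-channel input.
color_ab = np.zeros((1, 2, 256, 256), dtype=np.float32)
data_l = np.full((1, 1, 256, 256), 50.0, dtype=np.float32)

# Concatenate along the channel axis to get an NCHW LAB image, then move
# to HWC layout for image libraries.
lab_nchw = np.concatenate([data_l, color_ab], axis=1)  # (1, 3, 256, 256)
lab_hwc = lab_nchw[0].transpose(1, 2, 0)               # (256, 256, 3)
# lab_hwc can now be converted to RGB, e.g. with cv2.COLOR_LAB2BGR.
```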

Download a Model and Convert it into OpenVINO™ IR Format

You can download models and, if necessary, convert them into OpenVINO™ IR format using the Model Downloader and other automation tools, as shown in the examples below.

An example of using the Model Downloader:

```sh
omz_downloader --name <model_name>
```

An example of using the Model Converter:

```sh
omz_converter --name <model_name>
```

Demo usage

The model can be used in the following demos provided by the Open Model Zoo to show its capabilities: