The colorization-siggraph model is one of the colorization group of models designed for real-time user-guided image colorization. The model was trained on the ImageNet dataset with synthetically generated user interaction. For details about this family of models, check out the repository.

The model consumes the L-channel of a LAB image as input (user points and a binary mask are optional inputs) and predicts the A- and B-channels of the LAB image as output.
Metric | Value |
---|---|
Type | Colorization |
GFLOPs | 150.5441 |
MParams | 34.0511 |
Source framework | PyTorch* |
The accuracy metrics were calculated between images generated by the model and real validation images from the ImageNet dataset. Results were obtained on a subset of 2000 images.
Metric | Value |
---|---|
PSNR | 27.73dB |
SSIM | 0.92 |
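For reference, the PSNR metric above can be reproduced with a short NumPy sketch (this is an illustration of the metric's definition, not the exact evaluation script used for the table):

```python
import numpy as np

def psnr(reference, generated, max_value=255.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images
    whose pixel values lie in [0, max_value]."""
    mse = np.mean((reference.astype(np.float64) - generated.astype(np.float64)) ** 2)
    if mse == 0:
        # Identical images: PSNR is unbounded.
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / mse)
```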
Metrics can also be calculated using the VGG16 Caffe model with colorization as a preprocessing step. The results below were obtained on the validation images from the ImageNet dataset. For the preprocessing pipeline rgb -> gray -> colorization, the following values were received:
Metric | Value with preprocessing | Value without preprocessing |
---|---|---|
Accuracy top-1 | 58.25% | 70.96% |
Accuracy top-5 | 81.78% | 89.88% |
Image, name - `data_l`, shape - `1, 1, 256, 256`, format is `B, C, H, W`, where:

- `B` - batch size
- `C` - channel
- `H` - height
- `W` - width

L-channel of LAB-image.
Image, name - `user_ab`, shape - `1, 2, 256, 256`, format is `B, C, H, W`, where:

- `B` - batch size
- `C` - channel
- `H` - height
- `W` - width

Channel order is AB channels of LAB-image. Input for user points.
Mask, name - `user_map`, shape - `1, 1, 256, 256`, format is `B, C, H, W`, where:

- `B` - batch size
- `C` - number of flags for pixel
- `H` - height
- `W` - width

This input is a binary mask indicating which points are provided by the user. The mask differentiates unspecified points from user-specified gray points with (a, b) = 0. If a point (pixel) is specified, its flag is set to 1.
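A user hint can be recorded by writing the AB values into `user_ab` and raising the corresponding flag in `user_map`. A minimal NumPy sketch (the helper name is hypothetical, not part of the model's API):

```python
import numpy as np

def add_user_point(user_ab, user_map, y, x, a, b):
    """Record one user-specified color hint at pixel (y, x).

    user_ab holds the AB values; user_map flags the pixel as specified.
    The flag is what lets the model tell a deliberate gray hint
    (a = b = 0) apart from 'no hint at all'.
    """
    user_ab[0, 0, y, x] = a
    user_ab[0, 1, y, x] = b
    user_map[0, 0, y, x] = 1.0
    return user_ab, user_map
```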
NOTE: You don't need to specify all 3 inputs to use the model. If you don't want to use local user hints (user points), you can use only the `data_l` input. In this case, the remaining inputs (`user_ab` and `user_map`) must be filled with zeros.
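The no-hints case can be sketched as follows. This is a minimal NumPy illustration: the sRGB-to-L* conversion below is the standard CIE formula, not necessarily the exact preprocessing code shipped with the model.

```python
import numpy as np

def rgb_to_lab_l(rgb):
    """Approximate L-channel of CIE LAB from an HxWx3 RGB image in [0, 1]."""
    # Linearize sRGB.
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Relative luminance Y (D65 weights).
    y = lin @ np.array([0.2126, 0.7152, 0.0722])
    # CIE L* from Y (range [0, 100]).
    f = np.where(y > (6 / 29) ** 3, np.cbrt(y), y / (3 * (6 / 29) ** 2) + 4 / 29)
    return 116.0 * f - 16.0

def prepare_inputs(rgb_image_256):
    """Pack model inputs for a 256x256 RGB image with no user hints."""
    # data_l: L-channel only, shape 1, 1, 256, 256 (B, C, H, W).
    data_l = rgb_to_lab_l(rgb_image_256)[np.newaxis, np.newaxis].astype(np.float32)
    # No user hints: user_ab and user_map are zero-filled, as the note requires.
    user_ab = np.zeros((1, 2, 256, 256), dtype=np.float32)
    user_map = np.zeros((1, 1, 256, 256), dtype=np.float32)
    return data_l, user_ab, user_map
```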
Image, name - `color_ab`, shape - `1, 2, 256, 256`, format is `B, C, H, W`, where:

- `B` - batch size
- `C` - channel
- `H` - height
- `W` - width

Channel order is AB channels of LAB-image.
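To get a full-color result, the predicted AB channels are combined with the input L-channel into a LAB image. A minimal NumPy sketch of the shape handling (the final LAB-to-RGB conversion is left to a color-conversion library and is not shown here):

```python
import numpy as np

def merge_lab(data_l, color_ab):
    """Combine the input L-channel (1, 1, 256, 256) with the predicted AB
    channels (1, 2, 256, 256) into a 256x256x3 LAB image (H, W, C)."""
    lab = np.concatenate([data_l, color_ab], axis=1)  # -> 1, 3, 256, 256
    return np.transpose(lab[0], (1, 2, 0))            # -> 256, 256, 3
```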
You can download models and, if necessary, convert them into the Inference Engine format using the Model Downloader and other automation tools, as shown in the examples below.
An example of using the Model Downloader:
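A possible invocation, assuming the `omz_downloader` tool from the Open Model Zoo tools package is installed (the exact tool name may differ between OpenVINO releases):

```shell
omz_downloader --name colorization-siggraph
```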
An example of using the Model Converter:
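A possible invocation, assuming the matching `omz_converter` tool is installed (the exact tool name may differ between OpenVINO releases):

```shell
omz_converter --name colorization-siggraph
```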
The original model is distributed under the following license: