# fast-neural-style-mosaic-onnx

The `fast-neural-style-mosaic-onnx` model is one of the style transfer models designed to mix the content of an image with the style of another image. The model uses the method described in [Perceptual Losses for Real-Time Style Transfer and Super-Resolution](https://arxiv.org/abs/1603.08155) along with [Instance Normalization](https://arxiv.org/abs/1607.08022). Original ONNX models are provided in the repository.
## Specification

| Metric           | Value          |
|------------------|----------------|
| Type             | Style Transfer |
| GFLOPs           | 15.518         |
| MParams          | 1.679          |
| Source framework | PyTorch\*      |
## Accuracy

Accuracy metrics are obtained on the MS COCO val2017 dataset. Images were resized to the network input size.
| Metric | Original model | Converted model (FP32) | Converted model (FP16) |
|--------|----------------|------------------------|------------------------|
| PSNR   | 12.03 dB       | 12.03 dB               | 12.04 dB               |
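For reference, PSNR is defined as 10·log10(MAX² / MSE), with MAX = 255 for 8-bit images. A minimal NumPy sketch of the metric (the function name is illustrative, not part of the model card or any evaluation tool):

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no noise, infinite PSNR
    return 10.0 * np.log10(max_value ** 2 / mse)

# A uniform error of 1 intensity level gives MSE = 1, so PSNR = 20*log10(255)
a = np.zeros((224, 224, 3), dtype=np.uint8)
b = np.ones((224, 224, 3), dtype=np.uint8)
print(round(psnr(a, b), 2))  # → 48.13
```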
## Input

### Original Model

Image, name: `input1`, shape: `1, 3, 224, 224`, format: `B, C, H, W`, where:

- `B` - batch size
- `C` - number of channels
- `H` - image height
- `W` - image width

Expected color order: RGB.
### Converted Model

Image, name: `input1`, shape: `1, 3, 224, 224`, format: `B, C, H, W`, where:

- `B` - batch size
- `C` - number of channels
- `H` - image height
- `W` - image width

Expected color order: BGR.
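The `B, C, H, W` blob described above can be produced from a standard `H, W, C` image array with NumPy. A minimal sketch, assuming the image has already been decoded and resized to 224×224 in RGB order (the channel reversal applies only when feeding the converted model, which expects BGR):

```python
import numpy as np

def preprocess(image_hwc: np.ndarray, bgr: bool = False) -> np.ndarray:
    """Convert a 224x224x3 RGB image to the 1,3,224,224 B,C,H,W layout."""
    assert image_hwc.shape == (224, 224, 3), "resize the image to 224x224 first"
    chw = image_hwc.transpose(2, 0, 1)         # H,W,C -> C,H,W
    if bgr:
        chw = chw[::-1]                        # reverse channels: RGB -> BGR
    return chw[np.newaxis].astype(np.float32)  # add batch dim -> 1,3,224,224

blob = preprocess(np.zeros((224, 224, 3), dtype=np.uint8))
print(blob.shape)  # (1, 3, 224, 224)
```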
## Output

### Original Model

Image, name: `output1`, shape: `1, 3, 224, 224`, format: `B, C, H, W`, where:

- `B` - batch size
- `C` - number of channels
- `H` - image height
- `W` - image width

Expected color order: RGB.
### Converted Model

Image, name: `output1`, shape: `1, 3, 224, 224`, format: `B, C, H, W`, where:

- `B` - batch size
- `C` - number of channels
- `H` - image height
- `W` - image width

Expected color order: RGB.
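The `1, 3, 224, 224` result can be turned back into a displayable `H, W, C` image. A minimal sketch, assuming the network emits values on a 0–255 scale (the clipping range may need adjusting per deployment):

```python
import numpy as np

def postprocess(output_bchw: np.ndarray) -> np.ndarray:
    """Convert a 1,3,224,224 B,C,H,W float result to a 224x224x3 uint8 image."""
    chw = output_bchw[0]                       # drop the batch dimension
    hwc = chw.transpose(1, 2, 0)               # C,H,W -> H,W,C
    return np.clip(hwc, 0, 255).astype(np.uint8)

image = postprocess(np.full((1, 3, 224, 224), 300.0, dtype=np.float32))
print(image.shape, image.max())  # (224, 224, 3) 255
```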
## Legal Information

The original model is distributed under the following license: