Convert MXNet Style Transfer Model

This tutorial explains how to generate a model for style transfer using the public MXNet* neural style transfer sample. Because no public pre-trained style transfer model is provided with the OpenVINO toolkit, follow the steps below to prepare one for use with OpenVINO.

1. Download or clone the repository with the MXNet neural style transfer sample.

2. Prepare the environment required to work with the cloned repository:

  1. Install the package dependencies:

    sudo apt-get install python-tk

The python-tk installation step is needed only on Linux; the package is included by default in Python* for Windows*.

  2. Install the Python* requirements:

    pip3 install --user mxnet
    pip3 install --user matplotlib
    pip3 install --user scikit-image
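
To confirm that the packages resolved correctly, you can run a quick import check (a sketch; the exact versions printed do not matter):

    import mxnet
    import matplotlib
    import skimage

    # Print the installed versions to confirm the imports succeed.
    print(mxnet.__version__, matplotlib.__version__, skimage.__version__)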

3. Download the pre-trained VGG19 model and save it to the root directory of the cloned repository, because the sample expects the model file to be in that directory.

4. Modify the source code files of the style transfer sample in the cloned repository as described below. A condensed sketch of the combined edits follows this list.

  1. Go to the fast_mrf_cnn subdirectory.

    cd ./fast_mrf_cnn
  2. Open the symbol.py file and modify the decoder_symbol() function. Replace:

    def decoder_symbol():
        data = mx.sym.Variable('data')
        data = mx.sym.Convolution(data=data, num_filter=256, kernel=(3,3), pad=(1,1), stride=(1, 1), name='deco_conv1')

    with the following code:

    def decoder_symbol_with_vgg(vgg_symbol):
        data = mx.sym.Convolution(data=vgg_symbol, num_filter=256, kernel=(3,3), pad=(1,1), stride=(1, 1), name='deco_conv1')
  3. Save and close the symbol.py file.

  4. Open the make_image.py file and modify the __init__() function in the Maker class. Replace:

    decoder = symbol.decoder_symbol()

    with the following code:

    decoder = symbol.decoder_symbol_with_vgg(vgg_symbol)
  5. To join the pre-trained weights with the decoder weights, find the code lines that load the decoder weights:

    args = mx.nd.load('%s_decoder_args.nd'%model_prefix)
    auxs = mx.nd.load('%s_decoder_auxs.nd'%model_prefix)

    and add the following line after them:

    arg_dict.update(args)
  6. Use arg_dict instead of args as a parameter of the decoder.bind() function, and change the execution context from GPU to CPU. Replace the line:

    self.deco_executor = decoder.bind(ctx=mx.gpu(), args=args, aux_states=auxs)

    with the following:

    self.deco_executor = decoder.bind(ctx=mx.cpu(), args=arg_dict, aux_states=auxs)
  7. To save the resulting models as .json files, add the following code to the end of the generate() function in the Maker class:

    self.vgg_executor._symbol.save('{}-symbol.json'.format('vgg19'))
    self.deco_executor._symbol.save('{}-symbol.json'.format('nst_vgg19'))
  8. Save and close the make_image.py file.
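
Taken together, the edits above rewire the decoder to consume the VGG19 feature symbol directly, merge the decoder weights into the VGG weight dictionary, and bind the executor on CPU. The following condensed sketch shows the intended data flow only; names such as vgg_symbol, arg_dict, and model_prefix come from the sample's own code, and the remaining decoder layers are omitted:

    import mxnet as mx

    def decoder_symbol_with_vgg(vgg_symbol):
        # The decoder starts from the VGG19 feature symbol instead of
        # creating its own 'data' variable.
        data = mx.sym.Convolution(data=vgg_symbol, num_filter=256, kernel=(3, 3),
                                  pad=(1, 1), stride=(1, 1), name='deco_conv1')
        # ... the remaining decoder layers stay as in the repository ...
        return data

    # Inside Maker.__init__() (sketch):
    decoder = symbol.decoder_symbol_with_vgg(vgg_symbol)
    args = mx.nd.load('%s_decoder_args.nd' % model_prefix)
    auxs = mx.nd.load('%s_decoder_auxs.nd' % model_prefix)
    arg_dict.update(args)  # merge decoder weights into the VGG weight dict
    self.deco_executor = decoder.bind(ctx=mx.cpu(), args=arg_dict, aux_states=auxs)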

5. Run the sample with a decoder model according to the instructions provided in the cloned repository.

For example, to run the sample with the pre-trained decoder weights from the models folder and a 1024×768 output shape, use the following code:

import make_image
maker = make_image.Maker('models/13', (1024, 768))
maker.generate('output.jpg', '../images/tubingen.jpg')

Where the models/13 string is composed of the following substrings:

  • models/ : path to the folder that contains .nd files with pre-trained styles weights

  • 13 : prefix pointing to 13_decoder, which is the default decoder for the repository.

Note

If you get an error saying “No module named ‘cPickle’”, try running the script from this step in Python 2, then return to Python 3 for the remaining steps.

You can choose any style from the collection of pre-trained weights. (On the Chinese-language page, click the down arrow next to a size in megabytes, wait for an overlay box to appear, and click the blue button in it to download.) The generate() function generates the nst_vgg19-symbol.json and vgg19-symbol.json files for the specified shape. In the code above, the shape is (1024, 768) for a 4:3 aspect ratio; you can specify another shape, for example, (224, 224) for a square one.
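
For instance, to generate the symbol files for a square input instead, call the same API with a different shape (a usage sketch based on the example above; output_square.jpg is a hypothetical output name):

    import make_image

    # Same style prefix, square output shape instead of 4:3.
    maker = make_image.Maker('models/13', (224, 224))
    maker.generate('output_square.jpg', '../images/tubingen.jpg')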

6. Run the Model Optimizer to generate an Intermediate Representation (IR):

  1. Create a new directory. For example:

    mkdir nst_model
  2. Copy the initial and generated model files to the created directory. For example, to copy the pre-trained decoder weights from the models folder to the nst_model directory, run the following commands:

    cp nst_vgg19-symbol.json nst_model
    cp vgg19-symbol.json nst_model
    cp ../vgg19.params nst_model/vgg19-0000.params
    cp models/13_decoder_args.nd nst_model
    cp models/13_decoder_auxs.nd nst_model

    Note

    Make sure that all the .params and .json files are in the same directory as the .nd files. Otherwise, the conversion process fails.

  3. Run the Model Optimizer for MXNet. Use the --nd_prefix_name option to specify the decoder prefix and --input_shape to specify the input shape in [N,C,H,W] order. For example:

    mo --input_symbol <path/to/nst_model>/nst_vgg19-symbol.json --framework mxnet --output_dir <path/to/output_dir> --input_shape [1,3,224,224] --nd_prefix_name 13_decoder --pretrained_model <path/to/nst_model>/vgg19-0000.params
  4. The IR (.bin, .xml, and .mapping files) is generated in the specified output directory and is ready to be consumed by the OpenVINO Runtime.
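
To verify that the IR loads, you can use a minimal sketch with the OpenVINO Runtime Python API (the .xml file name assumes the Model Optimizer kept the default output name; adjust the path to match your output directory):

    import openvino.runtime as ov

    core = ov.Core()
    # Read and compile the generated IR for CPU.
    model = core.read_model('<path/to/output_dir>/nst_vgg19-symbol.xml')
    compiled = core.compile_model(model, 'CPU')
    print(compiled.inputs, compiled.outputs)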