# Converting an MXNet* Model

A summary of the steps for optimizing and deploying a model that was trained with the MXNet* framework:

1. Configure the Model Optimizer for MXNet* (MXNet was used to train your model)

2. Convert an MXNet* model to produce an optimized Intermediate Representation (IR) of the model based on the trained network topology, weights, and bias values

3. Test the model in the Intermediate Representation format using the Inference Engine in the target environment via provided Inference Engine sample applications

4. Integrate the Inference Engine in your application to deploy the model in the target environment
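Assuming a default `<INSTALL_DIR>` layout, the first two steps above can be sketched as a shell session. The `install_prerequisites_mxnet.sh` script name and all paths and model names here are placeholders based on a typical Model Optimizer installation; adjust them for your environment:

```shell
# Step 1: configure the Model Optimizer for MXNet*
# (script name assumed from a default Model Optimizer layout)
cd <INSTALL_DIR>/deployment_tools/model_optimizer/install_prerequisites/
./install_prerequisites_mxnet.sh

# Step 2: convert the trained model to an Intermediate Representation
cd <INSTALL_DIR>/deployment_tools/model_optimizer/
python3 mo.py --input_model <MODEL_NAME>-0000.params --output_dir <OUTPUT_MODEL_DIR>
```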

## Supported Topologies

> **Note**: SSD models from the table below require conversion to the deploy mode. For details, see the Conversion Instructions in the GitHub MXNet-SSD repository.

| Model Name | Model File |
| --- | --- |
| VGG-16 | |
| VGG-19 | |
| ResNet-152 v1 | |
| SqueezeNet_v1.1 | |
| Inception BN | |
| CaffeNet | |
| DenseNet-121 | |
| DenseNet-161 | |
| DenseNet-169 | |
| DenseNet-201 | |
| MobileNet | |
| SSD-ResNet-50 | |
| SSD-VGG-16-300 | |
| SSD-Inception v3 | |
| FCN8 (Semantic Segmentation) | |
| MTCNN part 1 (Face Detection) | |
| MTCNN part 2 (Face Detection) | |
| MTCNN part 3 (Face Detection) | |
| MTCNN part 4 (Face Detection) | |
| Lightened_moon | |
| RNN-Transducer | Repo |
| word_lm | Repo |

Other supported topologies

## Convert an MXNet* Model

To convert an MXNet* model, run the Model Optimizer with the path to the input model `.params` file and to an output directory where you have write permissions:

```shell
cd <INSTALL_DIR>/deployment_tools/model_optimizer/
python3 mo.py --input_model model-file-0000.params --output_dir <OUTPUT_MODEL_DIR>
```

If the `mo` command-line entry point is available in your environment, you can use it instead:

```shell
mo --input_model model-file-0000.params --output_dir <OUTPUT_MODEL_DIR>
```
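If the symbol file does not sit next to the `.params` file under the default `<prefix>-symbol.json` naming, you can point to it explicitly with the `--input_symbol` parameter. The file names below are illustrative placeholders, not files shipped with the toolkit:

```shell
# Convert a model whose symbol file is specified explicitly
# (mobilenet-0000.params / mobilenet-symbol.json are example names)
python3 mo.py --input_model mobilenet-0000.params \
              --input_symbol mobilenet-symbol.json \
              --output_dir <OUTPUT_MODEL_DIR>
```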


Two groups of parameters are available to convert your model: the framework-agnostic parameters common to all supported frameworks, and the MXNet*-specific parameters described below.

### Using MXNet*-Specific Conversion Parameters

The following list provides the MXNet*-specific parameters:

```
MXNet-specific parameters:
  --input_symbol <SYMBOL_FILE_NAME>
                        Symbol file (for example, "model-symbol.json") that
                        contains a topology structure and layer attributes
  --nd_prefix_name <ND_PREFIX_NAME>
                        Prefix name for args.nd and argx.nd files
  --pretrained_model_name <PRETRAINED_MODEL_NAME>
                        Name of a pre-trained MXNet model without extension and
                        epoch number. This model will be merged with args.nd
                        and argx.nd files
  --save_params_from_nd
                        Enable saving built parameters file from .nd files
  --legacy_mxnet_model
                        Enable MXNet loader to make a model compatible with the
                        latest MXNet version. Use only if your model was
                        trained with MXNet version lower than 1.0.0
  --enable_ssd_gluoncv
                        Enable transformation for converting the gluoncv ssd
                        topologies. Use only if your topology is one of ssd
                        gluoncv topologies
```

> **Note**: By default, the Model Optimizer does not use the MXNet loader. The loader transforms the topology into a format compatible with the latest version of MXNet and is required only for models trained with MXNet versions lower than 1.0.0. If your model was trained with an MXNet version lower than 1.0.0, specify the `--legacy_mxnet_model` key to enable the MXNet loader. Note that the loader does not support models with custom layers; in that case, you must manually recompile MXNet with the custom layers and install it in your environment.
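As an illustrative sketch (the model file names are placeholders), converting a model trained with a pre-1.0.0 version of MXNet combines the flags above:

```shell
# Enable the MXNet loader for a model trained with MXNet < 1.0.0
# (old_model-0000.params / old_model-symbol.json are example names)
python3 mo.py --input_model old_model-0000.params \
              --input_symbol old_model-symbol.json \
              --legacy_mxnet_model \
              --output_dir <OUTPUT_MODEL_DIR>
```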

## Custom Layer Definition

Internally, when you run the Model Optimizer, it loads the model, goes through the topology, and tries to find each layer type in a list of known layers. If your topology contains any layers that are not in that list, the Model Optimizer classifies them as custom.

## Supported MXNet* Layers

Refer to Supported Framework Layers for the list of supported standard layers.