Using the extgen Tool for Automatic Generation of Model Optimizer and Inference Engine Extensions

OpenVINO™ toolkit provides the extgen tool that facilitates creating Model Optimizer and Inference Engine extensions. The tool generates extension source files with stubs for the core functions. To get a working extension, you only need to add your implementation of these functions to the generated files.

Generating Extension Files

To generate extension files, run the extgen tool in one of the two available modes: interactive, in which the tool asks you questions and you type the answers, or silent, in which you provide all input information in a configuration file.

To run the tool in the interactive mode, specify parameters that select the extensions to generate. The parameters used in this document are --mo-caffe-ext and --mo-tf-ext for Model Optimizer extractor files, --mo-op for a Model Optimizer operation file, --ie-cpu-ext for Inference Engine CPU extension files, and --output_dir for the directory to place the generated files in.

You can use any combination of these parameters to generate Model Optimizer and/or Inference Engine extension files. For example:

python extgen.py new --mo-caffe-ext --mo-op --ie-cpu-ext

Generating Model Optimizer Extension Files

To generate Model Optimizer extension files, run the tool in the interactive mode with the necessary parameters or in the silent mode with a configuration file. For example, to generate operation and extractor files for a Caffe* model in the <output_dir> directory in the interactive mode, run the following command:

python extgen.py new --mo-op --mo-caffe-ext --output_dir <output_dir>

The extension stub files are generated in the <output_dir>/user_mo_extensions directory, which has the following structure:
    • /user_mo_extensions
      • __init__.py
      • /front (extractor files, in a subdirectory per supported framework)
      • /ops (operation files)

Specific paths to the generated files appear on the screen. For example, for the Caffe* Proposal layer, the files are <output_dir>/user_mo_extensions/front/caffe/proposal_ext.py and <output_dir>/user_mo_extensions/ops/proposal.py.

Usually, you can use an extractor file without changes. The exception is when you want to transform parameters from the input file before they are written to the IR. In this case, add these transformations to the extract method, and do not forget to add the parameter names to the supported_attrs and backend_attrs methods in the operation file.
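
For illustration, the following is a minimal sketch of an extract method that converts a parameter before writing it to the IR. It follows the same pattern as the stubs that extgen generates (node.pb, update_node_stat, __class__.enabled), but the proposal_param field and the scale_factor attribute are hypothetical names invented for this sketch:

@staticmethod
def extract(node):
    proto_layer = node.pb                 # layer description read from the input file
    param = proto_layer.proposal_param    # hypothetical parameter group of the layer
    attrs = {
        # transform the framework value before it goes into the IR:
        # a float scale is converted to an integer attribute (hypothetical)
        'scale_factor': int(param.scale),
    }
    # update the attributes of the node
    Op.get_op_class_by_name(__class__.op).update_node_stat(node, attrs)
    return __class__.enabled

Remember that scale_factor must then also be listed in the supported_attrs and backend_attrs methods of the corresponding operation class to appear in the generated IR.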

An operation file can be used without changes if your layer does not change the input shape. Otherwise, implement the shape calculation in the <op_name>_infer method. You can also set default attribute values in the __init__ method, as the sketch below illustrates. You can find more details in Extending Model Optimizer with New Primitives.
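
As a minimal sketch of such changes, an operation class that sets a default attribute value in __init__ and implements a shape calculation might look as follows. The names MyOp and my_param are hypothetical, and the import path for the Op base class may differ between toolkit versions; the structure mirrors the Proposal operation shown in the Caffe* example below:

import numpy as np
from mo.ops.op import Op  # assumed import path for the Op base class

class MyOp(Op):
    op = 'MyOp'  # hypothetical operation name

    def __init__(self, graph, attrs):
        mandatory_props = dict(
            type=__class__.op,
            op=__class__.op,
            my_param=1,        # hypothetical default value for a parameter
            infer=MyOp.infer
        )
        super().__init__(graph, mandatory_props, attrs)

    @staticmethod
    def infer(node):
        # example shape calculation: copy the input shape and
        # override the second dimension with the my_param attribute
        out_shape = np.copy(node.in_node(0).shape)
        out_shape[1] = node.my_param
        node.out_node(0).shape = out_shape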

Generating Inference Engine Extension Files

To generate stub files for GPU and CPU Inference Engine extensions, run the tool and provide input information interactively or in the configuration file. For example, to generate Inference Engine CPU extension files in the <output_dir> directory in the interactive mode, run the following command:

python extgen.py new --ie-cpu-ext --output_dir <output_dir>

The extension stub files are generated in the <output_dir>/user_ie_extensions directory.

For CPU, several files are generated in the cpu subdirectory. You need to modify only <output_dir>/user_ie_extensions/cpu/ext_<op_name>.cpp by adding the inference implementation.

For GPU, <op_name>.cl and <op_name>.xml are generated in the gpu subdirectory. You must update both files: add the OpenCL* kernel implementation to <op_name>.cl and the kernel parameters description to <op_name>.xml.

For more details about implementing Inference Engine extensions, see Inference Engine Kernels Extensibility.

Examples of Creating a Custom Layer Extension Using extgen

This section provides step-by-step examples of extension generation for converting Caffe* and TensorFlow* models. The Caffe* example describes the creation of an Inference Engine extension. The TensorFlow* example uses an existing Inference Engine operation. If you need an Inference Engine extension to infer a TensorFlow-based model, see steps 6-7 in the Caffe* example, because Inference Engine extension generation does not depend on the framework the model is based on.

Caffe* Example

This section provides a sample for generating and implementing Model Optimizer and Inference Engine custom layer extensions for the Proposal layer of a Caffe* example model. The model (.prototxt and .caffemodel) used in the example is described in the Extending Model Optimizer with New Primitives chapter.

  1. Go to the folder with the extgen tool:
    cd <INSTALL_DIR>/deployment_tools/extension_generator/
  2. Run the extgen.py file with the following parameters to generate extension stub files:
    python extgen.py new --mo-caffe-ext --mo-op --ie-cpu-ext
    The tool asks you to provide input information to generate accurate stub files for extensions. Questions and sample answers are the following:
    a. For generating stub files for the Caffe* extractor:
    Do you use this operation with Caffe Pythonic layer extractor? (y/n) y
    Please enter module name: rpn.proposal_layer
    Please enter layer name: ProposalLayer

    b. For generating a Model Optimizer operation file:
    Please enter operation name: Proposal
    Does your operation change shape? (y/n) y
    Do you want to implement shape calculation? (y/n)
    If you choose 'n' framework fallback will be used for shape calculation y

    c. For generating an Inference Engine CPU extension:
    Please enter operation name: Proposal
    Please enter all parameters in format
    <param1> <type>
    <param2> <type>
    etc
    Supported cpu types: int, bool, listint, float, listfloat, string
    When you finish please enter 'q'
    feat_stride int
    post_nms_topn int
    q
  3. Find the generated files in the ./user_mo_extensions and ./user_ie_extensions directories, which have the following structure:
    • /user_mo_extensions
      • __init__.py
      • /front
        • /caffe
          • __init__.py
          • proposallayer_ext.py
        • /mxnet
          • __init__.py
      • /ops
        • __init__.py
        • proposal.py
    • /user_ie_extensions
      • /cpu
        • CMakeLists.txt
        • ext_base.cpp
        • ext_base.hpp
        • ext_lists.cpp
        • ext_lists.hpp
        • ext_proposal.cpp
      • /gpu
  4. Implement extension functions in the generated files:

    a. The extractor proposallayer_ext.py can be used without changes.

    b. Add the shape calculation logic to the operation file proposal.py. According to the IR catalog, the output shape of the Proposal layer dynamically depends on the post_nms_topn parameter.
    Add this parameter with a default value in the __init__ method:

    def __init__(self, graph, attrs):
        mandatory_props = dict(
            type=__class__.op,
            op=__class__.op,
            post_nms_topn=300,
            infer=ProposalPythonOp.infer
        )
        super().__init__(graph, mandatory_props, attrs)


    Then add the supported attributes in the supported_attrs method:

    def supported_attrs(self):
        # =====================================
        # List all attributes of the layer;
        # all other attributes that are not in
        # the list are ignored
        # =====================================
        return [
            'feat_stride',
            'post_nms_topn'
        ]


    Now add the shape calculation to the infer function:

    @staticmethod
    def infer(node):
        input_shape = node.in_node(0).shape
        out_shape = np.array([0, 0], dtype=np.int64)
        # rois blob: holds R regions of interest, each is a 5-tuple
        # (n, x1, y1, x2, y2) specifying an image batch index n and a
        # rectangle (x1, y1, x2, y2)
        out_shape[0] = input_shape[0] * node.post_nms_topn
        out_shape[1] = 5
        node.out_node(0).shape = out_shape
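
    Note that the infer function uses numpy; make sure that proposal.py imports it (import numpy as np) if the generated stub does not already do so.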


  5. Once you complete these steps, the Model Optimizer extension is ready to use. To run the Model Optimizer with this extension, use the command line below:
    cd ../model_optimizer
    python mo.py --input_model ZF_faster_rcnn_final.caffemodel --input_proto test.prototxt --extensions ../extension_generator/user_mo_extensions/
  6. To complete the CPU Inference Engine extension creation, add the implementation of the Proposal layer inference to the execute method in the ext_proposal.cpp file. You can find sample code for this extension in the <INSTALL_DIR>/deployment_tools/inference_engine/samples/extension/ext_proposal.cpp file. For more information about implementation of Inference Engine extensions, refer to Inference Engine Kernels Extensibility.
  7. Build the library with the CPU extension to use it with the Inference Engine:

    a. Create a new build directory:

    mkdir build

    b. Go to the created build directory:

    cd ./build

    c. Set the environment variables:

    #on Linux OS:
    source <INSTALL_DIR>/bin/setupvars.sh
    #on Windows OS:
    <INSTALL_DIR>\bin\setupvars.bat

    d. Run CMake to generate the Make files:

    #on Linux OS:
    cmake ..
    #on Windows OS:
    cmake -G "<Visual Studio* version>" ..

    e. Build the library:

    #on Linux OS:
    make
    #on Windows OS: use the generated Microsoft Visual Studio* project

TensorFlow* Example

This section provides an example of generating and implementing a Model Optimizer extension for a TensorFlow* example model.

If you already have a model with an unrecognized operation, you can skip the Model Preparation step and go directly to the Extension Generation chapter.

In this example, the Pooling layer is used to illustrate extension generation. The Model Optimizer already supports this layer, but we will remove the existing support and show how it can be recreated with the extgen tool. This process is described in the Model Preparation chapter.

Operation and Inference Engine extension generation does not depend on the framework and was already demonstrated in the Caffe* example, so only the TensorFlow* extractor generation is shown here.

Model Preparation

  1. Download the pre-trained ResNet-50 model from the TensorFlow* Model Zoo. Follow the instructions in Convert Model From TensorFlow* to prepare the model for conversion.
  2. If you try to convert the ResNet-50 model, it will be converted successfully. To demonstrate extension generation, remove the existing implementation of Pooling layer from the Model Optimizer:
    cd <INSTALL_DIR>/deployment_tools/model_optimizer
    mv extensions/front/tf/pooling_ext.py extensions/front/tf/pooling_ext.py_del
  3. Run the Model Optimizer to make sure that MaxPool has become an unrecognized operation:
    python mo.py --input_model resnet50.pb --input_shape [1,3,224,224]
    You should see an error:
    [ ERROR ] List of operations that cannot be converted to IE IR:
    [ ERROR ] MaxPool (4)
    [ ERROR ] resnet50/pool1/MaxPool
    [ ERROR ] resnet50/block1/unit_3/bottleneck_v1/shortcut/MaxPool
    [ ERROR ] resnet50/block2/unit_4/bottleneck_v1/shortcut/MaxPool
    [ ERROR ] resnet50/block3/unit_6/bottleneck_v1/shortcut/MaxPool
    [ ERROR ] Part of the nodes was not translated to IE. Stopped.
    For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #24.

Now the sample model is ready for extension generation.

Extension Generation

  1. Go to the extension generator directory:
    cd <INSTALL_DIR>/deployment_tools/extension_generator
  2. Run the extgen.py file with the following parameters to generate extension stub files:
    python extgen.py new --mo-tf-ext

    The tool asks you to provide input information to generate accurate stub files for extensions. Questions and sample answers are the following:
    a. For generating stub files for the TensorFlow* extractor:
    Please enter layer name: Pooling
    Do you want automatically parse all parameters from proto file
    (parameters will be parsed as is, without any renaming or omitting) (y/n) n
    Please enter all parameters in format
    <param1> <new name1> <type1>
    <param2> <new name2> <type2>
    etc
    where type is one of the following types:
    s - String, i - Int, f - Float, b - Bool, type - DataType, shape - TensorShapeProto,
    padding - Padding type, spatial - Get spatial from dataFormat, channel - Get channel from dataFormat,
    batch - Get batch from dataFormat, list.s - List of strings, list.i - List of ints, list.f - List of floats,
    list.b - list of bools, list.type - list of DataType, list.shape - list of TensorShapeProto,
    if your attribute type is not in list or you want implement your own attribute parsing just omit <type>
    When you finish please enter 'q'
    padding auto_pad padding
    ksize window list.i
    data_format spatial_dims spatial
    strides stride list.i
    q
    Please enter operation name to use with this extractor: MaxPool
    Please enter class with operation to use with this extractor: Pooling
    Please enter import path to class with operation: extensions.ops.pooling

  3. Find the generated files in the user_mo_extensions directory, which has the following structure:
    • /user_mo_extensions
      • __init__.py
      • /front
        • /caffe
          • __init__.py
        • /mxnet
          • __init__.py
        • /tf
          • __init__.py
          • pooling_ext.py
      • /ops
        • __init__.py
  4. Implement extension functions in the generated files:

    a. The extractor pooling_ext.py requires additional attribute conversion. Several attributes should be initialized with constants; their real values will be calculated during inference. These changes are needed because we reuse an existing operation that was written to work with several frameworks.

    @staticmethod
    def extract(node):
        proto_layer = node.pb
        param = proto_layer.attr
        attrs = {
            'auto_pad': convert_tf_padding_to_str(param["padding"]),
            'window': param["ksize"].list.i,
            'spatial_dims': tf_data_format_spatial(param["data_format"]),
            'stride': param["strides"].list.i,
            'op': __class__.op
        }
        attrs['window'] = np.array(attrs['window'])
        attrs['pad'] = None
        attrs['stride'] = np.array(attrs['stride'])
        attrs['pad_spatial_shape'] = None
        attrs['output_spatial_shape'] = None
        attrs['pool_method'] = 'max'
        attrs['type'] = 'Pooling'
        attrs['exclude_pad'] = 'true'
        # update the attributes of the node
        Op.get_op_class_by_name(__class__.op).update_node_stat(node, attrs)
        return __class__.enabled
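
    Note that this snippet relies on numpy (np), the Op class, and the helper functions convert_tf_padding_to_str and tf_data_format_spatial. If any of the corresponding imports are missing from the generated pooling_ext.py stub, add them; the exact module paths depend on the Model Optimizer version.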
  5. Once you complete these steps, the Model Optimizer extension is ready to use. To run the Model Optimizer with this extension, use the command line below:
    cd ../model_optimizer
    python mo.py --input_model resnet50.pb --input_shape [1,3,224,224] --extensions ../extension_generator/user_mo_extensions

The conversion should finish successfully.