If your question is not covered by the topics below, use the OpenVINO™ Support page, where you can participate in a free forum.
Internally, the Model Optimizer uses a protobuf library to parse and load Caffe* models. This library requires a file grammar and a generated parser. For a Caffe fallback, the Model Optimizer uses a Caffe-generated parser for a Caffe-specific .proto file (which is usually located in the src/caffe/proto directory). So, if you have Caffe installed on your machine with the Python* interface available, make sure that it is exactly the version of Caffe that was used to create the model.
If you just want to experiment with the Model Optimizer and test a Python extension for working with your custom layers without building Caffe, add the layer description to the caffe.proto file and generate a parser for it.
For example, to add the description of the CustomReshape layer, which is an artificial layer not present in any caffe.proto files:

1. Add the layer description to the caffe.proto file.
2. Generate a new parser for it, where PATH_TO_CUSTOM_CAFFE is the path to the root directory of the custom Caffe* (a sketch of both steps follows below).

However, because your model has custom layers, you must register your custom layers as custom. To learn more about it, refer to the section Custom Layers in Model Optimizer.
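A minimal sketch of the two steps above (the CustomReshapeParameter message, its field ID, and the paths are illustrative):

    // Inside message LayerParameter, reference a new parameter message
    // (the field ID must not clash with existing IDs):
    optional CustomReshapeParameter custom_reshape_param = 546;

    // At the top level of caffe.proto, define the message itself:
    message CustomReshapeParameter {
      optional BlobShape shape = 1;
    }

Then regenerate the parser with the helper script shipped in the Model Optimizer proto directory:

    cd <INSTALL_DIR>/deployment_tools/model_optimizer/mo/front/caffe/proto
    python3 generate_caffe_pb2.py --input_proto PATH_TO_CUSTOM_CAFFE/src/caffe/proto/caffe.proto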
You need the Caffe* Python* interface. In this case, do the following:
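(A minimal sketch; file names are illustrative.)

    import caffe

    # Load the network definition in test phase; the weights are randomly
    # initialized because no .caffemodel is supplied.
    net = caffe.Net('my_net.prototxt', caffe.TEST)

    # Save the bare weights to a .caffemodel file.
    net.save('my_net.caffemodel')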
Most likely, the Model Optimizer does not know how to infer output shapes of some layers in the given topology. To narrow the scope, compile the list of layers that are custom for the Model Optimizer: present in the topology but absent from the list of supported layers for the target framework. Then refer to available options in the corresponding section in Custom Layers in Model Optimizer.
Your model input shapes must be smaller than or equal to the shapes of the mean image file you provide. The idea behind the mean file is to subtract its values from the input image in an element-wise manner. When the mean file is smaller than the input image, there are not enough values to perform element-wise subtraction. Also, make sure that you use the mean file that was used during the network training phase. Note that the mean file is dataset dependent.
Most likely, the mean file that you specified with the --mean_file flag while launching the Model Optimizer is empty. Make sure that this is exactly the required mean file and try to regenerate it from the given dataset if possible.
The mean file that you provide for the Model Optimizer must be in the .binaryproto format. You can try to check the content using recommendations from the BVLC Caffe* (#290).
The structure of any Caffe* topology is described in the caffe.proto file of any Caffe version. For example, in the Model Optimizer, you can find the following proto file, used by default: <INSTALL_DIR>/deployment_tools/model_optimizer/mo/front/caffe/proto/my_caffe.proto. There you can find the structure of the topology description.
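For instance, the fragment that defines top-level layers looks approximately like this (a sketch based on the stock caffe.proto; verify against the my_caffe.proto file shipped with your version):

    message NetParameter {
      // ... other definitions
      repeated LayerParameter layer = 100;   // ID 100 so layers are printed last.
      repeated V1LayerParameter layers = 2;  // DEPRECATED (ID 2), old-style layers.
    }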
This means that any topology should contain layers as top-level structures in prototxt. For example, see the LeNet topology.
The structure of any Caffe* topology is described in the caffe.proto file for any Caffe version. For example, in the Model Optimizer, you can find the following .proto file, used by default: <INSTALL_DIR>/deployment_tools/model_optimizer/mo/front/caffe/proto/my_caffe.proto. There you can find the structure of the topology description.
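The input-related fields look approximately like this (a sketch based on the stock caffe.proto; verify against your my_caffe.proto):

    message NetParameter {
      optional string name = 1;            // Consider giving the network a name.
      repeated string input = 3;           // The input blobs to the network.
      repeated BlobShape input_shape = 8;  // The shapes of the input blobs.
      // 4D input dimensions -- deprecated. Use "input_shape" instead.
      repeated int32 input_dim = 4;
      // ... other definitions
    }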
So, the input layer of the provided model must be specified in one of the following styles:
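(A sketch of the four accepted styles; names and shapes are illustrative.)

    # 1. Named input with a separate input_shape:
    input: "data"
    input_shape {
      dim: 1
      dim: 3
      dim: 227
      dim: 227
    }

    # 2. Several named inputs, each with its own input_shape:
    input: "data"
    input_shape { dim: 1 dim: 3 dim: 600 dim: 1000 }
    input: "im_info"
    input_shape { dim: 1 dim: 3 }

    # 3. An explicit Input layer:
    layer {
      name: "data"
      type: "Input"
      top: "data"
      input_param { shape: { dim: 1 dim: 3 dim: 600 dim: 1000 } }
    }

    # 4. Deprecated input_dim notation:
    input: "data"
    input_dim: 1
    input_dim: 3
    input_dim: 500
    input_dim: 500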
However, if your model contains more than one input, the Model Optimizer is able to convert the model only with inputs specified in form 1, 2, or 3 of the list above. The last form is not supported for multi-input topologies.
Model Optimizer does not support mean file processing for topologies with more than one input. In this case, you need to perform the subtraction for every input of your multi-input model as preprocessing in the Inference Engine, using the generated Intermediate Representation.
There are multiple reasons why the Model Optimizer does not accept the mean file. See FAQs #4, #5, and #6.
There are multiple reasons why the Model Optimizer does not accept a Caffe* topology. See FAQs #7 and #20.
Model Optimizer tried to infer a specified layer via the Caffe* framework; however, it cannot construct a net using the Caffe Python* interface. Make sure that your caffemodel and prototxt files are correct. To prove that the problem is not in the prototxt file, see FAQ #2.
Model Optimizer tried to infer a custom layer via the Caffe* framework, but an error occurred, meaning that the model could not be inferred using Caffe. It might happen if you try to convert a model with noise weights and biases, resulting in problems with layers that have dynamic shapes. You should write your own extension for every custom layer your topology might have. For more details, refer to Extending Model Optimizer with New Primitives.
Your model contains a custom layer and you have correctly registered it with the CustomLayersMapping.xml file. These steps are required to offload shape inference of the custom layer with the help of the system Caffe*. However, the Model Optimizer could not import a Caffe package. Make sure that you have built Caffe with the pycaffe target and added it to the PYTHONPATH environment variable. For more information, refer to Configuring the Model Optimizer. At the same time, it is highly recommended to avoid a dependency on Caffe and write your own Model Optimizer extension for your custom layer. For more information, refer to FAQ #45.
You have run the Model Optimizer without the --framework caffe|tf|mxnet flag. Model Optimizer tries to deduce the framework from the input model file extension (.pb for TensorFlow*, .caffemodel for Caffe*, .params for MXNet*). Your input model might have a different extension, in which case you need to set the source framework explicitly. For example, use --framework caffe.
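A full command might look like this (the model file name is illustrative):

    python3 mo.py --input_model my_model.model --framework caffe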
Input shape was not provided. It is mandatory for converting an MXNet* model to the Intermediate Representation, because MXNet models do not contain information about input shapes. Please, use the --input_shape flag to specify it. For more information about using --input_shape, refer to FAQ #57.
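For example (the file name and shape are illustrative):

    python3 mo.py --input_model my_model-0000.params --input_shape [1,3,224,224]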
--mean_file and --mean_values are two ways of specifying preprocessing for the input. However, they cannot be used together, as it would mean double subtraction and lead to ambiguity. Choose one of these options and pass it using the corresponding CLI option.
You might have specified negative values with --mean_file_offsets. Only positive integer values in the format '(x,y)' must be used.
--scale sets a scaling factor for all channels. --scale_values sets a scaling factor per channel. Using both of them simultaneously produces ambiguity, so you must use only one of them. For more information, refer to the Using Framework-Agnostic Conversion Parameters section: for Converting a Caffe* Model, Converting a TensorFlow* Model, Converting an MXNet* Model.
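For example, to scale each channel separately (values are illustrative):

    python3 mo.py --input_model my_model.caffemodel --scale_values [58.4,57.1,57.4]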
Model Optimizer cannot find a .prototxt file for a specified model. By default, it must be located in the same directory as the input model and have the same name (except the extension). If any of these conditions is not satisfied, use --input_proto to specify the path to the .prototxt file.
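For example (file names are illustrative):

    python3 mo.py --input_model my_model.caffemodel --input_proto my_deploy.prototxt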
Model Optimizer cannot create a directory specified via --output_dir. Make sure that you have enough permissions to create the specified directory.
One of the layers in the specified topology might not have inputs or values. Please make sure that the provided caffemodel and protobuf files are correct.
Some of the layers are not supported by the Inference Engine and cannot be translated to an Intermediate Representation. You can extend the Model Optimizer by allowing generation of new types of layers and implement these layers in the dedicated Inference Engine plugins. For more information, refer to the Extending the Model Optimizer with New Primitives page and Inference Engine Extensibility Mechanism.
Model Optimizer cannot build a graph based on a specified model. Most likely, it is incorrect.
You might have specified an output node via the --output flag that does not exist in the provided model. Make sure that the specified output is correct and this node exists in the current model.
Most likely, the Model Optimizer tried to cut the model by a specified input. However, other inputs are needed.
You might have specified a placeholder node with an input node, while the placeholder node does not have it in the model.
This error occurs when an incorrect input port is specified with the --input command line argument. When using --input, you can optionally specify an input port in the form X:node_name, where X is an integer index of the input port starting from 0 and node_name is the name of a node in the model. This error occurs when the specified input port X is not in the range 0..(n-1), where n is the number of input ports for the node. Please, specify a correct port index, or do not use it if it is not needed.
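For example, to feed the second input port (index 1) of a node (the node name is illustrative):

    python3 mo.py --input_model my_model.pb --input 1:my_node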
This error occurs when an incorrect combination of the --input and --input_shape command line options is used. Using both --input and --input_shape is valid only if --input points to the Placeholder node, a node with one input port, or --input has the form PORT:NODE, where PORT is an integer port index of input for the node NODE. Otherwise, the combination of --input and --input_shape is incorrect.
When using the PORT:NODE notation for the --input command line argument and PORT > 0, you should specify --input_shape for this input. This is a limitation of the current Model Optimizer implementation.
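For example (the node name and shape are illustrative):

    python3 mo.py --input_model my_model.pb --input 1:my_node --input_shape [1,224,224,3]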
It looks like you have provided only one shape for the placeholder, while the model has either no inputs or multiple inputs. Please, make sure that you have provided correct data for the placeholder nodes.
This error occurs when the SubgraphMatch.single_input_node function is used for an input port that supplies more than one node in a sub-graph. The single_input_node function can be used only for ports that have a single consumer inside the matching sub-graph. When multiple nodes are connected to the port, use the input_nodes or node_by_pattern function instead of single_input_node. Please, refer to Sub-Graph Replacement in the Model Optimizer for more details.
This error occurs when the SubgraphMatch._add_output_node function is called manually from the user's extension code. This is an internal function, and you should not call it directly.
While using a configuration file to implement a TensorFlow* front replacement extension, an incorrect match kind was used. Only the points or scope match kinds are supported. Please, refer to Sub-Graph Replacement in the Model Optimizer for more details.
Model Optimizer tried to write an event file in the specified directory but failed to do that. That could happen because the specified directory does not exist or you do not have enough permissions to write in it.
Most likely, you tried to extend Model Optimizer with a new primitive, but did not specify an infer function. For more information on extensions, see Extending the Model Optimizer with New Primitives.
Model Optimizer cannot infer shapes or values for the specified node. It can happen because of a bug in the custom shape infer function, because the node inputs have incorrect values/shapes, or because the input shapes are incorrect.
The batch dimension is the first dimension in the shape, and it should be equal to 1 or undefined. In your case, it is not equal to either 1 or undefined, which is why the -b shortcut produces undefined and unspecified behavior. To resolve the issue, specify full shapes for each input with the --input_shape option. Run Model Optimizer with the --help option to learn more about the notation for input shapes.
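For example, instead of -b 1, pass the full shape (illustrative):

    python3 mo.py --input_model my_model.pb --input_shape [1,224,224,3]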
Most likely, the shape is not defined (partially or fully) for the specified node. You can use --input_shape with positive integers to override model input shapes.
This error occurs when the --input command line option is used to cut a model, --input_shape is not used to override shapes for a node, and a shape for the node cannot be inferred by Model Optimizer. You need to help Model Optimizer by specifying shapes with --input_shape for each node that is specified with the --input command line option.
To convert TensorFlow* models with Model Optimizer, TensorFlow 1.2 or newer must be installed. For more information on prerequisites, see Configuring the Model Optimizer.
The model file should contain a frozen TensorFlow* graph in the text or binary format. Make sure that --input_model_is_text is provided for a model in the text format. By default, a model is interpreted as a binary file.
Most likely, there is a problem with the specified model file. The file exists, but it is badly formatted or corrupted.
This means that the layer {layer_name} is not supported in the Model Optimizer. You can find a list of all unsupported layers in the corresponding section. You should add this layer to CustomLayersMapping.xml (Legacy Mode for Caffe* Custom Layers) or implement the extensions for this layer (Extending Model Optimizer with New Primitives).
Path to the custom replacement configuration file was provided with the --transformations_config flag, but the file could not be found. Please, make sure that the specified path is correct and the file exists.
When extending Model Optimizer with new primitives, keep in mind that their names are case insensitive. Most likely, another operation with the same name is already defined. For more information, see Extending the Model Optimizer with New Primitives.
Model Optimizer cannot load an MXNet* model in the specified file format. Please, use the .json or .param format.
There are models where the Placeholder has the UINT8 type and the first operation after it is 'Cast', which casts the input to FP32. Model Optimizer detected that the Placeholder has the UINT8 type, but the next operation is not 'Cast' to float. Model Optimizer does not support such a case. Please, change the model to have a placeholder with the FP32 data type.
Model Optimizer cannot convert the model to the specified data type. Currently, FP16 and FP32 are supported. Please, specify the data type with the --data_type flag. The available values are: FP16, FP32, half, float.
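For example, to generate an FP16 Intermediate Representation (the model name is illustrative):

    python3 mo.py --input_model my_model.caffemodel --data_type FP16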
Model Optimizer tried to access a node that does not exist. This could happen if you have incorrectly specified a placeholder, input, or output node name.
To convert MXNet* models with Model Optimizer, MXNet 1.0.0 must be installed. For more information about prerequisites, see Configuring the Model Optimizer.
Most likely, there is a problem with loading the MXNet* model. Please, make sure that the specified path is correct, the model exists and is not corrupted, and you have sufficient permissions to work with it.
Please, make sure that inputs are defined and have correct shapes. You can use --input_shape with positive integers to override model input shapes.
When extending Model Optimizer with new primitives, keep in mind that their names are case insensitive. Most likely, another operation with the same name is already defined. For more information, see Extending the Model Optimizer with New Primitives.
You cannot specify the batch and the input shape at the same time. You should specify a desired batch as the first value of the input shape.
The specified input shape cannot be parsed. Please, define it in one of the following ways:
Keep in mind that there must be no spaces between or inside the brackets for input shapes.
When specifying input shapes for several layers, you must provide names for the inputs whose shapes will be overwritten. For usage examples, see Converting a Caffe* Model. Additional information for --input_shape is in FAQ #57.
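For example, overriding the shapes of two inputs at once (names and shapes are illustrative):

    python3 mo.py --input_model my_model.caffemodel --input data,rois --input_shape [1,3,227,227],[1,5]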
Mean values for the given parameter cannot be parsed. It should be a string with a list of mean values. For example, in '(1,2,3)', 1 stands for the RED channel, 2 for the GREEN channel, 3 for the BLUE channel.
The number of channels and the number of given values for mean values do not match. The shape should be defined as '(R,G,B)' or '[R,G,B]'. The shape should not contain undefined dimensions (? or -1). The order of values is as follows: (value for a RED channel, value for a GREEN channel, value for a BLUE channel).
Most likely, you have not specified inputs using --mean_values. Please, specify inputs with the --input flag. For usage examples, please, refer to FAQ #63.
Most likely, you have not specified inputs using --scale_values. Please, specify inputs with the --input flag. For usage examples, please, refer to FAQ #64.
The number of specified mean values and the number of inputs must be equal. Please, refer to Converting a Caffe* Model for a usage example.
The number of specified scale values and the number of inputs must be equal. Please, refer to Converting a Caffe* Model for a usage example.
A replacement defined in the configuration file for sub-graph replacement using node name patterns or start/end nodes has the match_kind attribute. The attribute may have only one of the following values: scope or points. If a different value is provided, this error is displayed.
A replacement defined in the configuration file for sub-graph replacement using node name patterns or start/end nodes has the instances attribute. This attribute is mandatory, and this error occurs if it is missing. Refer to the documentation describing the sub-graph replacement feature.
A replacement defined in the configuration file for sub-graph replacement using start/end nodes has the instances attribute. For this type of replacement, the instance must be defined as a dictionary with two keys: start_points and end_points. Values for these keys are lists with the start and end node names, respectively. Refer to the documentation describing the sub-graph replacement feature.
A replacement for the specified id is not defined in the configuration file. Please, refer to FAQ #66 for more information.
Path to a custom replacement configuration file was provided with the --transformations_config flag, but it cannot be found. Please, make sure that the specified path is correct and the file exists.
The file for custom replacement configuration provided with the --transformations_config flag cannot be parsed. In particular, it should have a valid JSON structure. For more details, refer to JSON Schema Reference.
Every custom replacement should declare a set of mandatory attributes and their values. For more details, refer to FAQ #72.
The file for custom replacement configuration provided with the --transformations_config flag cannot pass validation. Make sure that you have specified id, instances, and match_kind for all the patterns.
The custom replacement configuration file provided with the --tensorflow_custom_operations_config_update flag cannot be parsed. Please, make sure that the file is correct and refer to FAQs #69, #70, #71, and #72.
This error occurs when you try to make a sub-graph match. It is detected that, between the start and end nodes that were specified as inputs/outputs of the sub-graph to find, there are nodes marked as outputs but there is no path from them to the input nodes. Make sure that the sub-graph you want to match does actually contain all the specified output nodes.
The start or end node for the sub-graph replacement using start/end nodes is specified incorrectly. Model Optimizer finds internal nodes of the sub-graph strictly "between" the start and end nodes. Then it adds all input nodes to the sub-graph (and inputs of their inputs and so on) for these "internal" nodes. The error reports that the Model Optimizer reached an input node during this phase. This means that the start/end points are specified incorrectly in the configuration file. Refer to the documentation describing the sub-graph replacement feature.
This message may appear when the --data_type=FP16 command line option is used. This option implies conversion of all the blobs in the node to FP16. If a value in a blob is out of the range of valid FP16 values, the value is converted to positive or negative infinity. It may lead to incorrect results of inference or may not be a problem, depending on the model. The number of such elements and the total number of elements in the blob is printed out together with the name of the node, where this blob is used.
This message may appear when the --data_type=FP16 command line option is used. This option implies conversion of all blobs in the model to FP16. If a value in a blob is so close to zero that it cannot be represented as a valid FP16 value, it is converted to a true zero FP16 value. Depending on the model, it may lead to incorrect results of inference or may not be a problem. The number of such elements and the total number of elements in the blob are printed out together with the name of the node where this blob is used.
This error occurs when the SubgraphMatch.node_by_pattern function is used with a pattern that does not uniquely identify a single node in a sub-graph. Try to extend the pattern string to make the match to a single sub-graph node unambiguous. For more details, refer to Sub-graph Replacement in the Model Optimizer.
Your Caffe* topology .prototxt file is intended for training. Model Optimizer expects a deployment-ready .prototxt file. To fix the problem, prepare a deployment-ready .prototxt file. Preparing a deploy-ready topology usually results in removing data layer(s), adding input layer(s), and removing loss layer(s).
You are using an unsupported Python* version. Use only versions 3.4 to 3.6 for the C++ protobuf implementation that is supplied with the OpenVINO toolkit. You can still boost conversion speed by building the protobuf library from sources. For complete instructions about building protobuf from sources, see the appropriate section in Converting a Model to Intermediate Representation.
This error occurs if you do not provide the --nd_prefix_name, --pretrained_model_name, and --input_symbol parameters. Model Optimizer requires both .params and .nd model files to merge into the resulting file (.params). The topology description (.json file) should be prepared (merged) in advance and provided with the --input_symbol parameter.
If you add additional layers and weights that are in .nd files to your model, the Model Optimizer can build a model from one .params file and two additional .nd files (*_args.nd, *_auxs.nd). To do that, provide both CLI options, or do not pass them if you want to convert an MXNet model without additional weights. For more information, refer to Converting an MXNet* Model.
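A sketch of such an invocation, assuming the mo_mxnet.py entry point (all file and prefix names are illustrative):

    python3 mo_mxnet.py --input_model my_model-0000.params \
        --input_symbol my_model-symbol.json \
        --nd_prefix_name my_prefix --pretrained_model_name my_pretrained \
        --input_shape [1,3,224,224]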
When the model has multiple inputs and you want to provide mean/scale values, you need to pass those values for each input. More specifically, the number of passed values should be the same as the number of inputs of the model. For more information, refer to Converting a Model to Intermediate Representation.
When you pass the mean/scale values and specify names of input layers of the model, you might have used a name that does not correspond to any input layer. Make sure that, when passing values with the --input option, you list only names of the input layers of your model. For more information, refer to Converting a Model to Intermediate Representation.
Most likely, the .json file does not exist or has a name that does not match the notation of MXNet. Make sure that the file exists and has a correct name. For more information, refer to Converting an MXNet* Model.
Model Optimizer for MXNet supports only the .params and .nd file formats. Most likely, you specified an unsupported file format in --input_model. For more information, refer to Converting an MXNet* Model.
Model Optimizer tried to load a model that contains some unsupported operations. If you want to convert a model that contains unsupported operations, you need to prepare an extension for each such operation. For more information, refer to Extending Model Optimizer with New Primitives.
This error appears if the implementation class of an op for a Python Caffe layer could not be used by the Model Optimizer. Python layers should be handled differently compared to ordinary Caffe layers.
In particular, you need to call the function register_caffe_python_extractor and pass name as the second argument of the function. The name should be the combination of the module name and the layer name separated by a dot.
For example, your topology contains this layer with type Python:
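(A sketch of such a layer; the module, layer, and parameters are illustrative.)

    layer {
      name: 'proposal'
      type: 'Python'
      bottom: 'rpn_cls_prob_reshape'
      bottom: 'rpn_bbox_pred'
      top: 'proposal'
      python_param {
        # Python module and class that implement the layer.
        module: 'rpn.proposal_layer'
        layer: 'ProposalLayer'
        param_str: "'feat_stride': 16"
      }
    }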
The first thing to do is implement an extension for this layer in the Model Optimizer as an ancestor of the Op class.
It is mandatory to call two functions right after the implementation of that class:
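(A sketch following the example above; adapt the class name and the registered name to your layer.)

    class ProposalPythonExampleOp(Op):
        op = 'Proposal'

        def __init__(self, graph, attrs):
            ...

    # Register the extractor under the dot-joined module and layer name.
    register_caffe_python_extractor(ProposalPythonExampleOp, 'rpn.proposal_layer.ProposalLayer')

    # Exclude the class from being used as an extension for the 'Proposal' type.
    Op.excluded_classes.append(ProposalPythonExampleOp)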
Note that the first call, register_caffe_python_extractor(ProposalPythonExampleOp, 'rpn.proposal_layer.ProposalLayer'), registers the extension of the layer in the Model Optimizer so that it can be found by the specific name (it is mandatory to join the module name and the layer name): rpn.proposal_layer.ProposalLayer.
The second call prevents the Model Optimizer from using this extension as if it were an extension for a layer with type Proposal. Otherwise, this layer can be chosen as the implementation of the extension, which can lead to potential issues. For more information, refer to Extending Model Optimizer with New Primitives.
Model Optimizer supports only Memory layers in which input_memory goes before a ScaleShift or FullyConnected layer. This error message means that in your model the layer after input memory is not of type ScaleShift or FullyConnected. This is a known limitation.
These error messages mean that the Model Optimizer does not support your Kaldi* model, because the checksum of the model is not 16896 (the model should start with this number) or the model file does not contain the <Net> tag as the starting one. Double-check that you provide a path to a true Kaldi model and try again.
These messages mean that the counts file you passed contains more than one line. The counts file should start with [ and end with ], and the integer values between those brackets should be separated by spaces.
There are multiple reasons why the Model Optimizer does not accept a Kaldi topology: the file is not available or does not exist. Refer to FAQ #89.
There are multiple reasons why the Model Optimizer does not accept a counts file: the file is not available or does not exist. Also refer to FAQ #90.
This message means that your model has custom layers and its json file has been generated with an MXNet version lower than 1.0.0; the Model Optimizer does not support such topologies. If you want to convert such a model, you have to rebuild MXNet with the unsupported layers or generate a new json file with MXNet version 1.0.0 or higher. You also need to implement an Inference Engine extension for the custom layers used. For more information, refer to the appropriate section of Model Optimizer configuration.
Model Optimizer supports only straightforward models without cycles.
There are multiple ways to avoid cycles:
For TensorFlow:
For all frameworks:
or
This message means that the model is not supported. It may be caused by using shapes larger than 4-D. There are two ways to avoid such a message:
This error message means that the Model Optimizer does not support your Kaldi model, because the Net contains a ParallelComponent that does not end with the tag </ParallelComponent>. Double-check that you provide a path to a true Kaldi model and try again.
There are many flavors of the Caffe framework, and most layers in them are implemented identically. But there are exceptions. For example, the output value of the Interp layer is calculated differently in Deeplab-Caffe and classic Caffe. So if your model contains the Interp layer and conversion of your model has failed, modify the interp_infer function in the file extensions/ops/interp.op according to the comments in the file.
It means that your mean/scale values have the wrong format. Specify mean/scale values in the form layer_name(val1,val2,val3). You need to specify values for each input of the model. For more information, refer to Converting a Model to Intermediate Representation.
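For example (the input name and values are illustrative):

    python3 mo.py --input_model my_model.caffemodel --input data --mean_values data(104,117,123)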
It means that you are trying to convert a topology that contains the '_contrib_box_nms' operation, which is not supported directly. However, the sub-graph of operations including '_contrib_box_nms' could be replaced with a DetectionOutput layer if your topology is one of the gluoncv topologies. Specify the '--enable_ssd_gluoncv' command line parameter for the Model Optimizer to enable this transformation.