Standard Caffe* layers:
Layer Name in Caffe* | Limitations
---|---
Axpy | No |
BN | No |
BatchNorm | No |
Bias | No |
Concat | No |
Convolution | No |
Deconvolution | No |
DetectionOutput | No |
Dropout | Not needed for inference |
Eltwise | No |
Flatten | No |
GlobalInput | No |
InnerProduct | No |
Input | No |
LRN | No |
Permute | No |
Pooling | No |
Power | No |
ROIPooling | No |
ReLU | No |
Reshape | No |
Scale | No |
ShuffleChannel | No |
Slice | No |
Softmax | No |
Tile | No |
Standard MXNet* symbols:
Symbol Name in MXNet* | Limitations
---|---
Activation | Supported "act_type" values: "relu", "sigmoid", "tanh"
BatchNorm | No |
Concat | No |
Convolution | No |
Crop | "center_crop" = 1 is not supported |
Custom | Custom Layers in the Model Optimizer |
Deconvolution | No |
DeformableConvolution | No |
DeformablePSROIPooling | No |
Dropout | Not needed for inference |
ElementWiseSum | No |
Embedding | No |
Flatten | No |
FullyConnected | No |
InstanceNorm | No |
L2Normalization | Only 4D input is supported
LRN | No |
LeakyReLU | No |
Pad | No |
Pooling | No |
ROIPooling | No |
ReLU | No |
Reshape | Shape values equal to -2, -3 and -4 are not supported |
ScaleShift | No |
SoftmaxActivation | No |
SoftmaxOutput | No |
Tile | No |
UpSampling | No |
Where | No |
_Plus | No |
_contrib_MultiBoxDetection | "force_suppress" = 1 is not supported, non-default variances are not supported |
_contrib_MultiBoxPrior | No |
_contrib_Proposal | No |
_copy | Not needed for inference |
_minus_scalar | No |
_mul_scalar | No |
_arange | No |
_contrib_AdaptiveAvgPooling2D | Converted to the Average Pooling layer with fixed paddings
_maximum | No |
_minimum | No |
add_n | No |
broadcast_add | No |
broadcast_mul | No |
div_scalar | No |
elementwise_sub | No |
elemwise_add | No |
elemwise_mul | No |
exp | No |
expand_dims | No |
greater_scalar | No |
minus_scalar | No |
null | Not needed for inference |
repeat | No |
rnn | No |
rnn_param_concat | No |
sigmoid | No |
slice | No |
slice_axis | No |
slice_channel | No |
slice_like | No |
stack | No |
swapaxis | No |
tile | No |
transpose | No |
zeros | No |
Some TensorFlow* operations do not match any Inference Engine layer but are still supported by the Model Optimizer and can be used on the constant propagation path. These operations are labeled 'Constant propagation' in the table.
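As a rough illustration of what "constant propagation" means here, the toy graph below folds operations whose inputs are all constants into constant nodes at conversion time. The graph structure and helper names are hypothetical, chosen only for this sketch; this is not Model Optimizer code.

```python
# Toy constant propagation over a tiny operation graph.
# (Hypothetical data structures for illustration only.)

OPS = {"Add": lambda a, b: a + b, "Mul": lambda a, b: a * b}

def fold_constants(graph):
    """Replace every op whose inputs are all Const nodes with a Const node."""
    changed = True
    while changed:
        changed = False
        for name, node in graph.items():
            if node["op"] in OPS and all(
                graph[i]["op"] == "Const" for i in node["inputs"]
            ):
                value = OPS[node["op"]](*(graph[i]["value"] for i in node["inputs"]))
                graph[name] = {"op": "Const", "value": value, "inputs": []}
                changed = True
    return graph

graph = {
    "two":   {"op": "Const", "value": 2, "inputs": []},
    "three": {"op": "Const", "value": 3, "inputs": []},
    "scale": {"op": "Mul", "inputs": ["two", "three"]},  # foldable: both inputs are Const
}
folded = fold_constants(graph)
print(folded["scale"])  # the Mul node collapses to a Const node
```

An operation that only appears on such a path (for example, a `Fill` feeding a shape computation) never needs a matching Inference Engine layer, because its result is baked into the converted model as a constant.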
Standard TensorFlow* operations:
Operation Name in TensorFlow* | Limitations
---|---
Add | No |
AddN | No |
ArgMax | No |
AvgPool | No |
BatchToSpaceND | Supported only as part of a pattern that is converted to the Convolution layer dilation attribute; Constant propagation
BiasAdd | No |
Bucketize | CPU only |
Cast | No |
Concat | No |
ConcatV2 | No |
Const | No |
Conv2D | No |
Conv2DBackpropInput | No |
Cos | No |
Cosh | No |
CropAndResize | "method" = "bilinear" only |
DepthToSpace | No |
DepthwiseConv2dNative | No |
Enter | Supported only when it is fused to the TensorIterator layer |
Equal | No |
Exit | Supported only when it is fused to the TensorIterator layer |
Exp | No |
ExpandDims | No |
ExperimentalSparseWeightedSum | CPU only |
ExtractImagePatches | No |
Fill | No |
FusedBatchNorm | No |
Gather | No |
GatherNd | Supported if it can be replaced with Gather |
GatherV2 | No |
Greater | No |
GreaterEqual | No |
Identity | Not needed for shape inference |
LRN | No |
Less | No |
Log1p | No |
LogicalAnd | No |
LogicalOr | No |
LogicalNot | No |
LoopCond | Supported only when it is fused to the TensorIterator layer |
MatMul | No |
Max | No |
MaxPool | No |
Maximum | No |
Mean | No |
Merge | Supported only when it is fused to the TensorIterator layer |
Min | No |
Minimum | No |
MirrorPad | No |
Mul | No |
Neg | No |
NextIteration | Supported only when it is fused to the TensorIterator layer |
NonMaxSuppressionV3 | No |
NonMaxSuppressionV4 | No |
NonMaxSuppressionV5 | No |
OneHot | No |
Pack | No |
Pad | No |
PadV2 | No |
Placeholder | No |
PlaceholderWithDefault | No |
Prod | No |
Range | No |
Rank | No |
RealDiv | No |
Relu | No |
Relu6 | No |
Reshape | No |
ResizeBilinear | No |
ResizeNearestNeighbor | No |
ResourceGather | No |
ReverseSequence | No |
Round | No |
Rsqrt | No |
Shape | No |
Sigmoid | No |
Sin | No |
Sinh | No |
Size | No |
Slice | No |
Softmax | No |
SpaceToBatchND | Supported only as part of a pattern that is converted to the Convolution layer dilation attribute; Constant propagation
SparseToDense | CPU only |
Split | No |
SplitV | No |
Sqrt | No |
Square | No |
SquaredDifference | No |
Squeeze | The case when squeeze axis is not specified is not supported |
StopGradient | Not needed for shape inference |
StridedSlice | No |
Sub | No |
Sum | No |
Swish | No |
Switch | Control flow propagation |
Tan | No |
Tanh | No |
TensorArrayGatherV3 | Supported only when it is fused to the TensorIterator layer |
TensorArrayReadV3 | Supported only when it is fused to the TensorIterator layer |
TensorArrayScatterV3 | Supported only when it is fused to the TensorIterator layer |
TensorArraySizeV3 | Supported only when it is fused to the TensorIterator layer |
TensorArrayV3 | Supported only when it is fused to the TensorIterator layer |
TensorArrayWriteV3 | Supported only when it is fused to the TensorIterator layer |
Tile | No |
TopKV2 | No
Transpose | No |
Unpack | No |
ZerosLike | No |
Standard Kaldi* layers:
Symbol Name in Kaldi* | Limitations
---|---
addshift | No |
affinecomponent | No |
affinetransform | No |
clipgradientcomponent | Not needed for inference |
concat | No |
convolutional1dcomponent | No |
convolutionalcomponent | No |
copy | No |
Crop | No |
elementwiseproductcomponent | No |
fixedaffinecomponent | No |
linearcomponent | No |
logsoftmaxcomponent | No |
lstmnonlinearitycomponent | No |
lstmprojected | No |
lstmprojectedstreams | No |
maxpoolingcomponent | No |
naturalgradientaffinecomponent | No |
naturalgradientperelementscalecomponent | No |
noopcomponent | Not needed for inference |
normalizecomponent | No |
parallelcomponent | No |
pnormcomponent | No |
rectifiedlinearcomponent | No |
rescale | No |
sigmoid | No |
slice | No |
softmax | No |
softmaxComponent | No |
softsign | No |
splicecomponent | No |
tanhcomponent | No |
Standard ONNX* operators:
Symbol Name in ONNX* | Limitations
---|---
Abs | No |
Acos | No |
Add | No |
Affine | No |
ArgMax | No |
Asin | No |
Atan | No |
AveragePool | No |
BatchMatMul | No |
BatchNormalization | No |
Cast | No |
Ceil | No |
Clip | No |
Concat | No |
Constant | No |
ConstantFill | No |
ConstantOfShape | No |
Conv | No |
ConvTranspose | No
Cos | No |
Cosh | No |
Crop | No |
DetectionOutput (Intel experimental) | No |
Div | No |
Dropout | Not needed for inference |
Elu | No |
Equal | No |
Erf | No |
Expand | No |
FakeQuantize (Intel experimental) | No |
Fill | No |
Flatten | No |
Floor | No |
GRU | No |
Gather | No |
GatherTree | No |
Gemm | No |
GlobalAveragePool | No |
GlobalMaxPool | No |
Greater | No |
GreaterEqual | No |
HardSigmoid | No |
Identity | Not needed for inference |
ImageScaler | No |
LRN | No |
LSTM | Peepholes are not supported |
LeakyRelu | No |
Less | No |
LessEqual | No |
Log | No |
LogicalAnd | No |
LogicalOr | No |
MatMul | No |
MaxPool | No |
Mul | No |
Neg | No |
NonMaxSuppression | No |
Not | No |
NotEqual | No |
OneHot | No |
Pad | No |
Pow | No |
PriorBox (Intel experimental) | No |
RNN | No |
Reciprocal | No |
ReduceMax | No |
ReduceMean | No |
ReduceMin | No |
ReduceProd | No |
ReduceSum | No |
Relu | No |
Reshape | No |
Resize | Only the opset-10 version is supported
Select | No |
Shape | No |
Sigmoid | No |
Sign | No |
Sin | No |
Slice | No |
Softmax | No |
SpaceToDepth | No |
Sqrt | No |
Squeeze | The case when squeeze axis is not specified is not supported |
Sub | No |
Sum | No |
Tan | No |
Tanh | No |
TopK | No |
Transpose | No |
Unsqueeze | No |
Upsample | No |
Xor | No |