Standard Caffe* layers:
Layer Name in Caffe* | Limitations |
---|---|
Axpy | No |
BN | No |
BatchNorm | No |
Bias | No |
Concat | No |
Convolution | No |
Deconvolution | No |
DetectionOutput | No |
Dropout | Not needed for inference |
Eltwise | No |
Flatten | No |
GlobalInput | No |
InnerProduct | No |
Input | No |
LRN | No |
Permute | No |
Pooling | No |
Power | No |
ROIPooling | No |
ReLU | No |
Reshape | No |
Scale | No |
ShuffleChannel | No |
Slice | No |
Softmax | No |
Tile | No |
Standard MXNet* symbols:
Symbol Name in MXNet* | Limitations |
---|---|
Activation | Supported "act_type" values: "relu", "sigmoid", "softrelu", and "tanh"
BatchNorm | No |
Concat | No |
Convolution | No |
Crop | "center_crop" = 1 is not supported |
Custom | Custom Layers in the Model Optimizer |
Deconvolution | No |
DeformableConvolution | No |
DeformablePSROIPooling | No |
Dropout | Not needed for inference |
ElementWiseSum | No |
Embedding | No |
Flatten | No |
FullyConnected | No |
InstanceNorm | No |
L2Normalization | Only 4D input is supported
LRN | No |
LeakyReLU | No |
Pad | No |
Pooling | No |
ROIPooling | No |
ReLU | No |
Reshape | No |
ScaleShift | No |
SoftmaxActivation | No |
SoftmaxOutput | No |
SoftSign | No |
Tile | No |
UpSampling | No |
Where | No |
_Plus | No |
_contrib_MultiBoxDetection | "force_suppress" = 1 is not supported, non-default variances are not supported |
_contrib_MultiBoxPrior | No |
_contrib_Proposal | No |
_copy | Not needed for inference |
_minus_scalar | No |
_mul_scalar | No |
_arange | No |
_contrib_AdaptiveAvgPooling2D | Converted to Average Pooling with fixed paddings
_maximum | No |
_minimum | No |
add_n | No |
broadcast_add | No |
broadcast_mul | No |
cumsum | No |
div_scalar | No |
elementwise_sub | No |
elemwise_add | No |
elemwise_mul | No |
exp | No |
expand_dims | No |
greater_scalar | No |
minus_scalar | No |
null | Not needed for inference |
repeat | No |
rnn | No |
rnn_param_concat | No |
sigmoid | No |
slice | No |
slice_axis | No |
slice_channel | No |
slice_like | No |
stack | No |
swapaxis | No |
tile | No |
transpose | No |
zeros | No |
Some TensorFlow* operations do not match any Inference Engine layer but are still supported by the Model Optimizer and can be used on the constant propagation path. These operations are labeled 'Constant propagation' in the table.
Standard TensorFlow* operations:
Operation Name in TensorFlow* | Limitations |
---|---|
Add | No |
AddN | No |
ArgMax | No |
AvgPool | No |
BatchToSpaceND | No |
BiasAdd | No |
Bucketize | CPU only |
Cast | No |
Ceil | No |
Concat | No |
ConcatV2 | No |
Const | No |
Conv2D | No |
Conv2DBackpropInput | No |
Cos | No |
Cosh | No |
CropAndResize | "method" = "bilinear" only |
CumSum | No |
DepthToSpace | No |
DepthwiseConv2dNative | No |
Enter | Supported only when it is fused to the TensorIterator layer |
Equal | No |
Exit | Supported only when it is fused to the TensorIterator layer |
Exp | No |
ExpandDims | No |
ExperimentalSparseWeightedSum | CPU only |
ExtractImagePatches | No |
Fill | No |
Floor | No |
FusedBatchNorm | No |
Gather | No |
GatherNd | Supported if it can be replaced with Gather |
GatherV2 | No |
Greater | No |
GreaterEqual | No |
Identity | Not needed for shape inference |
LRN | No |
Less | No |
Log | No |
Log1p | No |
LogicalAnd | No |
LogicalOr | No |
LogicalNot | No |
LogSoftmax | No |
LoopCond | Supported only when it is fused to the TensorIterator layer |
MatMul | No |
Max | No |
MaxPool | No |
Maximum | No |
Mean | No |
Merge | Supported only when it is fused to the TensorIterator layer |
Min | No |
Minimum | No |
MirrorPad | No |
Mul | No |
Neg | No |
NextIteration | Supported only when it is fused to the TensorIterator layer |
NonMaxSuppressionV3 | No |
NonMaxSuppressionV4 | No |
NonMaxSuppressionV5 | No |
NoOp | No |
OneHot | No |
Pack | No |
Pad | No |
PadV2 | No |
Placeholder | No |
PlaceholderWithDefault | No |
Prod | No |
Range | No |
Rank | No |
RealDiv | No |
Relu | No |
Relu6 | No |
Reshape | No |
ResizeBilinear | No |
ResizeNearestNeighbor | No |
ResourceGather | No |
ReverseSequence | No |
Round | No |
Rsqrt | No |
Shape | No |
Sigmoid | No |
Sin | No |
Sinh | No |
Size | No |
Slice | No |
Softmax | No |
Softplus | No |
Softsign | No |
SpaceToBatchND | No |
SparseToDense | CPU only |
Split | No |
SplitV | No |
Sqrt | No |
Square | No |
SquaredDifference | No |
Squeeze | Not supported when the squeeze axis is not specified
StopGradient | Not needed for shape inference |
StridedSlice | No |
Sub | No |
Sum | No |
Swish | No |
Switch | Control flow propagation |
Tan | No |
Tanh | No |
TensorArrayGatherV3 | Supported only when it is fused to the TensorIterator layer |
TensorArrayReadV3 | Supported only when it is fused to the TensorIterator layer |
TensorArrayScatterV3 | Supported only when it is fused to the TensorIterator layer |
TensorArraySizeV3 | Supported only when it is fused to the TensorIterator layer |
TensorArrayV3 | Supported only when it is fused to the TensorIterator layer |
TensorArrayWriteV3 | Supported only when it is fused to the TensorIterator layer |
Tile | No |
TopKV2 | No
Transpose | No |
Unpack | No |
Where | No |
ZerosLike | No |
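Before converting a model, it can be useful to screen its operation types against a table like the one above. The sketch below is a hypothetical helper, not part of the Model Optimizer API, and it includes only a small subset of the table's entries for brevity.

```python
# Hypothetical helper: flag operation types that do not appear in the
# supported-operations table. Only a few table entries are included here.
SUPPORTED_TF_OPS = {
    "Add", "AvgPool", "BiasAdd", "Concat", "Const", "Conv2D",
    "Identity", "MatMul", "MaxPool", "Mul", "Placeholder",
    "Relu", "Reshape", "Softmax",
}

def unsupported_ops(node_op_types):
    """Return the op types from a graph that are not in the supported set."""
    return sorted(set(node_op_types) - SUPPORTED_TF_OPS)

# Example: a toy graph containing one op type the table does not list.
ops = ["Placeholder", "Conv2D", "BiasAdd", "Relu", "FancyCustomOp", "Softmax"]
print(unsupported_ops(ops))  # ['FancyCustomOp']
```

In practice, the op types would come from iterating over the nodes of a frozen graph rather than a hand-written list.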
Standard Kaldi* layers:
Symbol Name in Kaldi* | Limitations |
---|---|
addshift | No |
affinecomponent | No |
affinetransform | No |
clipgradientcomponent | Not needed for inference |
concat | No |
convolutional1dcomponent | No |
convolutionalcomponent | No |
copy | No |
Crop | No |
elementwiseproductcomponent | No |
fixedaffinecomponent | No |
linearcomponent | No |
logsoftmaxcomponent | No |
lstmnonlinearitycomponent | No |
lstmprojected | No |
lstmprojectedstreams | No |
maxpoolingcomponent | No |
naturalgradientaffinecomponent | No |
naturalgradientperelementscalecomponent | No |
noopcomponent | Not needed for inference |
normalizecomponent | No |
parallelcomponent | No |
pnormcomponent | No |
rectifiedlinearcomponent | No |
rescale | No |
sigmoid | No |
slice | No |
softmax | No |
softmaxComponent | No |
softsign | No |
splicecomponent | No |
tanhcomponent | No |
Standard ONNX* operators:
Symbol Name in ONNX* | Limitations |
---|---|
Abs | No |
Acos | No |
Add | No |
Affine | No |
ArgMax | No |
Asin | No |
Atan | No |
AveragePool | No |
BatchMatMul | No |
BatchNormalization | No |
Cast | No |
Ceil | No |
Clip | No |
Concat | No |
Constant | No |
ConstantFill | No |
ConstantOfShape | No |
Conv | No |
ConvTranspose | No
Cos | No |
Cosh | No |
Crop | No |
CumSum | No |
DequantizeLinear | Supported only in combination with QuantizeLinear; see the description of that operation
DetectionOutput (Intel experimental) | No |
Div | No |
Dropout | Not needed for inference |
Elu | No |
Equal | No |
Erf | No |
Expand | No |
FakeQuantize (Intel experimental) | No |
Fill | No |
Flatten | No |
Floor | No |
GRU | No |
Gather | No |
GatherTree | No |
Gemm | No |
GlobalAveragePool | No |
GlobalMaxPool | No |
Greater | No |
GreaterEqual | No |
HardSigmoid | No |
Identity | Not needed for inference |
ImageScaler | No |
LRN | No |
LSTM | Peepholes are not supported |
LeakyRelu | No |
Less | No |
LessEqual | No |
Log | No |
LogicalAnd | No |
LogicalOr | No |
LogSoftmax | No |
MatMul | No |
MaxPool | No |
MeanVarianceNormalization | Reduction over the batch dimension is not supported, reduction over all dimensions except batch and channel ones is obligatory |
Mul | No |
Neg | No |
NonMaxSuppression | No |
NonZero | No |
Not | No |
NotEqual | No |
OneHot | No |
Pad | No |
Pow | No |
PriorBox (Intel experimental) | No |
QuantizeLinear | Supported only in combination with DequantizeLinear. When the two operations follow each other in the graph and their scale and zero-point values are the same (or explicitly shared), the pair is fused into a single 'FakeQuantize' operation
RNN | No |
ROIAlign | No |
Range | No |
Reciprocal | No |
ReduceMax | No |
ReduceMean | No |
ReduceMin | No |
ReduceProd | No |
ReduceSum | No |
Relu | No |
Reshape | No |
Resize | Opset-10 version is supported |
ReverseSequence | No |
Scatter | Supported only when it can be fused to ScatterUpdate; MYRIAD only
ScatterElements | Supported only when it can be fused to ScatterUpdate; MYRIAD only
Select | No |
Shape | No |
Sigmoid | No |
Sign | No |
Sin | No |
Slice | No |
Softmax | No |
Softplus | No |
Softsign | No |
SpaceToDepth | No |
Sqrt | No |
Squeeze | Not supported when the squeeze axis is not specified
Sub | No |
Sum | No |
Tan | No |
Tanh | No |
TopK | No |
Transpose | No |
Unsqueeze | No |
Upsample | No |
Where | No |
Xor | No |
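The QuantizeLinear row above states a fusion condition: the QuantizeLinear/DequantizeLinear pair is folded into a single FakeQuantize only when the two operations follow each other and share the same scale and zero-point values. The sketch below is a hypothetical illustration of that check, not Model Optimizer source code.

```python
# Hypothetical sketch of the QuantizeLinear / DequantizeLinear fusion
# condition from the table: the pair is fusible into FakeQuantize only when
# the ops are adjacent and their quantization parameters match.
from dataclasses import dataclass

@dataclass
class QDQNode:
    op_type: str      # "QuantizeLinear" or "DequantizeLinear"
    scale: float
    zero_point: int

def can_fuse(first: QDQNode, second: QDQNode) -> bool:
    """Check the fusion condition stated in the QuantizeLinear row."""
    return (first.op_type == "QuantizeLinear"
            and second.op_type == "DequantizeLinear"
            and first.scale == second.scale
            and first.zero_point == second.zero_point)

q = QDQNode("QuantizeLinear", scale=0.05, zero_point=128)
dq = QDQNode("DequantizeLinear", scale=0.05, zero_point=128)
print(can_fuse(q, dq))  # True
```

A pair with mismatched scales or zero-points would fail this check, and the two operations would then have to be handled individually rather than as a FakeQuantize.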