Performance Information Frequently Asked Questions

The following questions and answers are related to performance benchmarks published on the Performance Information documentation site.

1. How often do performance benchmarks get updated?

New performance benchmarks are typically published on every major.minor release of the Intel® Distribution of OpenVINO™ toolkit.

2. Where can I find the models used in the performance benchmarks?

All of the models used are included in the toolkit's Open Model Zoo GitHub repository.
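
If you want to fetch one of these models locally, the Open Model Zoo repository ships a downloader script. Below is a minimal sketch; the clone path, model name, and output directory are illustrative assumptions, not fixed values.

```python
# Minimal sketch: download one of the benchmarked models from a local clone of
# the Open Model Zoo repository. Paths and the model name are assumptions.
import subprocess

subprocess.run(
    [
        "python",
        "open_model_zoo/tools/downloader/downloader.py",  # assumed clone layout
        "--name", "resnet-50",    # one of the public models used in the benchmarks
        "--output_dir", "models",
    ],
    check=True,
)
```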

3. Will there be new models added to the list used for benchmarking?

The models used in the performance benchmarks were chosen based on their general adoption and use in deployment scenarios. We continue to add new models that cover a diverse set of workloads and use cases.

4. What does CF or TF in the graphs stand for?

CF means Caffe*, while TF means TensorFlow*.

5. How can I reproduce the benchmark results on my own?

All of the performance benchmarks were generated using benchmark_app, an open-source tool included in the Intel® Distribution of OpenVINO™ toolkit that is available in both C++ and Python versions.
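
For context on what benchmark_app reports, the sketch below measures synchronous throughput with the Inference Engine Python API. The model path, device, and iteration count are assumptions, and benchmark_app itself adds asynchronous inference requests, streams, and latency statistics on top of this.

```python
# Minimal throughput-measurement sketch in the spirit of benchmark_app,
# using the Inference Engine Python API. Paths and device are assumptions.
import time
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # hypothetical IR
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))
shape = net.input_info[input_name].input_data.shape  # e.g. [1, 3, 224, 224]
dummy = np.random.rand(*shape).astype(np.float32)    # random input data

n_iter = 100
start = time.perf_counter()
for _ in range(n_iter):
    exec_net.infer({input_name: dummy})
elapsed = time.perf_counter() - start
print(f"Throughput: {n_iter / elapsed:.2f} FPS")
```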

6. What image sizes are used for the classification network models?

The image size used for inference depends on the network being benchmarked. The following table lists the input size for each network model; a short sketch after the table shows how to query a model's input shape directly.

Model                          Public Network                       Task               Input Size (Height x Width)
faster_rcnn_resnet50_coco-TF   Faster RCNN TF                       object detection   600x1024
googlenet-v1-CF                GoogLeNet_ILSVRC-2012_Caffe          classification     224x224
googlenet-v3-TF                Inception v3 TF                      classification     299x299
mobilenet-ssd-CF               SSD (MobileNet)_COCO-2017_Caffe      object detection   300x300
mobilenet-v2-1.0-224-TF        MobileNet v2 TF                      classification     224x224
mobilenet-v2-CF                MobileNet v2 Caffe                   classification     224x224
resnet-101-CF                  ResNet-101_ILSVRC-2012_Caffe         classification     224x224
resnet-50-CF                   ResNet-50_v1_ILSVRC-2012_Caffe       classification     224x224
se-resnext-50-CF               Se-ResNext-50_ILSVRC-2012_Caffe      classification     224x224
squeezenet1.1-CF               SqueezeNet_v1.1_ILSVRC-2012_Caffe    classification     227x227
ssd300-CF                      SSD (VGG-16)_VOC-2007_Caffe          object detection   300x300
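
Rather than relying on the table, you can query a converted model's expected input shape programmatically. A minimal sketch, assuming the Inference Engine Python API and a hypothetical local IR file:

```python
# Print each input's shape for a converted (IR) model. The file names are
# hypothetical placeholders.
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="resnet-50.xml", weights="resnet-50.bin")
for name, info in net.input_info.items():
    # Shapes are reported in NCHW layout, e.g. [1, 3, 224, 224] -> 224x224 input.
    print(name, info.input_data.shape)
```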

7. Where can I purchase the specific hardware used in the benchmarking?

Intel partners with vendors all over the world. For a list of Equipment Makers, visit the Intel® AI: In Production Partners & Solutions Catalog and see the Supported Devices documentation. You can also test and run models remotely before purchasing any hardware by using Intel® DevCloud for the Edge.

8. How can I optimize my models for better performance or accuracy?

We have published a set of guidelines and recommendations for optimizing your models, available in an introductory guide and an advanced guide. For further support, please join the conversation in the Community Forum.

9. Why are INT8 optimized models used for benchmarking on CPUs with no VNNI support?

The benefit of low-precision (INT8) optimization with the OpenVINO™ toolkit extends beyond processors that support VNNI through Intel® DL Boost. Because INT8 uses a quarter of the bit width of FP32, Intel® CPUs can process the data faster, so a converted model gains throughput regardless of which low-precision optimizations the Intel® hardware supports natively. Please refer to INT8 vs. FP32 Comparison on Select Networks and Platforms for a comparison of boost factors across different network models and a selection of Intel® CPU architectures, including AVX2 with the Intel® Core™ i7-8700T and AVX-512 (VNNI) with the Intel® Xeon® 5218T and Intel® Xeon® 8270.
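
As a back-of-the-envelope illustration of the bandwidth argument above (not a measured result), the same tensor occupies a quarter of the bytes in INT8 that it does in FP32:

```python
# INT8 stores 1 byte per element versus 4 bytes for FP32, so the same tensor
# moves 4x less data. Shape taken from the 224x224 classification inputs above.
import numpy as np

shape = (1, 3, 224, 224)
fp32_bytes = int(np.prod(shape)) * np.dtype(np.float32).itemsize  # 602,112 bytes
int8_bytes = int(np.prod(shape)) * np.dtype(np.int8).itemsize     # 150,528 bytes
print(fp32_bytes / int8_bytes)  # 4.0
```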

10. Previous releases included benchmarks on googlenet-v1. Why are there no longer benchmarks for this neural network model?

We replaced googlenet-v1 with resnet-18-pytorch due to changes in developer usage. The public resnet-18 model is widely used by developers for image classification. Like googlenet-v1, this pre-optimized model was trained on the ImageNet database. Both googlenet-v1 and resnet-18 will remain part of the Open Model Zoo, and developers are encouraged to use resnet-18-pytorch for image classification use cases.

For more complete information about performance and benchmark results, visit www.intel.com/benchmarks and see the Optimization Notice and Legal Information.