Convert & Optimize

Tutorials that explain how to optimize and quantize models with OpenVINO tools.

- 127-tensorflow-bit-image-classification-nncf-quantization: BiT Image Classification OpenVINO IR model Quantization with NNCF. (GitHub)
- 126-tensorflow-hub: Convert TensorFlow Hub models to OpenVINO Intermediate Representation (IR). (GitHub, Binder, Colab)
- 125-lraspp-segmentation: Semantic segmentation with LRASPP MobileNet v3 and OpenVINO. (GitHub, Binder, Colab)
- 125-convnext-classification: Classification with ConvNeXt and OpenVINO. (GitHub, Binder, Colab)
- 124-hugging-face-hub: Hugging Face Model Hub with OpenVINO™. (GitHub, Binder, Colab)
- 123-detectron2-to-openvino: Convert Detectron2 Models to OpenVINO™. (GitHub, Binder, Colab)
- 122-yolov8-quantization-with-accuracy-control: Convert and Optimize YOLOv8 with OpenVINO™. (GitHub)
- 122-speech-recognition-quantization-wav2vec2: Quantize Speech Recognition Models with accuracy control using NNCF PTQ API. (GitHub)
- 121-legacy-mo-convert-to-openvino: Learn about the legacy OpenVINO™ model conversion API (Model Optimizer). (GitHub)
- 121-convert-to-openvino: Learn about model conversion in OpenVINO™. (GitHub, Binder, Colab)
- 120-tensorflow-object-detection-to-openvino: Convert TensorFlow Object Detection models to OpenVINO IR. (GitHub, Binder, Colab)
- 119-tflite-to-openvino: Convert TensorFlow Lite models to OpenVINO IR. (GitHub, Colab)
- 118-optimize-preprocessing: Improve performance of the image preprocessing step. (GitHub, Colab)
- 117-model-server: Introduction to OpenVINO™ Model Server. (GitHub)
- 116-sparsity-optimization: Improve performance of sparse Transformer models. (GitHub, Colab)
- 115-async-api: Use asynchronous execution to improve data pipelining. (GitHub, Binder, Colab)
- 113-image-classification-quantization: Quantize a MobileNet image classification model. (GitHub, Binder, Colab)
- 112-pytorch-post-training-quantization-nncf: Use Neural Network Compression Framework (NNCF) to quantize a PyTorch model in post-training mode (without model fine-tuning). (GitHub)
- 111-yolov5-quantization-migration: Migrate a YOLOv5 quantization pipeline from the POT API to Neural Network Compression Framework (NNCF).
- 110-ct-segmentation-quantize-nncf: Quantize a kidney segmentation model and show live inference. (GitHub)
- 110-ct-scan-live-inference: Live inference of a kidney segmentation model and benchmark CT-scan data with OpenVINO. (GitHub, Binder)
- 109-throughput-tricks: Performance tricks for throughput mode in OpenVINO™. (GitHub)
- 109-latency-tricks: Performance tricks for latency mode in OpenVINO™. (GitHub)
- 108-gpu-device: Working with GPUs in OpenVINO™. (GitHub)
- 107-speech-recognition-quantization-data2vec: Optimize and quantize a pre-trained Data2Vec speech model. (GitHub, Colab)
- 107-speech-recognition-quantization-wav2vec2: Optimize and quantize a pre-trained Wav2Vec2 speech model. (GitHub)
- 106-auto-device: Demonstrates how to use the AUTO device. (GitHub, Binder, Colab)
- 105-language-quantize-bert: Optimize and quantize a pre-trained BERT model. (GitHub, Colab)
- 104-model-tools: Download, convert and benchmark models from Open Model Zoo. (GitHub, Binder, Colab)
- 103-paddle-onnx-to-openvino: Convert PaddlePaddle models to OpenVINO IR.
- 102-pytorch-to-openvino: Convert PyTorch models to OpenVINO IR. (GitHub, Colab)
- 102-pytorch-onnx-to-openvino: Convert PyTorch models to ONNX, then to OpenVINO IR. (GitHub)
- 101-tensorflow-classification-to-openvino: Convert TensorFlow models to OpenVINO IR. (GitHub, Binder, Colab)