Classification with ConvNeXt and OpenVINO#

This Jupyter notebook can be launched after a local installation only.


The torchvision.models subpackage contains definitions of models for addressing different tasks, including: image classification, pixelwise semantic segmentation, object detection, instance segmentation, person keypoint detection, video classification, and optical flow. Throughout this notebook we will show how to use one of them.

The ConvNeXt model is based on the A ConvNet for the 2020s paper. The outcome of this exploration is a family of pure ConvNet models dubbed ConvNeXt. Constructed entirely from standard ConvNet modules, ConvNeXts compete favorably with Transformers in terms of accuracy and scalability, achieving 87.8% ImageNet top-1 accuracy and outperforming Swin Transformers on COCO detection and ADE20K segmentation, while maintaining the simplicity and efficiency of standard ConvNets. The torchvision.models subpackage contains several pretrained ConvNeXt models. In this tutorial we will use the ConvNeXt Tiny model.

Table of contents:#

- Prerequisites
- Get a test image
- Get a pretrained model
- Define preprocessing and prepare input data
- Use the original model to run an inference
- Convert the model to OpenVINO Intermediate Representation format
- Use the OpenVINO IR model to run an inference

Prerequisites#

%pip install -q --extra-index-url https://download.pytorch.org/whl/cpu torch torchvision
%pip install -q  "openvino>=2023.1.0"
Note: you may need to restart the kernel to use updated packages.

Get a test image#

First of all, let's get a test image from an open dataset.

import requests

from torchvision.io import read_image
import torchvision.transforms as transforms


img_path = "cats_image.jpeg"
r = requests.get("https://huggingface.co/datasets/huggingface/cats-image/resolve/main/cats_image.jpeg")

with open(img_path, "wb") as f:
    f.write(r.content)
image = read_image(img_path)
display(transforms.ToPILImage()(image))
(Output: the downloaded test image is displayed.)

Get a pretrained model#

Torchvision provides a mechanism for listing and retrieving available models.

import torchvision.models as models

# List available models
all_models = models.list_models()
# List of models by type. Classification models are in the parent module.
classification_models = models.list_models(module=models)

print(classification_models)
['alexnet', 'convnext_base', 'convnext_large', 'convnext_small', 'convnext_tiny', 'densenet121', 'densenet161', 'densenet169', 'densenet201', 'efficientnet_b0', 'efficientnet_b1', 'efficientnet_b2', 'efficientnet_b3', 'efficientnet_b4', 'efficientnet_b5', 'efficientnet_b6', 'efficientnet_b7', 'efficientnet_v2_l', 'efficientnet_v2_m', 'efficientnet_v2_s', 'googlenet', 'inception_v3', 'maxvit_t', 'mnasnet0_5', 'mnasnet0_75', 'mnasnet1_0', 'mnasnet1_3', 'mobilenet_v2', 'mobilenet_v3_large', 'mobilenet_v3_small', 'regnet_x_16gf', 'regnet_x_1_6gf', 'regnet_x_32gf', 'regnet_x_3_2gf', 'regnet_x_400mf', 'regnet_x_800mf', 'regnet_x_8gf', 'regnet_y_128gf', 'regnet_y_16gf', 'regnet_y_1_6gf', 'regnet_y_32gf', 'regnet_y_3_2gf', 'regnet_y_400mf', 'regnet_y_800mf', 'regnet_y_8gf', 'resnet101', 'resnet152', 'resnet18', 'resnet34', 'resnet50', 'resnext101_32x8d', 'resnext101_64x4d', 'resnext50_32x4d', 'shufflenet_v2_x0_5', 'shufflenet_v2_x1_0', 'shufflenet_v2_x1_5', 'shufflenet_v2_x2_0', 'squeezenet1_0', 'squeezenet1_1', 'swin_b', 'swin_s', 'swin_t', 'swin_v2_b', 'swin_v2_s', 'swin_v2_t', 'vgg11', 'vgg11_bn', 'vgg13', 'vgg13_bn', 'vgg16', 'vgg16_bn', 'vgg19', 'vgg19_bn', 'vit_b_16', 'vit_b_32', 'vit_h_14', 'vit_l_16', 'vit_l_32', 'wide_resnet101_2', 'wide_resnet50_2']
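
To narrow this list down to the ConvNeXt family, a simple filter over the returned names works (a minimal sketch in plain Python):

# Keep only the ConvNeXt variants from the classification list
convnext_models = [name for name in classification_models if name.startswith("convnext")]
print(convnext_models)  # ['convnext_base', 'convnext_large', 'convnext_small', 'convnext_tiny']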

We will use convnext_tiny. To get a pretrained model, just use models.get_model("convnext_tiny", weights='DEFAULT') or the dedicated method of torchvision.models for this model. Using the default weights is equivalent to ConvNeXt_Tiny_Weights.IMAGENET1K_V1. If you don’t specify weights or pass weights=None, the model will be randomly initialized. To get all available weights for the model, call weights_enum = models.get_model_weights("convnext_tiny"); there is only one for this model. You can find more information on how to initialize pre-trained models here.
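
As a quick illustration of the two APIs mentioned above (a minimal sketch; the exact weights printed depend on your torchvision version):

# Retrieve a model by name with its default (best available) weights
model_by_name = models.get_model("convnext_tiny", weights="DEFAULT")

# Enumerate all weights registered for this model
weights_enum = models.get_model_weights("convnext_tiny")
print(list(weights_enum))  # e.g. [ConvNeXt_Tiny_Weights.IMAGENET1K_V1]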

model = models.convnext_tiny(weights=models.ConvNeXt_Tiny_Weights.DEFAULT)

Define preprocessing and prepare input data#

You can use torchvision.transforms to build the preprocessing yourself, or use the preprocessing transforms bundled with the model weights.

import torch


preprocess = models.ConvNeXt_Tiny_Weights.DEFAULT.transforms()

input_data = preprocess(image)
input_data = torch.stack([input_data], dim=0)
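
For reference, a roughly equivalent pipeline can be assembled manually with torchvision.transforms. The resize/crop sizes and normalization constants below are assumptions based on the standard ImageNet recipe, so in practice prefer the transforms() bundled with the weights:

# Hypothetical manual equivalent of the bundled preprocessing
manual_preprocess = transforms.Compose(
    [
        transforms.ConvertImageDtype(torch.float32),  # uint8 [0, 255] -> float [0, 1]
        transforms.Resize(236),  # assumed resize size
        transforms.CenterCrop(224),  # assumed crop size
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ]
)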

Use the original model to run an inference#

model.eval()  # switch to inference mode (disables stochastic depth/dropout)
with torch.no_grad():
    outputs = model(input_data)

And print the results:

# download class number to class label mapping
imagenet_classes_file_path = "imagenet_2012.txt"
r = requests.get(
    url="https://storage.openvinotoolkit.org/repositories/openvino_notebooks/data/data/datasets/imagenet/imagenet_2012.txt",
)

with open(imagenet_classes_file_path, "w") as f:
    f.write(r.text)

with open(imagenet_classes_file_path) as f:
    imagenet_classes = f.read().splitlines()


def print_results(outputs: torch.Tensor):
    _, predicted_class = outputs.max(1)
    predicted_probability = torch.softmax(outputs, dim=1)[0, predicted_class].item()

    print(f"Predicted Class: {predicted_class.item()}")
    print(f"Predicted Label: {imagenet_classes[predicted_class.item()]}")
    print(f"Predicted Probability: {predicted_probability}")
print_results(outputs)
Predicted Class: 281
Predicted Label: n02123045 tabby, tabby cat
Predicted Probability: 0.5808374285697937
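
If you want more than the single best class, torch.topk gives the top-k predictions (a small sketch reusing outputs and imagenet_classes from above):

# Show the five most likely classes with their probabilities
probabilities = torch.softmax(outputs, dim=1)[0]
top5_prob, top5_idx = torch.topk(probabilities, 5)
for prob, idx in zip(top5_prob, top5_idx):
    print(f"{imagenet_classes[idx]}: {prob.item():.4f}")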

Convert the model to OpenVINO Intermediate Representation format#

OpenVINO supports PyTorch models through conversion to the OpenVINO Intermediate Representation (IR) format. To take advantage of OpenVINO optimization tools and features, the model should be converted using the OpenVINO Converter tool (OVC). The openvino.convert_model function provides a Python API for OVC usage. The function returns an instance of the OpenVINO Model class, which is ready for use in the Python interface. It can also be saved on disk using openvino.save_model for future execution.

from pathlib import Path

import openvino as ov


ov_model_xml_path = Path("models/ov_convnext_model.xml")

if not ov_model_xml_path.exists():
    ov_model_xml_path.parent.mkdir(parents=True, exist_ok=True)
    converted_model = ov.convert_model(model, example_input=torch.randn(1, 3, 224, 224))
    # serialize the converted model to OpenVINO IR (.xml + .bin) for later reuse
    ov.save_model(converted_model, ov_model_xml_path)
else:
    print(f"IR model {ov_model_xml_path} already exists.")

When the openvino.save_model function is used, an OpenVINO model is serialized in the file system as two files with .xml and .bin extensions. This pair of files is called the OpenVINO Intermediate Representation format (OpenVINO IR, or just IR) and is useful for efficient model deployment. OpenVINO IR can be loaded into another application for inference using the openvino.Core.read_model function.
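
For example, loading the saved IR back in another application could look like this (a minimal sketch assuming the path used above):

# In a new application/process: read the serialized IR back from disk
core = ov.Core()
ov_model = core.read_model(ov_model_xml_path)  # the matching .bin file is found automatically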

Select a device from the dropdown list for running inference using OpenVINO:

import ipywidgets as widgets

core = ov.Core()
device = widgets.Dropdown(
    options=core.available_devices + ["AUTO"],
    value="AUTO",
    description="Device:",
    disabled=False,
)

device
Dropdown(description='Device:', index=1, options=('CPU', 'AUTO'), value='AUTO')

compiled_model = core.compile_model(ov_model_xml_path, device_name=device.value)
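
Before running inference you can inspect the compiled model’s interface (a small sketch; the exact names and shapes depend on the converted model):

# Inspect the input and output ports of the compiled model
print(compiled_model.input(0).any_name, compiled_model.input(0).partial_shape)
print(compiled_model.output(0).any_name, compiled_model.output(0).partial_shape)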

Use the OpenVINO IR model to run an inference#

outputs = compiled_model(input_data)[0]
print_results(torch.from_numpy(outputs))
Predicted Class: 281
Predicted Label: n02123045 tabby, tabby cat
Predicted Probability: 0.5664422512054443
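
The PyTorch and OpenVINO probabilities differ slightly because of numerical differences introduced by conversion. A quick sanity check could compare the raw logits directly (a sketch assuming model, compiled_model, and input_data are still in scope):

import numpy as np

# Re-run both models and compare their logits
with torch.no_grad():
    torch_logits = model(input_data).numpy()
ov_logits = compiled_model(input_data)[0]
print("max absolute difference:", np.max(np.abs(torch_logits - ov_logits)))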