Legacy Features and Components

Since OpenVINO has grown rapidly in recent years, a number of its features and components have been replaced by other solutions. Some of them are still supported to ensure OpenVINO users have enough time to adjust their projects before the features are fully discontinued.

This section will give you an overview of these major changes and tell you how you can proceed to get the best experience and results with the current OpenVINO offering.

OpenVINO Development Tools Package
New solution: OpenVINO Runtime includes all supported components
Old solution: discontinuation planned for OpenVINO 2025.0

OpenVINO Development Tools used to be a separate OpenVINO package with tools for advanced operations on models, such as Model conversion API, Benchmark Tool, Accuracy Checker, Annotation Converter, Post-Training Optimization Tool, and Open Model Zoo tools. Most of these tools have since been removed, replaced by other solutions, or moved to the OpenVINO Runtime package.
Model Optimizer / Conversion API
New solution: Direct model support and OpenVINO Converter (OVC)
Old solution: Legacy Conversion API discontinuation planned for OpenVINO 2025.0

The role of Model Optimizer, and later of the Conversion API, was largely reduced when all major model frameworks became supported directly. For explicit conversion of model files, it has been replaced with a more lightweight and efficient solution, the OpenVINO Converter (launched with OpenVINO 2023.1).
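
As a quick illustration, the snippet below is a minimal sketch of explicit conversion with the current tooling; the ONNX file name is a placeholder and the openvino package is assumed to be installed.

import openvino as ov

# Convert a framework model file (here an ONNX file) to an ov.Model in memory.
ov_model = ov.convert_model("model.onnx")

# Optionally serialize it to OpenVINO IR for later reuse.
ov.save_model(ov_model, "model.xml")

The same conversion is also available from the command line through the ovc tool (for example, ovc model.onnx).
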
Open Model Zoo
New solution: users are encouraged to use public model repositories
Old solution: discontinuation planned for OpenVINO 2025.0

Open Model Zoo provided a collection of models prepared for use with OpenVINO, together with a small set of tools automating parts of the workflow, such as model download and conversion. Since these tools have been mostly replaced by other solutions, and several other model repositories have recently grown in size and popularity, Open Model Zoo will no longer be maintained. You may still use its resources until they are fully removed.
As for public model databases, Hugging Face has become the recommended model source for OpenVINO.
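
For instance, models hosted on Hugging Face can be exported to OpenVINO through the Optimum Intel integration. The sketch below assumes the optimum-intel and transformers packages are installed and uses an illustrative model name.

from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "gpt2"  # illustrative model name

# export=True converts the original weights to OpenVINO format on the fly.
model = OVModelForCausalLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Hello, OpenVINO!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0]))
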
Multi-Device Execution
New solution: Automatic Device Selection
Old solution: Legacy Multi-Device Execution discontinuation planned for OpenVINO 2025.0

The behavior and results of the Multi-Device Execution mode are covered by the CUMULATIVE_THROUGHPUT option of Automatic Device Selection. The only difference is that CUMULATIVE_THROUGHPUT uses the devices selected by AUTO, so listing devices manually is not mandatory, while with MULTI the devices had to be specified explicitly before inference.
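
As a rough sketch of the migration (the model path and device list are placeholders), a former MULTI configuration maps to AUTO with the CUMULATIVE_THROUGHPUT performance hint:

import openvino as ov
import openvino.properties.hint as hints

core = ov.Core()
model = core.read_model("model.xml")

# Old approach: devices had to be listed explicitly.
# compiled_model = core.compile_model(model, "MULTI:CPU,GPU")

# New approach: AUTO selects the devices; listing them is optional.
compiled_model = core.compile_model(
    model,
    "AUTO",
    {hints.performance_mode: hints.PerformanceMode.CUMULATIVE_THROUGHPUT},
)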

Discontinued:

Apache MXNet, Caffe, and Kaldi model formats
New solution: conversion to ONNX via external tools
Old solution: model support discontinued with OpenVINO 2024.0
Post-training Optimization Tool (POT)
New solution: Neural Network Compression Framework (NNCF) now offers the same functionality
Old solution: POT discontinued with OpenVINO 2024.0
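
A minimal sketch of post-training quantization with NNCF is shown below; the model path, data source, and transform function are placeholders.

import nncf
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")

# Wrap a calibration data source; transform_fn maps each item to model inputs.
calibration_dataset = nncf.Dataset(data_source, transform_fn)

# Run 8-bit post-training quantization, covering POT's former default flow.
quantized_model = nncf.quantize(model, calibration_dataset)
ov.save_model(quantized_model, "quantized_model.xml")
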
Inference API 1.0
New solution: API 2.0 launched in OpenVINO 2022.1
Old solution: discontinued with OpenVINO 2024.0
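
For reference, a minimal inference flow with API 2.0 looks roughly like this; the model path, device name, and input shape are placeholders.

import numpy as np
import openvino as ov

core = ov.Core()
compiled_model = core.compile_model("model.xml", "CPU")

# Placeholder input; in practice, shape and dtype must match the model's input.
input_data = np.zeros((1, 3, 224, 224), dtype=np.float32)

# CompiledModel objects are callable and run a synchronous inference.
results = compiled_model([input_data])
output = results[compiled_model.output(0)]
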
Compile tool
New solution: the tool is no longer needed
Old solution: discontinued with OpenVINO 2023.0
If you need to compile a model for inference on a specific device and export the compiled blob, use one of the following snippets.

Python:

import openvino as ov

core = ov.Core()

# model_path, device, and properties stand in for your own values.
compiled_model = core.compile_model(model_path, device, properties)

# export_model() returns the compiled blob so it can be saved for later reuse.
output_stream = compiled_model.export_model()

C++:

#include <fstream>
#include <openvino/openvino.hpp>

ov::Core core;
ov::CompiledModel model = core.compile_model("modelPath", "deviceName");

// Open the output file in binary mode and write the compiled blob into it.
std::ofstream stream("compiled_model.blob", std::ios::binary);
model.export_model(stream);
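
For completeness, a blob exported this way can later be loaded back without recompilation via core.import_model; a minimal Python sketch, reusing the placeholders above:

# Restore the compiled model from the previously exported blob.
restored_model = core.import_model(output_stream, device)
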
TensorFlow integration (OVTF)
New solution: Direct model support and OpenVINO Converter (OVC)
Old solution: discontinued in OpenVINO 2023.0

OpenVINO now features native TensorFlow support, with no need for explicit model conversion.
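
For example, a TensorFlow SavedModel directory can be read directly; the path and device name below are placeholders.

import openvino as ov

core = ov.Core()

# Read a TensorFlow SavedModel directory (or a frozen .pb file) directly.
model = core.read_model("path/to/saved_model")
compiled_model = core.compile_model(model, "CPU")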