OpenVINO™ Ecosystem Overview

OpenVINO™ is not just one tool. It is an expansive ecosystem of utilities that together provide a comprehensive workflow for developing deep learning solutions. Learn more about each of them to realize the full potential of the OpenVINO™ Toolkit.

Neural Network Compression Framework (NNCF)

A suite of advanced algorithms for optimizing neural network inference with minimal accuracy drop. NNCF applies quantization, filter pruning, binarization, and sparsity algorithms to PyTorch and TensorFlow models during training.


OpenVINO™ Training Extensions

A convenient environment to train Deep Learning models and convert them using the OpenVINO™ toolkit for optimized inference.


OpenVINO™ Security Add-on

A solution for Model Developers and Independent Software Vendors to use secure packaging and secure model execution.


Dataset Management Framework (Datumaro)

A framework and CLI tool to build, transform, and analyze datasets.


Compile Tool

The Compile Tool is now deprecated. If you need to compile a model for inference on a specific device, use the following code instead:

Python:

from openvino.runtime import Core

core = Core()
compiled_model = core.compile_model(model_path, device_name, properties)

# export_model() returns the compiled blob, ready to be written to a file
compiled_blob = compiled_model.export_model()

C++:

ov::Core core;
std::stringstream stream;

ov::CompiledModel model = core.compile_model("modelPath", "deviceName");
model.export_model(stream);

To learn which device supports the import / export functionality, see the feature support matrix.

For more details on preprocessing steps, refer to the Optimize Preprocessing guide. To compile a model with advanced preprocessing capabilities, refer to Use Case - Integrate and Save Preprocessing Steps Into OpenVINO IR, which shows how to embed all preprocessing in the compiled blob.

DL Workbench

A web-based tool for deploying deep learning models. Built on the core of OpenVINO and equipped with a graphical user interface, DL Workbench is a great way to explore the possibilities of the OpenVINO workflow: import, analyze, optimize, and build your pre-trained models. You can do all that by visiting Intel® Developer Cloud and launching DL Workbench online.

OpenVINO™ integration with TensorFlow (OVTF)

OpenVINO™ Integration with TensorFlow will no longer be supported as of OpenVINO release 2023.0. As part of the 2023.0 release, OpenVINO will feature a significantly enhanced TensorFlow user experience within native OpenVINO, without the need for offline model conversion. Learn more.