Install OpenVINO™ Development Tools¶
OpenVINO Development Tools is a set of utilities that make it easy to develop and optimize models and applications for OpenVINO. It provides the following tools:
Model conversion API
Benchmark Tool
Accuracy Checker and Annotation Converter
Post-Training Optimization Tool
Model Downloader and other Open Model Zoo tools
The instructions on this page show how to install OpenVINO Development Tools. If you are a Python developer, it only takes a few simple steps to install the tools from PyPI. If you are developing in C/C++, OpenVINO Runtime must be installed separately before installing OpenVINO Development Tools.
In both cases, Python 3.8 - 3.11 needs to be installed on your machine before starting.
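A quick way to confirm that your interpreter falls in the supported range is a small check (a minimal sketch; the 3.8 - 3.11 range comes from the requirement above):

```python
import sys

def is_supported(version=sys.version_info):
    """Return True if the Python version is in the range supported by openvino-dev (3.8 - 3.11)."""
    return (3, 8) <= (version[0], version[1]) <= (3, 11)

print(is_supported())
```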
Note
Starting with the 2022.1 release, OpenVINO™ Development Tools can be installed only via PyPI.
For Python Developers¶
If you are a Python developer, follow the steps in the Installing OpenVINO Development Tools section on this page to install it. Installing OpenVINO Development Tools will also install OpenVINO Runtime as a dependency, so you don’t need to install OpenVINO Runtime separately. This option is recommended for new users.
For C/C++ Developers¶
If you are a C/C++ developer, you must first install OpenVINO Runtime separately to set up the C/C++ libraries, sample code, and dependencies for building applications with OpenVINO. These files are not included with the PyPI distribution. See the Selector Tool page to install OpenVINO Runtime from an archive file for your operating system.
Once OpenVINO Runtime is installed, you may install OpenVINO Development Tools for access to tools like mo, Model Downloader, Benchmark Tool, and other utilities that will help you optimize your model and develop your application. Follow the steps in the Installing OpenVINO Development Tools section on this page to install it.
Installing OpenVINO™ Development Tools¶
Follow these step-by-step instructions to install OpenVINO Development Tools on your computer. There are two installation options: into an existing environment that already contains the deep learning framework used for model training or creation, or into a new, clean environment.
Installation into an Existing Environment with the Source Deep Learning Framework¶
To install OpenVINO Development Tools (see the Install the Package section of this article) into an existing environment with the deep learning framework used for the model training or creation, run the following command:
pip install openvino-dev
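Before running this command in an existing environment, you may want to check whether OpenVINO is already present. One way to do that from Python, using only the standard library (a minimal sketch; "openvino" is the top-level module that the openvino-dev package provides):

```python
import importlib.util

def package_installed(module_name: str) -> bool:
    """Return True if a module is importable in the current environment."""
    return importlib.util.find_spec(module_name) is not None

if not package_installed("openvino"):
    print("OpenVINO not found; run: pip install openvino-dev")
```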
Installation in a New Environment¶
If you do not have an environment with a deep learning framework for the input model, or you encounter compatibility issues between OpenVINO and your version of the deep learning framework, you may install OpenVINO Development Tools with validated versions of the frameworks into a new environment.
Step 1. Set Up Python Virtual Environment¶
Create a virtual Python environment to avoid dependency conflicts. To create a virtual environment, use the following command:
Windows:
python -m venv openvino_env
Linux and macOS:
python3 -m venv openvino_env
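The same environment can also be created from Python itself with the standard library's venv module, which is what the commands above invoke (a minimal, cross-platform sketch):

```python
import venv
from pathlib import Path

def create_env(env_dir: str) -> Path:
    """Create a virtual environment with pip available, mirroring `python -m venv`."""
    venv.EnvBuilder(with_pip=True).create(env_dir)
    return Path(env_dir)

# create_env("openvino_env") produces the same layout as the commands above
```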
Step 2. Activate Virtual Environment¶
Activate the newly created Python virtual environment by issuing this command:
Windows:
openvino_env\Scripts\activate
Linux and macOS:
source openvino_env/bin/activate
Important
The above command must be re-run every time a new command terminal window is opened.
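Activation works by putting the environment's scripts directory at the front of PATH; the per-OS location of the activation script can be computed like this (a small illustrative sketch, not part of the official tooling):

```python
import os
from pathlib import Path

def activation_script(env_dir: str) -> Path:
    """Return the path of the venv activation script for the current OS."""
    if os.name == "nt":                           # Windows
        return Path(env_dir) / "Scripts" / "activate"
    return Path(env_dir) / "bin" / "activate"     # Linux and macOS

print(activation_script("openvino_env"))
```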
Step 3. Set Up and Update PIP to the Highest Version¶
Make sure pip is installed in your environment and upgrade it to the latest version by issuing the following command:
python -m pip install --upgrade pip
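You can confirm the upgrade took effect from Python without shelling out, using importlib.metadata (available in Python 3.8+; a minimal sketch):

```python
from importlib.metadata import version

def pip_version() -> tuple:
    """Return the installed pip version as a tuple of ints, e.g. (23, 1, 2)."""
    return tuple(int(part) for part in version("pip").split(".")[:3] if part.isdigit())

print(pip_version())
```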
Step 4. Install the Package¶
To install and configure the components of the development package together with validated versions of specific frameworks, use the commands below.
pip install openvino-dev[extras]
where the extras parameter specifies the source deep learning framework for the input model and is one or more of the following values, separated by commas: caffe, kaldi, mxnet, onnx, pytorch, tensorflow, tensorflow2.
For example, to install and configure dependencies required for working with TensorFlow 2.x and ONNX models, use the following command:
pip install openvino-dev[tensorflow2,onnx]
Note
Model conversion API support for the TensorFlow 1.x environment has been deprecated. Use the tensorflow2 parameter to install a TensorFlow 2.x environment that can convert both TensorFlow 1.x and 2.x models. If your model is not compatible with the TensorFlow 2.x environment, use the tensorflow parameter to install the TensorFlow 1.x environment. The TF 1.x environment is provided only for legacy compatibility reasons.
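If you generate the install command from a setup script, the extras string can be validated against the values listed above before calling pip (the allowed set is taken from this page; a minimal sketch):

```python
# Extras values listed on this page
ALLOWED_EXTRAS = {"caffe", "kaldi", "mxnet", "onnx", "pytorch", "tensorflow", "tensorflow2"}

def build_requirement(extras):
    """Build a pip requirement string like 'openvino-dev[tensorflow2,onnx]'."""
    unknown = set(extras) - ALLOWED_EXTRAS
    if unknown:
        raise ValueError(f"unknown extras: {sorted(unknown)}")
    return f"openvino-dev[{','.join(extras)}]"

print(build_requirement(["tensorflow2", "onnx"]))  # openvino-dev[tensorflow2,onnx]
```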
For more details on the openvino-dev PyPI package, see pypi.org.
Step 5. Test the Installation¶
To verify the package is properly installed, run the command below (this may take a few seconds):
mo -h
You will see the help message for mo if the installation finished successfully. If you get an error, refer to the Troubleshooting Guide for possible solutions.
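The same check can be scripted: shutil.which reports whether the mo entry point is on PATH in the active environment (a minimal sketch using the standard library):

```python
import shutil

def tool_on_path(name: str) -> bool:
    """Return True if an executable with this name is found on PATH."""
    return shutil.which(name) is not None

if not tool_on_path("mo"):
    print("mo not found - is the virtual environment activated?")
```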
Congratulations! You have finished installing OpenVINO Development Tools. Now you can start exploring OpenVINO’s functionality through example applications. See the “What’s Next?” section to learn more!
What’s Next?¶
Learn more about OpenVINO and use it in your own application by trying out some of these examples!
Get started with Python¶
Try the Python Quick Start Example to estimate depth in a scene using an OpenVINO monodepth model in a Jupyter Notebook inside your web browser.
Visit the Tutorials page for more Jupyter Notebooks to get you started with OpenVINO.
Get started with C++¶
Try the C++ Quick Start Example for step-by-step instructions on building and running a basic image classification C++ application.
Visit the Samples page for other C++ example applications to get you started with OpenVINO.
Learn OpenVINO Development Tools¶
Explore a variety of pre-trained deep learning models in the Open Model Zoo and deploy them in demo applications to see how they work.
Want to import a model from another framework and optimize its performance with OpenVINO? Visit the Convert a Model page.
Accelerate your model’s speed even further with quantization and other compression techniques using Neural Network Compression Framework (NNCF).
Benchmark your model’s inference speed with one simple command using the Benchmark Tool.
Additional Resources¶
For IoT Libraries & Code Samples, see the Intel® IoT Developer Kit.