NOTE: The Intel® Distribution of OpenVINO™ toolkit was formerly known as the Intel® Computer Vision SDK.
This guide applies to Ubuntu*, CentOS*, and Yocto* OSes. If you are using the Intel® Distribution of OpenVINO™ toolkit on Windows* OS, see the Installation Guide for Windows*. If you are using the Intel® Distribution of OpenVINO™ toolkit with Support for FPGA, see the Installation Guide for Linux* with Support for FPGA.
- All steps in this guide are required unless otherwise stated.
- In addition to the downloaded package, you must install dependencies and complete configuration steps.
Your installation is complete when you have finished all of the following steps:
The Intel® Distribution of OpenVINO™ toolkit quickly deploys applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNN), the toolkit extends computer vision (CV) workloads across Intel® hardware, maximizing performance. The Intel® Distribution of OpenVINO™ toolkit includes the Intel® Deep Learning Deployment Toolkit (Intel® DLDT).
The Intel® Distribution of OpenVINO™ toolkit for Linux*:
The following components are installed by default:
| Component | Description |
|---|---|
| Model Optimizer | This tool imports, converts, and optimizes models trained in popular frameworks (Caffe*, TensorFlow*, MXNet*, and ONNX*) into a format usable by Intel tools, especially the Inference Engine. |
| Inference Engine | The engine that runs a deep learning model. It includes a set of libraries for easy inference integration into your applications. |
| Drivers and runtimes for OpenCL™ version 2.1 | Enables OpenCL on the GPU/CPU for Intel® processors. |
| Intel® Media SDK | Offers access to hardware-accelerated video codecs and frame processing. |
| OpenCV* | OpenCV* community version compiled for Intel® hardware. Includes PVL libraries for computer vision. |
| OpenVX* | Intel's implementation of OpenVX* optimized for running on Intel® hardware (CPU, GPU, IPU). |
| Pre-trained models | A set of Intel's pre-trained models for learning and demo purposes or for developing deep learning software. |
| Sample Applications | A set of simple console applications demonstrating how to use the Inference Engine in your applications. For additional information about building and running the samples, refer to the Inference Engine Samples Overview. |
This guide covers the Linux* version of the Intel® Distribution of OpenVINO™ toolkit that does not include FPGA support. For the toolkit that includes FPGA support, see Installing the Intel® Distribution of OpenVINO™ toolkit for Linux* with FPGA Support.
This guide assumes you downloaded the OpenVINO toolkit for Linux* OS. If you do not have a copy of the toolkit package file, download the latest version and then return to this guide to proceed with the installation.
This guide assumes the package file was saved to the `~/Downloads` directory. If not, replace `~/Downloads` with the directory where the file is located. Open a terminal, go to that directory, and unpack the `.tgz` file you downloaded, as shown in the sketch below.
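A minimal sketch (the archive name is version-specific; substitute the file name you actually downloaded):

```sh
cd ~/Downloads
tar -xvzf l_openvino_toolkit_<version>.tgz
cd l_openvino_toolkit_<version>
```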
Install the external software dependencies by running the `install_cv_sdk_dependencies.sh` script. If the script does not support your OS, use the list of dependencies on the System Requirements online page instead.
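A sketch of running the dependency script, assuming you are still in the unpacked package directory from the previous step:

```sh
# -E preserves your environment variables (such as proxy settings) under sudo
sudo -E ./install_cv_sdk_dependencies.sh
```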
The dependencies are installed. Continue to the next section to install the Intel® Distribution of OpenVINO™ toolkit core components.
NOTE: The Model Optimizer has additional prerequisites that are addressed later in this document.
If you have a previous version of the Intel® Distribution of OpenVINO™ toolkit installed, rename or delete two directories:
To install the OpenVINO toolkit core components:
Choose one of the installation options below and run the related script with root or regular user privileges. The default installation directory depends on the privileges you choose. You can use either a GUI installation wizard or command-line instructions; the two are equivalent except that the command-line installer prompts for input in the terminal instead of presenting clickable screens.
Use only one of these options:
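A sketch of the two options (the script names are assumptions based on the package layout; run them from the directory where you unpacked the package):

```sh
# Option A: GUI installation wizard
sudo ./install_GUI.sh

# Option B: command-line installer
sudo ./install.sh
```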
If you used root privileges to run the installer, it installs the OpenVINO toolkit to `/opt/intel/computer_vision_sdk_<version>/`. For simplicity, a symbolic link to the latest installation is also created: `/opt/intel/computer_vision_sdk/`.
If you used regular user privileges to run the installer, it installs the OpenVINO toolkit to `/home/<user>/intel/computer_vision_sdk_<version>/`. For simplicity, a symbolic link to the latest installation is also created: `/home/<user>/intel/computer_vision_sdk/`.
If needed, click Customize to change the installation directory or the components you want to install:
Click Next to save the installation options and show the Installation summary screen.
The core components are installed. Continue to the next section to set environment variables.
You must update several environment variables before you can compile and run OpenVINO™ applications. Run the following script to temporarily set your environment variables:
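A sketch, assuming the default root installation path:

```sh
source /opt/intel/computer_vision_sdk/bin/setupvars.sh
```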
(Optional) The OpenVINO environment variables are removed when you close the shell. To set them permanently instead:
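One way to do this (a sketch, assuming the default installation path) is to source the script from your `~/.bashrc`:

```sh
# Initialize the OpenVINO environment in every new shell:
echo "source /opt/intel/computer_vision_sdk/bin/setupvars.sh" >> ~/.bashrc
```

Open a new terminal to verify the change. You should see: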
```
[setupvars.sh] OpenVINO environment initialized.
```
The environment variables are set. Continue to the next section to configure the Model Optimizer.
IMPORTANT: This section is required. You must configure the Model Optimizer for at least one framework. The Model Optimizer will fail if you do not complete the steps in this section.
The Model Optimizer is a key component of the OpenVINO toolkit. You cannot do inference on your trained model without running the model through the Model Optimizer. When you run a pre-trained model through the Model Optimizer, your output is an Intermediate Representation (IR) of the network. The IR is a pair of files that describe the whole model:
- `.xml`: Describes the network topology
- `.bin`: Contains the weights and biases binary data
The Inference Engine reads, loads, and infers the IR files, using a common API across the CPU, GPU, or VPU hardware.
The Model Optimizer is a Python*-based command-line tool (`mo.py`) located in `/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/`.
Use this tool on models trained with popular deep learning frameworks such as Caffe*, TensorFlow*, MXNet*, and ONNX* to convert them to an optimized IR format that the Inference Engine can use.
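For example, converting a Caffe* model might look like this (a sketch; the model path and output directory are hypothetical):

```sh
cd /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer
# Produces squeezenet1.1.xml and squeezenet1.1.bin in ~/ir
python3 mo.py --input_model ~/models/squeezenet1.1.caffemodel --output_dir ~/ir
```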
This section explains how to use scripts to configure the Model Optimizer either for all of the supported frameworks at the same time or for individual frameworks. If you want to manually configure the Model Optimizer instead of using scripts, see the Using Manual Configuration Process section in the Configuring the Model Optimizer document.
For more information about the Model Optimizer, see the Model Optimizer Developer Guide.
You can either configure the Model Optimizer for all supported frameworks at once, or for one framework at a time. Choose the option that best suits your needs. If you see error messages, make sure you installed all dependencies.
NOTE: If you did not install OpenVINO to the default installation directory, replace `/intel/` with the directory where you installed the software.
Option 1: Configure the Model Optimizer for all supported frameworks at the same time:
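A sketch (the script location and name are assumptions based on the toolkit's `install_prerequisites` directory; verify them in your installation):

```sh
cd /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/install_prerequisites
sudo ./install_prerequisites.sh
```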
Option 2: Configure the Model Optimizer for each framework separately:
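The per-framework scripts follow a similar pattern (a sketch; confirm the exact script names in the `install_prerequisites` directory):

```sh
cd /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/install_prerequisites
sudo ./install_prerequisites_caffe.sh   # Caffe*
sudo ./install_prerequisites_tf.sh      # TensorFlow*
sudo ./install_prerequisites_mxnet.sh   # MXNet*
sudo ./install_prerequisites_onnx.sh    # ONNX*
```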
The Model Optimizer is configured for one or more frameworks. You are ready to use two short demos to see the results of running the OpenVINO toolkit and to verify your installation was successful. The demo scripts are required since they perform additional configuration steps. Continue to the next section.
If you want to use a GPU or VPU, read through the Optional steps section.
IMPORTANT: This section is required. In addition to confirming that your installation was successful, the demo scripts perform additional steps, such as setting up your computer to use the Model Optimizer samples.
NOTE: To run the demo applications on Intel® Processor Graphics, Intel® Movidius™ Neural Compute Stick or Intel® Neural Compute Stick 2, make sure you completed the Additional Installation Steps first.
To learn more about the demo applications and for a detailed description of the pre-trained object detection and object recognition models they use, see the Overview of OpenVINO toolkit Pre-Trained Models page.
- The paths in this section assume you used the default installation directory. If you installed the software to a directory other than `/opt/intel/`, update the paths with your installation location.
- If you installed the product as a root user, you must switch to the root mode before you continue:
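One common way to switch to root mode (a sketch):

```sh
sudo -i
```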
The Image Classification demo uses the Model Optimizer to convert a SqueezeNet model to `.bin` and `.xml` Intermediate Representation (IR) files. The Inference Engine component uses these files.
For a brief description of the Intermediate Representation .bin and .xml files, see Configuring the Model Optimizer.
Run the demo as shown in the sketch below. The demo creates a working directory, converts the model, and then runs inference on the `car.png` image in the demo directory. When the demo completes, it prints the label and confidence for the top-10 categories.
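A sketch of running the demo (the script name is an assumption based on the demo directory layout; verify the exact name there):

```sh
cd /opt/intel/computer_vision_sdk/deployment_tools/demo
./demo_squeezenet_download_convert_run.sh
```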
This demo is complete. Continue to the next section to run the Inference Pipeline demo.
From `/opt/intel/computer_vision_sdk/deployment_tools/demo/`, run the Inference Pipeline demo, shown in the sketch below. The demo uses three pre-trained models to build an inference pipeline for vehicle recognition, in which vehicle attributes build on each other to narrow in on a specific attribute.
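A sketch of running it (the script name is an assumption; check the demo directory for the exact name):

```sh
cd /opt/intel/computer_vision_sdk/deployment_tools/demo
./demo_security_barrier_camera.sh
```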
For more information about the demo, see the Security Barrier Camera Sample.
When the demo completes, an image viewer window displays a picture similar to the following:
In this section, you saw a preview of the OpenVINO toolkit capabilities.
You have completed all the required installation, configuration, and build steps to work with your trained models using the CPU.
If you want to use GPU (Intel® Processor Graphics), VPU (Intel® Movidius™ Neural Compute Stick, Intel® Neural Compute Stick 2 or Intel® Vision Accelerator Design with Intel® Movidius™ VPUs), read through the next section for additional steps.
NOTE: If you are migrating from the Intel® Computer Vision SDK 2017 R3 Beta version to the Intel® Distribution of OpenVINO™ toolkit, read this information about porting your applications.
Read the Summary for your next steps.
Use these steps to prepare your computer to use Intel® Processor Graphics, Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, Intel® Movidius™ Neural Compute Stick or Intel® Neural Compute Stick 2.
NOTE: These steps are only required if you want to enable the toolkit components to utilize processor graphics (GPU) on your system.
NOTE: You can use a kernel at or above 4.14.
NOTE: Two command-line suggestions display:
- Add OpenCL user to video group
- Run a script to install the 4.14 kernel
Both suggestions are incorrect. Disregard them and continue.
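A sketch of the GPU driver installation (the `install_dependencies` location and script name are assumptions based on the toolkit layout; verify them in your installation):

```sh
cd /opt/intel/computer_vision_sdk/install_dependencies
sudo -E ./install_NEO_OCL_driver.sh
```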
NOTE: These steps are only required if you want to perform inference on Intel® Movidius™ NCS powered by the Intel® Movidius™ Myriad™ 2 VPU or Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X VPU. See also the Get Started page for Intel® Neural Compute Stick 2:
Alternatively, instead of running the commands above, you can use the `install_NCS_udev_rules.sh` script, which runs the same commands:
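A sketch, assuming the script lives in the toolkit's `install_dependencies` directory:

```sh
cd /opt/intel/computer_vision_sdk/install_dependencies
sudo -E ./install_NCS_udev_rules.sh
```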
NOTE: These steps are only required if you want to perform inference on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.
For Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, the following additional installation steps are required.
Check whether `/etc/modprobe.d/blacklist.conf` contains the line `blacklist i2c_i801` and comment it out if so:
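One way to check and comment out the line (a sketch):

```sh
# Comment out "blacklist i2c_i801" if it is present:
grep "^blacklist i2c_i801" /etc/modprobe.d/blacklist.conf \
  && sudo sed -i 's/^blacklist i2c_i801/# blacklist i2c_i801/' /etc/modprobe.d/blacklist.conf
```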
The drivers are now installed.
How do I solve the permission issue?
Check that the following udev rules exist:
Also make sure that the current user is included in the `users` group.
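A sketch for checking and fixing group membership:

```sh
groups                                  # list the current user's groups
sudo usermod -a -G users "$(whoami)"    # add the current user to the users group
```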
Cannot reset VPU device and cannot find any 0x20-0x27 (Raw data card with HW version Fab-B and before) I2C addresses on SMBUS (using i2c-tools)
Contact your motherboard vendor to make sure the SMBus pins are connected to the PCIe slot.
Get "Error: ipc_connection_linux_UDS : bind() failed" in hddldaemon log.
You may have run hddldaemon under another user. Run the command below and try again:
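A hypothetical cleanup sketch (the `/var/tmp/hddl_*` location for the daemon's leftover IPC files is an assumption):

```sh
sudo rm -rf /var/tmp/hddl_*
```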
Get "I2C bus: SMBus I801 adapter at not found!" in hddldaemon log
Run the following command to check whether the SMBus I801 adapter can be found:
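For example, with i2c-tools installed:

```sh
i2cdetect -l | grep -i i801   # lists I2C buses; look for the SMBus I801 adapter
```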
Get "open /dev/ion failed!" in hddldaemon log
Check if the `myd_ion` kernel module is installed by running the following command:
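A sketch of such a check:

```sh
lsmod | grep myd_ion
```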
If you do not see any output from the command, reinstall the `myd_ion` kernel module.
Constantly get "_name_mapping open failed err=2,No such file or directory" in hddldaemon log
Check if the `myd_vsc` kernel module is installed by running the following command:
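Similarly:

```sh
lsmod | grep myd_vsc
```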
If you do not see any output from the command, reinstall the `myd_vsc` kernel module.
Get "Required key not available" when trying to install the
Run the following commands:
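This error typically indicates that UEFI Secure Boot is blocking unsigned kernel modules; a hypothetical workaround sketch using `mokutil` on Ubuntu* (re-enable validation once the modules are signed or removed):

```sh
sudo apt-get install mokutil
sudo mokutil --disable-validation   # reboot and confirm the change in the MOK manager screen
```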
In this document, you installed the Intel® Distribution of OpenVINO™ toolkit and the external dependencies. In addition, you might have installed software and drivers that will let you use GPU or VPU to infer your models.
After installing the software, you configured the Model Optimizer for one or more frameworks and ran two demo applications that compiled the extensions library.
You are now ready to learn more about converting models trained with popular deep learning frameworks to the Inference Engine format, following the links below, or you can move on to running the sample applications.
To learn more about converting models, go to: