Get Started with OpenVINO™ Toolkit on Linux*

The OpenVINO™ toolkit optimizes and runs Deep Learning Neural Network models on Intel® hardware. This guide helps you get started with the OpenVINO™ toolkit you installed on a Linux* operating system.

In this guide, you will learn about the toolkit components and directory structure, run the demo scripts to see the OpenVINO™ workflow in action, and then walk through the same workflow yourself using the code samples and demo applications.

OpenVINO™ toolkit Components

The toolkit consists of three primary components: the Model Optimizer, which converts trained models into the Intermediate Representation; the Intermediate Representation (IR), the pair of .xml and .bin files produced by the Model Optimizer; and the Inference Engine, the runtime library that executes the IR on Intel® hardware.

In addition, demo scripts, code samples, and demo applications are provided to help you get up and running with the toolkit.

Intel® Distribution of OpenVINO™ toolkit Installation and Deployment Tools Directory Structure

This guide assumes you completed all Intel® Distribution of OpenVINO™ toolkit installation and configuration steps. If you have not yet installed and configured the toolkit, see Install Intel® Distribution of OpenVINO™ toolkit for Linux*.

By default, the installation directory is /opt/intel/openvino, but the installation gave you the option to use the directory of your choice. If you installed the Intel® Distribution of OpenVINO™ toolkit to a directory other than the default, replace /opt/intel with the directory in which you installed the software.

The primary tools for deploying your models and applications are installed to the /opt/intel/openvino/deployment_tools directory.

The Intel® Distribution of OpenVINO™ toolkit directory structure:

Directory         Description
demo/ Demo scripts. Demonstrate pipelines for inference scenarios, automatically perform steps, and print detailed output to the console. For more information, see the Use the Demo Scripts to Learn the Workflow section.
inference_engine/ Inference Engine directory. Contains Inference Engine API binaries and source files, samples and extensions source files, and resources like hardware drivers.
~intel_models/ Symbolic link to the intel_models subfolder of the open_model_zoo folder.
      include/ Inference Engine header files. For API documentation, see the Inference Engine API Reference.
      lib/ Inference Engine binaries.
      samples/ Inference Engine samples. Contains source code for C++ and Python* samples and build scripts. See the Inference Engine Samples Overview.
      src/ Source files for CPU extensions.
model_optimizer/ Model Optimizer directory. Contains configuration scripts, scripts to run the Model Optimizer and other files. See the Model Optimizer Developer Guide.
open_model_zoo/ Open Model Zoo directory. Includes the Model Downloader tool to download pre-trained OpenVINO and public models, OpenVINO models documentation, demo applications and the Accuracy Checker tool to evaluate model accuracy.
      demos/ Demo applications for inference scenarios. Also includes documentation and build scripts.
      intel_models/ Pre-trained OpenVINO models and associated documentation. See the Overview of OpenVINO™ Toolkit Pre-Trained Models.
      tools/ Model Downloader and Accuracy Checker tools.
tools/ Contains a symbolic link to the Model Downloader folder and auxiliary tools to work with your models: Calibration tool, Benchmark and Collect Statistics tools.

OpenVINO™ Workflow Overview

The simplified OpenVINO™ workflow is:

  1. Get a trained model for your inference task. Example inference tasks: pedestrian detection, face detection, vehicle detection, license plate recognition, head pose.
  2. Run the trained model through the Model Optimizer to convert it to an Intermediate Representation (IR), which consists of a pair of .xml and .bin files that are used as the input for the Inference Engine.
  3. Use the Inference Engine API in the application to run inference against the Intermediate Representation (optimized model) and output inference results. The application can be an OpenVINO™ sample, demo, or your own application.
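
Taken together, these three steps map onto commands that appear throughout this guide. Below is a minimal end-to-end sketch that assumes the default installation path and uses placeholder model and media names; the exact commands, with real values, are shown in the sections that follow.

# Step 1: get a trained model, for example with the Model Downloader (run from the Model Downloader folder)
sudo python3 ./downloader.py --name <model_name> --output_dir <models_dir>
# Step 2: convert the model to the Intermediate Representation (IR)
cd /opt/intel/openvino/deployment_tools/model_optimizer
python3 ./mo.py --input_model <models_dir>/<model_file> --data_type FP16 --output_dir <ir_dir>
# Step 3: run inference on the IR with an Inference Engine sample application
source /opt/intel/openvino/bin/setupvars.sh
cd ~/inference_engine_samples_build/intel64/Release
./classification_sample_async -i <path_to_media> -m <ir_dir>/<model_name>.xml -d CPU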

Use the Demo Scripts to Learn the Workflow

The demo scripts in /opt/intel/openvino/deployment_tools/demo give you a starting point to learn the OpenVINO workflow. These scripts automatically perform the workflow steps to demonstrate running inference pipelines for different scenarios. The demo scripts let you see how to download a pre-trained model, convert it to the Intermediate Representation with the Model Optimizer, build a sample or demo application, and run inference on an image with the target device of your choice.

NOTE: You must have Internet access to run the demo scripts. If your Internet access is through a proxy server, make sure the operating system environment proxy information is configured.
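
For example, if your proxy settings are not already configured system-wide, you can export them in the shell you use to run the scripts (replace the placeholder host and port with your proxy's values):

export http_proxy=http://<proxy-host>:<proxy-port>
export https_proxy=http://<proxy-host>:<proxy-port>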

The demo scripts can run inference on any supported target device. Although the default inference device is CPU, you can use the -d parameter to change the inference device. The general command to run the scripts looks as follows:

./<script_name> -d [CPU, GPU, MYRIAD, HDDL]
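
For example, to run the Benchmark demo script (described below) on the default CPU device, omit the -d option entirely:

./demo_benchmark_app.sh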

Before running the demo scripts on Intel® Processor Graphics or on an Intel® Neural Compute Stick 2 device, you must complete the Steps for Intel® Processor Graphics (GPU) or Steps for Intel® Neural Compute Stick 2.

The following paragraphs describe each demo script.

Image Classification Demo Script

The demo_squeezenet_download_convert_run script illustrates the image classification pipeline.

The script:

  1. Downloads a SqueezeNet model.
  2. Runs the Model Optimizer to convert the model to the IR.
  3. Builds the Image Classification Sample Async application.
  4. Runs the compiled sample with the car.png image located in the demo directory.

Example of running the Image Classification demo script:

To run the script to perform inference on a CPU:

./demo_squeezenet_download_convert_run.sh

When the script completes, you see the label and confidence for the top-10 categories:

Top 10 results:
Image /home/user/dldt/inference-engine/samples/sample_data/car.png
classid probability label
------- ----------- -----
817 0.8363345 sports car, sport car
511 0.0946488 convertible
479 0.0419131 car wheel
751 0.0091071 racer, race car, racing car
436 0.0068161 beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
656 0.0037564 minivan
586 0.0025741 half track
717 0.0016069 pickup, pickup truck
864 0.0012027 tow truck, tow car, wrecker
581 0.0005882 grille, radiator grille
total inference time: 2.6642941
Average running time of one iteration: 2.6642941 ms
Throughput: 375.3339402 FPS
[ INFO ] Execution successful

Inference Pipeline Demo Script

The demo_security_barrier_camera.sh script uses vehicle recognition, in which vehicle attributes build on each other to narrow in on a specific attribute.

The script:

  1. Downloads three pre-trained model IRs.
  2. Builds the Security Barrier Camera Demo application.
  3. Runs the application with the downloaded models and the car_1.bmp image from the demo directory to show an inference pipeline.

This application:

  1. Identifies an object as a vehicle.
  2. Uses the vehicle identification as input to the second model, which identifies specific vehicle attributes, including the license plate.
  3. Uses the license plate as input to the third model, which recognizes specific characters in the license plate.

Example of running the Inference Pipeline demo script:

To run the script performing inference on Intel® Processor Graphics:

./demo_security_barrier_camera.sh -d GPU

When the demo script completes, you see an image that displays the resulting frame with detections rendered as bounding boxes and accompanying text.

Benchmark Demo Script

The demo_benchmark_app script illustrates how to use the Benchmark Application to estimate deep learning inference performance on supported devices.

The script:

  1. Downloads a SqueezeNet model.
  2. Runs the Model Optimizer to convert the model to the IR.
  3. Builds the Inference Engine Benchmark tool.
  4. Runs the tool with the car.png image located in the demo directory.

Example of running the Benchmark demo script:

To run the script that performs inference on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs:

./demo_benchmark_app.sh -d HDDL

When the script completes, you see the performance counters, resulting latency, and throughput values displayed on the screen.
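
Once the demo script has built the Benchmark tool, you can also run it directly against your own IR and images. The sketch below assumes the samples build directory used elsewhere in this guide and a few commonly supported benchmark_app options (-m, -i, -d, -api, -niter); the available options can vary between releases, so check ./benchmark_app -h on your system:

cd ~/inference_engine_samples_build/intel64/Release
./benchmark_app -m <ir_dir>/<model_name>.xml -i /opt/intel/openvino/deployment_tools/demo/car.png -d CPU -api async -niter 100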

Use Code Samples and Demo Applications to Learn the Workflow

This section guides you through a simplified workflow for the Intel® Distribution of OpenVINO™ toolkit using code samples and demo applications.

You will perform the following steps:

  1. Use the Model Downloader to download suitable models.
  2. Convert the models with the Model Optimizer.
  3. Download media files to run inference on.
  4. Run inference on the Image Classification Code Sample and see the results.
  5. Run inference on the Security Barrier Camera Demo application and see the results.

Each demo and code sample is a separate application, but they share the same behavior and components. The code sample and demo application used in this guide are the Image Classification Sample Async and the Security Barrier Camera Demo application.

Inputs you'll need to specify are the path to your input media, the path to the model's Intermediate Representation (.xml file), and the target device on which to run inference.

Build the Code Samples and Demo Applications

To perform sample inference, run the Image Classification code sample and Security Barrier Camera demo application that were automatically compiled when you ran the Image Classification and Inference Pipeline demo scripts. The binary files are in the ~/inference_engine_cpp_samples_build/intel64/Release and ~/inference_engine_demos_build/intel64/Release directories, respectively (in some releases the samples build directory is named ~/inference_engine_samples_build instead).

To run other sample code or demo applications, build them from the source files delivered as part of the OpenVINO toolkit. To learn how to build these, see the Inference Engine Code Samples Overview and the Demo Applications Overview sections.
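
As a convenience, the toolkit also ships build scripts that compile all of the samples or demos in one step. A minimal sketch, assuming the default installation path (the exact script locations can differ slightly between releases):

# Build all Inference Engine samples
/opt/intel/openvino/deployment_tools/inference_engine/samples/build_samples.sh
# Build all Open Model Zoo demo applications
/opt/intel/openvino/deployment_tools/open_model_zoo/demos/build_demos.sh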

Step 1: Download the Models

You must have a model that is specific to your inference task. Example model types include image classification models, such as SqueezeNet, and detection and recognition models, such as the vehicle detection, vehicle attributes, and license plate recognition models used by the Security Barrier Camera Demo application.

There are several ways to get a model that is suitable for use with the OpenVINO™ toolkit. This guide uses the Model Downloader to download pre-trained OpenVINO and public models from the Open Model Zoo.

Use the Model Downloader to download the models to a models directory. This guide uses <models_dir> as the models directory and <model_name> as the model name:

sudo python3 ./downloader.py --name <model_name> --output_dir <models_dir>

NOTE: Always run the downloader with sudo.
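
The downloader.py script lives in the Model Downloader folder under open_model_zoo/tools (a symbolic link to that folder is also provided under deployment_tools/tools, as noted in the directory structure above; the exact subfolder name can vary by release). For example, to change into the downloader folder and list every model name the tool knows about, you can use the standard --print_all option:

cd /opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader
python3 ./downloader.py --print_all    # lists available model names; nothing is downloaded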

Download the following models if you want to run the Image Classification Sample and Security Barrier Camera Demo application:

Model Name Code Sample or Demo App
squeezenet1.1 Image Classification Sample
vehicle-license-plate-detection-barrier-0106 Security Barrier Camera Demo application
vehicle-attributes-recognition-barrier-0039 Security Barrier Camera Demo application
license-plate-recognition-barrier-0001 Security Barrier Camera Demo application
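
If you prefer, you can download all four models used in this guide at once by passing a comma-separated list of names, mirroring the multi-model example shown below:

sudo python3 ./downloader.py --name squeezenet1.1,vehicle-license-plate-detection-barrier-0106,vehicle-attributes-recognition-barrier-0039,license-plate-recognition-barrier-0001 --output_dir ~/models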

Example of downloading the SqueezeNet Caffe* model:

To download the SqueezeNet 1.1 Caffe* model to the ~/models folder:

sudo python3 ./downloader.py --name squeezenet1.1 --output_dir ~/models

Your screen looks similar to this after the download:

###############|| Downloading models ||###############
========= Downloading /home/username/models/public/squeezenet1.1/squeezenet1.1.prototxt
========= Downloading /home/username/models/public/squeezenet1.1/squeezenet1.1.caffemodel
... 100%, 4834 KB, 3157 KB/s, 1 seconds passed
###############|| Post processing ||###############
========= Replacing text in /home/username/models/public/squeezenet1.1/squeezenet1.1.prototxt =========

Example of downloading models for the Security Barrier Camera Demo application:

To download all three pre-trained models in FP16 precision to the ~/models folder:

./downloader.py --name vehicle-license-plate-detection-barrier-0106,vehicle-attributes-recognition-barrier-0039,license-plate-recognition-barrier-0001 --output_dir ~/models --precisions FP16

Your screen looks similar to this after the download:

################|| Downloading models ||################
========== Downloading /home/username/models/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106.xml
... 100%, 204 KB, 183949 KB/s, 0 seconds passed
========== Downloading /home/username/models/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106.bin
... 100%, 1256 KB, 3948 KB/s, 0 seconds passed
========== Downloading /home/username/models/intel/vehicle-attributes-recognition-barrier-0039/FP16/vehicle-attributes-recognition-barrier-0039.xml
... 100%, 32 KB, 133398 KB/s, 0 seconds passed
========== Downloading /home/username/models/intel/vehicle-attributes-recognition-barrier-0039/FP16/vehicle-attributes-recognition-barrier-0039.bin
... 100%, 1222 KB, 3167 KB/s, 0 seconds passed
========== Downloading /home/username/models/intel/license-plate-recognition-barrier-0001/FP16/license-plate-recognition-barrier-0001.xml
... 100%, 47 KB, 85357 KB/s, 0 seconds passed
========== Downloading /home/username/models/intel/license-plate-recognition-barrier-0001/FP16/license-plate-recognition-barrier-0001.bin
... 100%, 2378 KB, 5333 KB/s, 0 seconds passed
################|| Post-processing ||################

Step 2: Convert the Models to the Intermediate Representation

In this step, your trained models are ready to run through the Model Optimizer to convert them to the Intermediate Representation (IR) format. This is required before using the Inference Engine with the model.

Models in the Intermediate Representation format always include a pair of .xml and .bin files. Make sure you have both files so the Inference Engine can find and load the model.

This guide uses the public SqueezeNet 1.1 Caffe* model to run the Image Classification Sample. See the download example in Step 1: Download the Models above to learn how to get this model.

The squeezenet1.1 model is downloaded in the Caffe* format, so you must use the Model Optimizer to convert it to the IR. The vehicle-license-plate-detection-barrier-0106, vehicle-attributes-recognition-barrier-0039, and license-plate-recognition-barrier-0001 models are already distributed in the Intermediate Representation format, so you do not need to run the Model Optimizer on them.

  1. Create an <ir_dir> directory to contain the model's Intermediate Representation (IR).
  2. The Inference Engine can perform inference on different precision formats, such as FP32, FP16, INT8. To prepare an IR with specific precision, run the Model Optimizer with the appropriate --data_type option.
  3. Run the Model Optimizer script:
    cd /opt/intel/openvino/deployment_tools/model_optimizer
    python3 ./mo.py --input_model <model_dir>/<model_file> --data_type <model_precision> --output_dir <ir_dir>
    The produced IR files are in the <ir_dir> directory.

Example of converting the SqueezeNet Caffe* model:

The following command converts the public SqueezeNet 1.1 Caffe* model to the FP16 IR and saves it to the ~/models/public/squeezenet1.1/ir output directory:

cd /opt/intel/openvino/deployment_tools/model_optimizer
python3 ./mo.py --input_model ~/models/public/squeezenet1.1/squeezenet1.1.caffemodel --data_type FP16 --output_dir ~/models/public/squeezenet1.1/ir

After the Model Optimizer script completes, the produced IR files (squeezenet1.1.xml, squeezenet1.1.bin) are in the specified ~/models/public/squeezenet1.1/ir directory.

Copy the squeezenet1.1.labels file from /opt/intel/openvino/deployment_tools/demo/ to <ir_dir>. This file contains the ImageNet class names, so the inference results display text labels instead of class ID numbers:

cp /opt/intel/openvino/deployment_tools/demo/squeezenet1.1.labels <ir_dir>

Step 3: Download a Video or a Still Photo as Media

Many sources are available from which you can download video media to use with the code samples and demo applications.

As an alternative, the Intel® Distribution of OpenVINO™ toolkit includes two sample images that you can use for running code samples and demo applications: car.png and car_1.bmp, both located in the /opt/intel/openvino/deployment_tools/demo/ directory.

Step 4: Run the Image Classification Code Sample

NOTE: The Image Classification code sample was automatically compiled when you ran the Image Classification demo script. If you want to compile it manually, see the Inference Engine Code Samples Overview section.

To run the Image Classification code sample with an input image on the IR:

  1. Set up the OpenVINO environment variables:
    source /opt/intel/openvino/bin/setupvars.sh
  2. Go to the code samples build directory:
    cd ~/inference_engine_samples_build/intel64/Release
  3. Run the code sample executable, specifying the input media file, the IR of your model, and a target device on which you want to perform inference:
    classification_sample_async -i <path_to_media> -m <path_to_model> -d <target_device>

Examples of running the Image Classification code sample on different devices:

The following commands run the Image Classification Code Sample on different hardware devices, using the car.png file from the /opt/intel/openvino/deployment_tools/demo/ directory as the input image and the IR of your model from ~/models/public/squeezenet1.1/ir:

CPU:

./classification_sample_async -i /opt/intel/openvino/deployment_tools/demo/car.png -m ~/models/public/squeezenet1.1/ir/squeezenet1.1.xml -d CPU

GPU:

NOTE: Running inference on Intel® Processor Graphics (GPU) requires additional hardware configuration steps.

./classification_sample_async -i /opt/intel/openvino/deployment_tools/demo/car.png -m ~/models/public/squeezenet1.1/ir/squeezenet1.1.xml -d GPU

MYRIAD:

NOTE: Running inference on VPU devices (Intel® Movidius™ Neural Compute Stick or Intel® Neural Compute Stick 2) with the MYRIAD plugin requires additional hardware configuration steps.

./classification_sample_async -i /opt/intel/openvino/deployment_tools/demo/car.png -m ~/models/public/squeezenet1.1/ir/squeezenet1.1.xml -d MYRIAD

When the Sample Application completes, you see the label and confidence for the top-10 categories on the display. Below is a sample output with inference results on CPU:

Top 10 results:
Image /home/user/dldt/inference-engine/samples/sample_data/car.png
classid probability label
------- ----------- -----
817 0.8363345 sports car, sport car
511 0.0946488 convertible
479 0.0419131 car wheel
751 0.0091071 racer, race car, racing car
436 0.0068161 beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
656 0.0037564 minivan
586 0.0025741 half track
717 0.0016069 pickup, pickup truck
864 0.0012027 tow truck, tow car, wrecker
581 0.0005882 grille, radiator grille
total inference time: 2.6642941
Average running time of one iteration: 2.6642941 ms
Throughput: 375.3339402 FPS
[ INFO ] Execution successful

Step 5: Run the Security Barrier Camera Demo Application

NOTE: The Security Barrier Camera Demo Application was automatically compiled when you ran the Inference Pipeline demo script. If you want to build it manually, see the Demo Applications Overview section.

To run the Security Barrier Camera Demo Application using an input image on the prepared IRs:

  1. Set up the OpenVINO environment variables:
    source /opt/intel/openvino/bin/setupvars.sh
  2. Go to the demo application build directory:
    cd ~/inference_engine_demos_build/intel64/Release
  3. Run the demo executable, specifying the input media file, list of model IRs, and a target device on which to perform inference:
    ./security_barrier_camera_demo -i <path_to_media> -m <path_to_vehicle-license-plate-detection_model_xml> -m_va <path_to_vehicle_attributes_model_xml> -m_lpr <path_to_license_plate_recognition_model_xml> -d <target_device>

Examples of running the Security Barrier Camera demo application on different devices:

CPU:

./security_barrier_camera_demo -i /opt/intel/openvino/deployment_tools/demo/car_1.bmp -m /home/username/models/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106.xml -m_va /home/username/models/intel/vehicle-attributes-recognition-barrier-0039/FP16/vehicle-attributes-recognition-barrier-0039.xml -m_lpr /home/username/models/intel/license-plate-recognition-barrier-0001/FP16/license-plate-recognition-barrier-0001.xml -d CPU

GPU:

NOTE: Running inference on Intel® Processor Graphics (GPU) requires additional hardware configuration steps.

./security_barrier_camera_demo -i /opt/intel/openvino/deployment_tools/demo/car_1.bmp -m <path_to_model>/vehicle-license-plate-detection-barrier-0106.xml -m_va <path_to_model>/vehicle-attributes-recognition-barrier-0039.xml -m_lpr <path_to_model>/license-plate-recognition-barrier-0001.xml -d GPU

MYRIAD:

NOTE: Running inference on VPU devices (Intel® Movidius™ Neural Compute Stick or Intel® Neural Compute Stick 2) with the MYRIAD plugin requires additional hardware configuration steps.

./security_barrier_camera_demo -i /opt/intel/openvino/deployment_tools/demo/car_1.bmp -m <path_to_model>/vehicle-license-plate-detection-barrier-0106.xml -m_va <path_to_model>/vehicle-attributes-recognition-barrier-0039.xml -m_lpr <path_to_model>/license-plate-recognition-barrier-0001.xml -d MYRIAD

Basic Guidelines for Using Code Samples and Demo Applications

Following are some basic guidelines for executing the OpenVINO™ workflow using the code samples and demo applications:

  1. Before using the OpenVINO™ samples, always set up the environment:
    source /opt/intel/openvino/bin/setupvars.sh
  2. Have the paths ready for your input media file and for the model's Intermediate Representation (.xml file), and know the name of the target device on which you want to run inference.

Typical Code Sample and Demo Application Syntax Examples

Template to call sample code or a demo application:

<path_to_app> -i <path_to_media> -m <path_to_model> -d <target_device>

With the sample information specified, the command might look like this:

./object_detection_demo_ssd_async -i ~/Videos/catshow.mp4 \
-m ~/ir/fp32/mobilenet-ssd.xml -d CPU

Advanced Demo Use

Some demo applications let you use multiple models for different purposes. In these cases, the output of the first model is usually used as the input for later models.

For example, an SSD model detects a variety of objects in a frame; then age, gender, head pose, emotion recognition, and similar models use the objects detected by the SSD as their input to perform their functions.

In these cases, the use pattern in the last part of the template above is usually:

-m_<acronym> … -d_<acronym> …

For head pose:

-m_hp <headpose model> -d_hp <headpose hardware target>

Example of an Entire Command (object_detection + head pose):

./object_detection_demo_ssd_async -i ~/Videos/catshow.mp4 \
-m ~/ir/fp32/mobilenet-ssd.xml -d CPU -m_hp headpose.xml \
-d_hp CPU

Example of an Entire Command (object_detection + head pose + age-gender):

./object_detection_demo_ssd_async -i ~/Videos/catshow.mp4 \
-m ~/ir/fp32/mobilenet-ssd.xml -d CPU -m_hp headpose.xml \
-d_hp CPU -m_ag age-gender.xml -d_ag CPU

You can see all the sample application’s parameters by adding the -h or --help option at the command line.
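
For example, to print the full option list for the Image Classification sample used earlier in this guide:

./classification_sample_async -h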

Additional Resources

Use these resources to learn more about the OpenVINO™ toolkit: the Inference Engine API Reference, the Inference Engine Samples Overview, the Demo Applications Overview, the Model Optimizer Developer Guide, and the Overview of OpenVINO™ Toolkit Pre-Trained Models.