Get Started with OpenVINO™ Toolkit via Deep Learning Workbench

The OpenVINO™ toolkit optimizes and runs Deep Learning Neural Network models on Intel® hardware. This guide helps you get started with the OpenVINO™ toolkit via the Deep Learning Workbench (DL Workbench) on Linux*, Windows*, or macOS*.

In this guide, you will:

  • Learn the OpenVINO™ inference workflow.
  • Start DL Workbench on Linux. Links to instructions for other operating systems are provided as well.
  • Create a project and run a baseline inference.

DL Workbench is a web-based graphical environment that enables you to use sophisticated OpenVINO™ toolkit components with ease.

DL Workbench supports the following scenarios:

  1. Calibrate a model to INT8 precision.
  2. Find the best combination of inference parameters: the number of streams and the batch size (see the sketch after this list).
  3. Analyze inference results and compare them across different configurations.
  4. Implement the optimal configuration in your application.
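
You can later reproduce such experiments outside DL Workbench with the toolkit's Benchmark Tool, which exposes the same parameters on the command line. Below is a minimal sketch, assuming the Python benchmark_app shipped with your OpenVINO™ installation is available and model.xml is an Intermediate Representation; script locations and flags vary between toolkit versions:

python3 benchmark_app.py -m model.xml -d CPU -nstreams 2 -b 4

Here -nstreams sets the number of inference streams and -b the batch size, the same two parameters DL Workbench sweeps for you.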

Prerequisites

| Prerequisite | Linux* | Windows* | macOS* |
|--------------|--------|----------|--------|
| Operating system | Ubuntu* 18.04 (other distributions, such as Ubuntu* 16.04 and CentOS* 7, are not validated) | Windows* 10 | macOS* 10.15 Catalina |
| CPU | Intel® Core™ i5 | Intel® Core™ i5 | Intel® Core™ i5 |
| GPU | Intel® Pentium® processor N4200/5 with Intel® HD Graphics | Not supported | Not supported |
| HDDL, Myriad | Intel® Neural Compute Stick 2; Intel® Vision Accelerator Design with Intel® Movidius™ VPUs | Not supported | Not supported |
| Available RAM space | 4 GB | 4 GB | 4 GB |
| Available storage space | 8 GB + space for imported artifacts | 8 GB + space for imported artifacts | 8 GB + space for imported artifacts |
| Docker* | Docker CE 18.06.1 | Docker Desktop 2.1.0.1 | Docker CE 18.06.1 |
| Web browser | Google Chrome* 76 | Google Chrome* 76 | Google Chrome* 76 |
| Resolution | 1440 x 890 | 1440 x 890 | 1440 x 890 |
| Internet | Optional | Optional | Optional |
| Installation method | From Docker Hub or from the OpenVINO™ toolkit package | From Docker Hub | From Docker Hub |

On all operating systems, browsers such as Mozilla Firefox* 71 and Apple Safari* 12 are not validated, and Microsoft Internet Explorer* is not supported.
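
To confirm that your Docker installation meets the minimum version listed above, run:

docker --version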

Start DL Workbench

This section provides instructions to run the DL Workbench on Linux from Docker Hub.

Use the command below to pull the latest Docker image with the application and run it:

wget https://raw.githubusercontent.com/openvinotoolkit/workbench_aux/master/start_workbench.sh && bash start_workbench.sh

DL Workbench uses authentication tokens to control access to the application. A token is generated automatically and displayed in the console output the first time you run the container. Once the command is executed, follow the link that contains the token. The Get Started page opens.
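
If you prefer not to use the helper script, you can start the container directly with Docker. The following is a minimal sketch, assuming the public openvino/workbench image on Docker Hub and the default application port 5665; check the image documentation for the current tag and options:

docker run -p 127.0.0.1:5665:5665 --name workbench -it openvino/workbench:latest

The authentication token is printed to the console in this case as well.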

For details and more installation options, see the Additional Resources section at the end of this guide.

OpenVINO™ DL Workbench Workflow Overview

The simplified OpenVINO™ DL Workbench workflow is:

  1. Get a trained model for your inference task, for example: pedestrian detection, face detection, vehicle detection, license plate recognition, or head pose estimation.
  2. Run the trained model through the Model Optimizer to convert it to an Intermediate Representation, a pair of .xml and .bin files that serve as the input for the Inference Engine.
  3. Run inference against the Intermediate Representation (the optimized model) and output the results. A command-line sketch of steps 2 and 3 follows this list.
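
As a point of reference, steps 2 and 3 look roughly like this on the command line. This is a sketch only, assuming a Caffe* model file model.caffemodel and an OpenVINO™ installation where the Model Optimizer script mo.py and the Python benchmark_app are available; exact paths and entry points differ between toolkit versions:

python3 mo.py --input_model model.caffemodel --output_dir ir/
python3 benchmark_app.py -m ir/model.xml -d CPU

The first command produces the .xml and .bin pair of the Intermediate Representation; the second runs inference on it with the Inference Engine.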

Run Baseline Inference

This section illustrates a sample use case: running inference on a pretrained model from the Intel® Open Model Zoo with an autogenerated noise dataset on a CPU device.

Once you log in to the DL Workbench, create a project, which is a combination of a model, a dataset, and a target device. Follow the steps below:

Step 1. Open a New Project

On the Active Projects page, click Create to open the Create Project page.

Step 2. Choose a Pretrained Model

Click Import next to the Model table on the Create Project page. The Import Model page opens. Select the squeezenet1.1 model from the Open Model Zoo and click Import.
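
Outside DL Workbench, the same model can be fetched with the Open Model Zoo Model Downloader. A sketch, assuming the downloader.py script from your OpenVINO™ installation (newer toolkit versions ship it as omz_downloader):

python3 downloader.py --name squeezenet1.1 -o models/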

Step 3. Convert the Model into Intermediate Representation

The Convert Model to IR tab opens. Keep the FP16 precision and click Convert.

You are directed back to the Create Project page where you can see the status of the chosen model.
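
For reference, the equivalent command-line conversion passes the FP16 precision to the Model Optimizer. A sketch under the same assumptions as the earlier example, with the downloaded squeezenet1.1 Caffe* weights as input:

python3 mo.py --input_model squeezenet1.1.caffemodel --data_type FP16 --output_dir ir/

Newer Model Optimizer versions replace --data_type with --compress_to_fp16.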

Step 4. Generate a Noise Dataset

Scroll down to the Validation Dataset table. Click Generate next to the table heading.

The Autogenerate Dataset page opens. Click Generate.

You are directed back to the Create Project page where you can see the status of the dataset.

Step 5. Create the Project and Run a Baseline Inference

On the Create Project page, select the imported model, CPU target, and the generated dataset. Click Create.

The inference starts; you cannot proceed until it is done.

Once the inference is complete, the Projects page opens automatically. Find your inference job in the Projects Settings table, which lists all jobs.

Congratulations, you have performed your first inference in the OpenVINO™ DL Workbench. From here, you can optimize the model (for example, calibrate it to INT8 precision) and experiment with inference configurations, as described in the scenarios above.

For detailed instructions on creating a new project, see the Additional Resources section below.

Additional Resources