This version of the Deep Learning Workbench (DL Workbench) is a feature preview release. This documentation and the corresponding functionality within the application are subject to change and may contain errors or inaccuracies. Install, host, and use this application at your own risk.

DL Workbench is a web-based graphical environment that enables users to visualize, fine-tune, and compare the performance of deep learning models on various Intel® architecture configurations, such as CPU, Intel® Processor Graphics (GPU), Intel® Movidius™ Neural Compute Stick 2 (NCS 2), and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.

The intuitive web-based interface of the DL Workbench gives you easy access to sophisticated OpenVINO™ toolkit components.

To get started, follow the Installation Guide. Access to the application is protected by an authentication token, which is generated automatically and printed to the console output when you run the container for the first time.
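As a minimal sketch of that first run, the commands below pull and start the DL Workbench container; the image name, tag, and port reflect the public Docker Hub image at the time of writing and may differ for your release, so treat them as assumptions and check the Installation Guide for the exact invocation:

```shell
# Pull the DL Workbench image (image name/tag assumed from Docker Hub)
docker pull openvino/workbench:latest

# Start the container; 5665 is the default web UI port.
# On first start, the authentication token is printed to the console.
docker run -p 127.0.0.1:5665:5665 --name workbench -it openvino/workbench:latest

# If the container is already running detached, the token can be
# recovered from its logs:
docker logs workbench
```

Once the container is up, open `http://127.0.0.1:5665` in a browser and paste the token when prompted.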

General Workflow

To start a new project, select Get Started on the home page:


Create a new project configuration on the Configurations page. A configuration includes:

Once you import and configure a model and dataset, you can experiment with the model to measure its performance and find the parameters that deliver maximum performance on Intel® hardware:

  1. Calibrate the model to INT8 precision
  2. Find the best combination of inference parameters: the number of streams and the batch size
  3. Analyze inference results and compare them across different configurations
  4. Apply the optimal configuration in your application

Core Use Cases

DL Workbench supports several advanced profiling scenarios:

Table of Contents