Introduction

This version of Deep Learning Workbench is a feature preview release. As a result, this documentation and the corresponding functionality within the application are subject to change and may contain errors or inaccuracies. Install, host, and use this application at your own risk.

DL Workbench is a web-based graphical environment that lets you visualize the simulated performance of deep learning models and datasets on various Intel® architecture configurations (CPU, GPU, VPU). In addition, you can automatically fine-tune the performance of an OpenVINO™ model by reducing the precision of certain model layers (quantization) from FP32 to INT8. Additional tuning algorithms will be supported in future releases.
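
To make the precision reduction concrete, the sketch below quantizes an FP32 weight tensor to INT8 with a simple symmetric per-tensor scale and measures the round-trip error. This is a conceptual illustration only, not the scheme DL Workbench or the Calibration tool uses; those rely on per-layer statistics collected over a calibration dataset.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization of FP32 weights to INT8."""
    scale = np.abs(weights).max() / 127.0              # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an FP32 approximation of the original weights."""
    return q.astype(np.float32) * scale

weights = np.random.randn(64, 64).astype(np.float32)   # stand-in for one layer's FP32 weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
print("max round-trip error:", np.abs(weights - restored).max())
```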

To get started, follow the Installation Guide, then go to http://127.0.0.1:5665 in your web browser. Google Chrome* 72 or higher is recommended.
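
If the page does not load, a quick check like the one below confirms whether the server is reachable. The address is the default one; adjust it if you mapped a different port during installation.

```python
from urllib.request import urlopen
from urllib.error import URLError

# Default DL Workbench address; change it if you published another port.
URL = "http://127.0.0.1:5665"

try:
    with urlopen(URL, timeout=5) as response:
        print(f"DL Workbench responded with HTTP {response.status}")
except URLError as err:
    print(f"Could not reach {URL}: {err}")
```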

To learn more about precision tuning for optimized performance on Intel® architecture, refer to the Calibration tool documentation.

General Workflow

To start a new project, select Get Started on the home page.

Create a new project configuration through the One-Page Wizard.

Then, profile a configuration with the following steps:

  1. Experiment with model optimization and inference options
  2. Analyze inference results
  3. Apply an optimal configuration to your application (see the example below)
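
Once profiling points to a good combination of, for example, batch size and parallel streams, you can carry those values into your own inference code. The sketch below assumes the OpenVINO Inference Engine Python API (openvino.inference_engine.IECore); the model paths and the chosen stream, batch, and request counts are placeholders, and exact method names may differ between releases.

```python
from openvino.inference_engine import IECore

# Hypothetical IR paths; substitute your own converted model.
MODEL_XML = "model.xml"
MODEL_BIN = "model.bin"

ie = IECore()
net = ie.read_network(model=MODEL_XML, weights=MODEL_BIN)
net.batch_size = 4  # batch size found to perform best during profiling

# Apply the stream and request counts identified in DL Workbench.
exec_net = ie.load_network(
    network=net,
    device_name="CPU",
    config={"CPU_THROUGHPUT_STREAMS": "4"},
    num_requests=4,
)
```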

Pages Overview

DL Workbench is a web application running on a Linux* server that provides an intuitive graphical interface for profiling and tuning the performance of deep learning models.

When you go to the DL Workbench URL (http://127.0.0.1:5665 by default), you see the Get Started page, which invites you to start a new project.

[Screenshot: Get Started page]

Once you import and configure a model and a dataset, you can run inference experiments to measure the model's performance and identify the parameters that achieve the best results on Intel® hardware.
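
Outside the GUI, an inference experiment of this kind boils down to timing the same model under different parameters and comparing throughput. The sketch below is a generic illustration: run_inference is a hypothetical placeholder for your actual model call, not DL Workbench or OpenVINO code.

```python
import time

def run_inference(batch_size: int) -> None:
    """Placeholder for a real model call; replace with your own inference code."""
    time.sleep(0.001 * batch_size)  # simulated work proportional to batch size

def measure_throughput(batch_size: int, iterations: int = 50) -> float:
    """Return samples processed per second for one configuration."""
    start = time.perf_counter()
    for _ in range(iterations):
        run_inference(batch_size)
    elapsed = time.perf_counter() - start
    return iterations * batch_size / elapsed

for batch in (1, 2, 4, 8):
    print(f"batch={batch}: {measure_throughput(batch):.1f} samples/s")
```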

Configuration Tab

Use this tab to create and manage the profiling configuration for an initial inference run of your model, which serves as the reference for further optimization. A configuration combines a model, a dataset, and the target hardware.

Core Use Cases Overview

DL Workbench supports several advanced profiling scenarios.

Table of Contents