Introduction to Deep Learning Workbench

DL Workbench is a web-based graphical environment that enables you to visualize, fine-tune, and compare the performance of deep learning models on various Intel® architecture configurations, such as CPU, Intel® Processor Graphics (GPU), Intel® Movidius™ Neural Compute Stick 2 (NCS 2), and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.

User Goals

The DL Workbench helps you achieve your goals at any stage of your deep learning journey.

If you are a beginner in the deep learning field, the DL Workbench provides you with learning opportunities:

  • Learn what neural networks are, how they work, and how to examine their architectures.
  • Learn the basics of neural network analysis and optimization before production.
  • Get familiar with the OpenVINO™ ecosystem and its main components.

If you have enough experience with neural networks, the DL Workbench provides you with a user-friendly web interface to optimize your model and prepare it for production.

The intuitive web-based interface of the DL Workbench enables you to easily use various OpenVINO™ toolkit components.

In addition, the DL Workbench provides the Jupyter* Playground with tutorials on using OpenVINO™, its Python* API, and the components that help you analyze and optimize your models. The Playground enables you to get started with OpenVINO™ quickly in a preconfigured environment.
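
For example, a first inference with the OpenVINO™ Python API, the kind of step the Playground tutorials walk you through, fits in a few lines. The following is a minimal sketch, assuming the OpenVINO™ 2022.1+ Python API and a placeholder IR model path (model.xml):

```python
# Minimal OpenVINO Python API quick start (sketch).
# Assumptions: OpenVINO 2022.1+ is installed and "model.xml" is a placeholder
# path to an IR model with a single static input.
import numpy as np
import openvino.runtime as ov

core = ov.Core()                               # entry point to the OpenVINO runtime
model = core.read_model("model.xml")           # read an IR (or ONNX) model
compiled = core.compile_model(model, "CPU")    # compile the model for a target device
request = compiled.create_infer_request()

input_shape = list(model.inputs[0].shape)      # e.g. [1, 3, 224, 224]
dummy = np.random.rand(*input_shape).astype(np.float32)
results = request.infer({0: dummy})            # single synchronous inference
print(list(results.values())[0].shape)         # shape of the first output tensor
```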

General Workflow

The typical DL Workbench workflow is as follows:

First, create a project, which includes:

  • Pretrained model
  • Validation dataset to run inference on
  • Target device

Then experiment with the model to measure its performance and find the parameters that deliver the maximum performance on Intel® hardware:

  1. Calibrate the model to INT8 precision
  2. Find the best combination of inference parameters: number of streams and batch size (see the sketch after this list)
  3. Analyze inference results and compare them across different configurations
  4. Integrate the optimal configuration into your application
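
As an illustration of step 2, the hypothetical sketch below compares throughput for several stream counts with the OpenVINO™ Python API. The model path, candidate stream counts, and iteration count are placeholders; the DL Workbench runs this kind of parameter sweep (including batch size and INT8 variants) for you and visualizes the results:

```python
# Hypothetical parameter sweep (sketch): measure throughput for several
# CPU stream counts to pick the best-performing configuration.
# Assumptions: OpenVINO 2022.1+, a placeholder IR model at "model.xml",
# and a single static model input. Batch size could be swept similarly
# by reshaping the model input before compilation.
import time
import numpy as np
import openvino.runtime as ov

core = ov.Core()
model = core.read_model("model.xml")
input_shape = list(model.inputs[0].shape)
dummy = np.random.rand(*input_shape).astype(np.float32)

iterations = 200                               # placeholder number of inferences
results = {}
for streams in (1, 2, 4):                      # candidate stream counts to compare
    compiled = core.compile_model(model, "CPU", {"NUM_STREAMS": str(streams)})
    queue = ov.AsyncInferQueue(compiled)       # pool of async infer requests (optimal count by default)
    start = time.perf_counter()
    for _ in range(iterations):
        queue.start_async({0: dummy})          # requests run asynchronously across streams
    queue.wait_all()                           # wait for all queued inferences to finish
    elapsed = time.perf_counter() - start
    results[streams] = iterations / elapsed    # throughput in frames per second

best = max(results, key=results.get)
print(f"Best NUM_STREAMS={best}: {results[best]:.1f} FPS")
```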

Core Use Cases

DL Workbench supports several advanced profiling scenarios.

Start Using the DL Workbench

You can install and run the DL Workbench locally or use it in the Intel® DevCloud for the Edge:

  • Running the DL Workbench on your local system enables you to profile your neural network on your own hardware configuration, as well as to connect to targets in your local network and profile on them remotely. You get access to an extended feature set, including accuracy measurement and Winograd algorithmic tuning, and you do not compete for resources with other Intel® DevCloud for the Edge users, so your experiments run faster.
    To get started, follow the Installation Guide. The DL Workbench uses authentication tokens to control access to the application. A token is generated automatically and displayed in the console output when you run the container for the first time.
  • Running the DL Workbench in the Intel® DevCloud for the Edge enables you to profile your neural network on various Intel® hardware configurations hosted in the cloud, without any hardware setup on your end, and to integrate the optimized model in the friendly environment of JupyterLab*. You can also choose this option if you just want to get familiar with the DL Workbench and explore its features.
    To get started, follow the instructions in Run DL Workbench in the Intel® DevCloud for the Edge.

Contact Us