OpenVINO™ Deep Learning Workbench Overview

Deep Learning Workbench (DL Workbench) is an official OpenVINO™ graphical interface designed to significantly simplify preparing pretrained deep learning Computer Vision and Natural Language Processing models for production.

Minimize the time from inference to deployment for neural models right in your browser: import a model, analyze its performance and accuracy, visualize the outputs, optimize it, and make the final model deployment-ready in a matter of minutes. DL Workbench takes you through the full OpenVINO™ workflow, giving you the opportunity to learn about various toolkit components.

_images/openvino_dl_wb.png

Run DL Workbench in Intel® DevCloud

DL Workbench enables you to get a detailed performance assessment, explore inference configurations, and obtain an optimized model ready to be deployed on various Intel® configurations, such as client and server CPU, Intel® Processor Graphics (GPU), Intel® Movidius™ Neural Compute Stick 2 (NCS 2), and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.

DL Workbench also provides a JupyterLab environment that helps you quickly get started with the OpenVINO™ API and command-line interface (CLI). Follow the full OpenVINO™ workflow created for your model and learn about the different toolkit components.

Video

DL Workbench Introduction. Duration: 1:31

User Goals

DL Workbench helps you achieve your goals depending on the stage of your deep learning journey.

If you are a beginner in the deep learning field, DL Workbench provides you with learning opportunities:

  • Learn what neural networks are, how they work, and how to examine their architectures.

  • Learn the basics of neural network analysis and optimization before production.

  • Get familiar with the OpenVINO™ ecosystem and its main components without installing it on your system.

If you have enough experience with neural networks, DL Workbench provides you with a convenient web interface to optimize your model and prepare it for production:

  • Measure and interpret model performance.

  • Tune the model for enhanced performance.

  • Analyze the quality of your model and visualize output.

General Workflow

The diagram below illustrates the typical DL Workbench workflow. Click to see the full-size image:

_images/openvino_dl_wb_diagram_overview.svg

Get a quick overview of the workflow in the DL Workbench User Interface:

_images/openvino_dl_wb_workflow.gif

OpenVINO™ Toolkit Components

The intuitive web-based interface of the DL Workbench enables you to easily use various OpenVINO™ toolkit components:

  • Open Model Zoo: Access a collection of high-quality pretrained deep learning models, both public and Intel-trained, covering a variety of tasks.

  • Model Optimizer: Optimize and transform models trained in supported frameworks into the IR format. Supported frameworks include TensorFlow*, Caffe*, Kaldi*, MXNet*, and the ONNX* format.

  • Benchmark Tool: Estimate deep learning model inference performance on supported devices.

  • Accuracy Checker: Evaluate the accuracy of a model by collecting one or several metric values.

  • Post-Training Optimization Tool: Optimize pretrained models by lowering their precision from floating point (FP32 or FP16) to integer (INT8) without retraining or fine-tuning.
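As an illustration of what it means for the Accuracy Checker to "collect a metric value", here is a minimal pure-Python sketch of one common metric, top-1 classification accuracy. The scores and labels are made-up example data, not part of the toolkit:

```python
def top1_accuracy(predictions, labels):
    """Fraction of samples whose highest-scoring class equals the ground-truth label."""
    correct = sum(
        max(range(len(scores)), key=scores.__getitem__) == label
        for scores, label in zip(predictions, labels)
    )
    return correct / len(labels)

# Made-up scores for 3 samples over 3 classes, with ground-truth labels.
predictions = [[0.1, 0.7, 0.2], [0.8, 0.1, 0.1], [0.3, 0.3, 0.4]]
labels = [1, 0, 1]
print(top1_accuracy(predictions, labels))  # 2 of 3 correct
```

The real tool computes many such metrics (mAP, accuracy, and others) over a validation dataset; this sketch only shows the shape of the computation.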
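The FP32-to-INT8 conversion performed by the Post-Training Optimization Tool can be pictured with a small pure-Python sketch of affine quantization. This illustrates the general technique only, not the tool's actual implementation, which calibrates scales per tensor or per channel on sample data:

```python
def quantize_int8(values):
    """Map FP32 values to INT8 codes in [-128, 127] with an affine (scale, zero-point) scheme."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255 or 1.0          # real-valued step of one INT8 level
    zero_point = round(-128 - lo / scale)   # integer code that represents 0.0
    quantized = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return quantized, scale, zero_point

def dequantize(quantized, scale, zero_point):
    """Recover approximate FP32 values from INT8 codes."""
    return [(q - zero_point) * scale for q in quantized]

weights = [-1.0, 0.0, 0.5, 1.0]             # made-up FP32 weights
codes, scale, zp = quantize_int8(weights)
restored = dequantize(codes, scale, zp)
# Each restored value is within one quantization step (scale) of the original,
# which is why INT8 inference can stay close to FP32 accuracy without retraining.
```
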

Install DL Workbench