Install the DL Workbench

This section contains instructions for running the DL Workbench on your local system. Running the DL Workbench locally enables you to:

  • Profile your neural network on your own hardware configuration, as well as connect to targets in your local network and profile on them remotely.
  • Access an extended feature list, including accuracy measurements and Winograd algorithmic tuning.
  • Avoid competing for resources with other Intel® DevCloud for the Edge users, so your experiments run faster.

You can also run the DL Workbench in the Intel® DevCloud for the Edge, which is a great option for the following scenarios:

  • You want to profile your neural network on various Intel® hardware configurations hosted in the cloud, without any hardware setup at your end, and then integrate the optimized model in the familiar JupyterLab* environment.

    OR
  • You want to start using the DL Workbench and explore its features without any hardware setup at your end.

Prerequisites

The minimum requirements are sufficient to run baseline inference on most models. Use the recommended requirements to make sure all features are usable.

Minimum Requirements

| Prerequisite | Linux* | Windows* | macOS* |
|---|---|---|---|
| Operating system | Ubuntu* 18.04 (other Linux distributions, such as Ubuntu* 16.04 and CentOS* 7, are not validated) | Windows* 10 | macOS* 10.15 Catalina |
| CPU | Intel® Core™ i5 | Intel® Core™ i5 | Intel® Core™ i5 |
| GPU | Intel® Pentium® processor N4200/5 with Intel® HD Graphics | Not supported | Not supported |
| HDDL, Myriad | Intel® Neural Compute Stick 2; Intel® Vision Accelerator Design with Intel® Movidius™ VPUs | Not supported | Not supported |
| Available RAM space | 4 GB | 4 GB | 4 GB |
| Available storage space | 8 GB + space for imported artifacts | 8 GB + space for imported artifacts | 8 GB + space for imported artifacts |
| Docker* | Docker CE 18.06.1 | Docker Desktop 2.1.0.1 | Docker CE 18.06.1 |
| Web browser | Google Chrome* 76 | Google Chrome* 76 | Google Chrome* 76 |
| Resolution | 1440 x 890 | 1440 x 890 | 1440 x 890 |
| Internet | Optional | Optional | Optional |
| Installation method | From Docker Hub; from the OpenVINO™ toolkit package | From Docker Hub | From Docker Hub |

Browsers like Mozilla Firefox* 71 or Apple Safari* 12 are not validated. Microsoft Internet Explorer* is not supported.

Recommended Requirements

| Prerequisite | Linux* | Windows* | macOS* |
|---|---|---|---|
| Operating system | Ubuntu* 18.04 | Windows* 10 | macOS* 10.15 Catalina |
| CPU | Intel® Core™ i7 | Intel® Core™ i7 | Intel® Core™ i7 |
| GPU | Intel® Pentium® processor N4200/5 with Intel® HD Graphics | Not supported | Not supported |
| HDDL, Myriad | Intel® Neural Compute Stick 2; Intel® Vision Accelerator Design with Intel® Movidius™ VPUs | Not supported | Not supported |
| Available RAM space | 16 GB** | 16 GB** | 16 GB** |
| Available storage space | 10 GB + space for imported artifacts | 10 GB + space for imported artifacts | 10 GB + space for imported artifacts |
| Docker* | Docker CE 18.06.1 | Docker Desktop 2.3.0.3 | Docker CE 18.06.1 |
| Web browser | Google Chrome* 83 | Google Chrome* 83 | Google Chrome* 83 |
| Resolution | 1440 x 890 | 1440 x 890 | 1440 x 890 |
| Internet | Required | Required | Required |
| Installation method | From Docker Hub; from the OpenVINO toolkit package | From Docker Hub | From Docker Hub |

** You need more space if you optimize or measure the accuracy of computationally expensive models, such as mask_rcnn_inception_v2_coco or faster-rcnn-resnet101-coco-sparse-60-0001.
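Before installing, you can verify the "Available storage space" row from the tables above with a short sketch. This is an illustrative helper, not part of the DL Workbench; the path and threshold are assumptions you should adapt to where Docker stores its data on your system.

```python
import shutil

def has_min_storage(path=".", required_gb=8):
    """Check whether `path` has at least `required_gb` GB free.

    The minimum-requirements table asks for 8 GB plus space for
    imported artifacts; the recommended table asks for 10 GB.
    """
    free_gb = shutil.disk_usage(path).free / 1024**3
    return free_gb >= required_gb

print(has_min_storage())  # True if the current disk meets the 8 GB minimum
```

Pass `required_gb=10` to check against the recommended requirements instead.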

Supported Inference Devices

The DL Workbench supports various Intel® architectures:

| Code name in DL Workbench | Plugin name in Inference Engine | Examples of devices |
|---|---|---|
| CPU | CPU | Intel® Xeon® with Intel® AVX2 and AVX512, Intel® Core™ processors with Intel® AVX2, Intel Atom® processors with Intel® SSE |
| GPU | GPU | Intel® Processor Graphics, including Intel® HD Graphics and Intel® Iris® Graphics |
| MYRIAD | MYRIAD | Intel® Movidius™ Neural Compute Stick 2 |
| HDDL | HDDL | Intel® Vision Accelerator Design with Intel® Movidius™ VPUs |

For more information, see Introduction to Inference Engine.
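If you already have the OpenVINO toolkit installed on the host, you can check which of the plugin names from the table above are actually available before profiling. This sketch assumes the pre-2021 `openvino.inference_engine` Python API (`IECore.available_devices`); it returns an empty list if the API is not installed.

```python
def available_inference_devices():
    """Return the Inference Engine plugin names visible on this host,
    e.g. ['CPU', 'GPU', 'MYRIAD'], or [] if the OpenVINO Python API
    is not installed."""
    try:
        from openvino.inference_engine import IECore
    except ImportError:
        return []
    return IECore().available_devices

print(available_inference_devices())
```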

Installation Methods

Use one of these methods to install the DL Workbench:

  • From Docker Hub (Linux, Windows, macOS)
  • From the OpenVINO™ toolkit package (Linux only)

For other options, such as launching the DL Workbench container or restarting the container, see Advanced Topics.
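Both installation methods rely on Docker, so it is worth confirming that the Docker version on your system meets the versions listed in the prerequisite tables. A minimal sketch, assuming only that `docker --version` is on the PATH when Docker is installed; the function name is illustrative.

```python
import subprocess

def docker_version():
    """Return the output of `docker --version` (e.g. 'Docker version ...'),
    or None if Docker is not installed or not on the PATH."""
    try:
        result = subprocess.run(
            ["docker", "--version"], capture_output=True, text=True, check=True
        )
    except (FileNotFoundError, subprocess.CalledProcessError):
        return None
    return result.stdout.strip()

print(docker_version())
```

Compare the reported version against the Docker CE 18.06.1 (Linux/macOS) or Docker Desktop (Windows) entries in the tables above.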


See Also