Install Intel® Distribution of OpenVINO™ toolkit for Linux from a Docker Image¶
This guide provides steps on creating a Docker image with Intel® Distribution of OpenVINO™ toolkit for Linux and using the image on different devices.
System Requirements¶
Operating System | Included Python Version
---|---
Ubuntu 18.04 long-term support (LTS), 64-bit | 3.8
Ubuntu 20.04 long-term support (LTS), 64-bit | 3.8
Red Hat Enterprise Linux 8, 64-bit | 3.8
Host operating systems on which you can launch the Docker image:
- Linux
- Windows Subsystem for Linux 2 (WSL2) on CPU or GPU
- macOS on CPU only
To launch a Linux image on WSL2 and run inference on a GPU, make sure that the following requirements are met:
- Only Windows 10 with the 21H2 update or above, and Windows 11, are supported.
- The Intel GPU driver for Windows, version 30.0.100.9684 or newer, must be installed (a quick check from inside WSL2 is shown after this list). For more details, refer to the corresponding driver documentation.
- Currently, the Docker images contain a preinstalled recommended version of the OpenCL Runtime with WSL2 support.
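As a quick sanity check, you can verify from inside your WSL2 distribution that the GPU paravirtualization device and the WSL driver library path used by the GPU run command later in this guide are visible (a minimal sketch; the paths are the same ones mounted into the container in the WSL2 example below):

# Inside the WSL2 distribution
ls -l /dev/dxg
ls /usr/lib/wsl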
Installation Flow¶
There are two ways to install OpenVINO with Docker. You can choose either of them according to your needs:
Use a prebuilt image, with the following steps:

1. Get a prebuilt image from provided sources.
2. Run the image on different devices. To run inference on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, configure the Docker image first before you run the image.

If you want to customize your image, you can also build a Docker image manually, with the following steps:

1. Prepare a Dockerfile.
2. Configure the image for different devices.
3. Run the image on different devices.
Getting a Prebuilt Image from Provided Sources¶
You can find prebuilt images on public container registries such as Docker Hub.
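For example, to pull a development image from Docker Hub (the openvino/ubuntu20_dev repository name and the latest tag are used here only as an illustration; choose the repository and tag that match your OS and needs):

docker pull openvino/ubuntu20_dev:latest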
Preparing a Dockerfile¶
You can use the available Dockerfiles on GitHub or generate a Dockerfile with your settings via the DockerHub CI Framework, which can generate a Dockerfile, then build, test, and deploy an image with the Intel® Distribution of OpenVINO™ toolkit. You can also try our Tutorials, which demonstrate the usage of Docker containers with OpenVINO.
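If you only need a small customization on top of a prebuilt image, a minimal Dockerfile of your own may be enough. The sketch below assumes the openvino/ubuntu20_dev base image and its default non-root user named openvino; adjust both for the image you actually use:

# Dockerfile sketch: extend a prebuilt OpenVINO image with an extra package
FROM openvino/ubuntu20_dev:latest
USER root
RUN apt-get update && \
    apt-get install -y --no-install-recommends vim && \
    rm -rf /var/lib/apt/lists/*
# Switch back to the image's non-root user (name assumed)
USER openvino

Build it with docker run's usual companion, docker build -t <image_name> ., and use <image_name> in the run commands below.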
Configuring the Image for Different Devices¶
If you want to run inference on a CPU or Intel® Neural Compute Stick 2, no extra configuration is needed. Go to Running the image on different devices for the next step.
Configuring Docker Image for GPU¶
If you want to run inference on a GPU, follow the instructions provided in the guide on Configuration for Intel GPU.
Configuring Docker Image for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs¶
Note
When building the Docker image, create a user in the Dockerfile that has the same UID (User Identifier) and GID (Group Identifier) as the user that runs hddldaemon on the host, and then run the application in the Docker image with this user. This step is necessary to run the container as a non-root user.
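A minimal sketch of such a Dockerfile is shown below. The UID/GID values 1000:1000, the user and group names, and the base image are assumptions; check the real values on the host with id <user_running_hddldaemon> and substitute them:

# Dockerfile sketch: match the container user to the host user running hddldaemon
FROM openvino/ubuntu20_dev:latest
USER root
# 1000:1000 are placeholder values; replace with the host user's UID and GID
RUN groupadd -g 1000 hddlgroup && \
    useradd -u 1000 -g hddlgroup -m hddluser
USER hddluser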
To use the Docker container for inference on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, do the following:
1. Set up the environment on the host machine to be used for running Docker. It is required to execute hddldaemon, which is responsible for communication between the HDDL plugin and the board. To learn how to set up the environment (the OpenVINO package or HDDL package must be pre-installed), see the Configuration guide for the HDDL device or Configurations for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs on Linux.
2. Run hddldaemon on the host in a separate terminal session using the following command:

   $HDDL_INSTALL_DIR/hddldaemon
Running the Docker Image on Different Devices¶
Running the Image on CPU¶
Run the Docker image with the following command:
docker run -it --rm <image_name>
Note the following things:
- The kernel reports the same information for all containers as for a native application, for example, CPU and memory information.
- All instructions that are available to the host process are also available to the process in the container, including, for example, AVX2 and AVX512. There are no restrictions.
- Docker does not use virtualization or emulation. The process in Docker is just a regular Linux process, but it is isolated from the external world at the kernel level. Performance loss is minor.
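As a quick way to confirm that the container sees your hardware, you can list the devices visible to OpenVINO from inside the image. This is a sketch assuming an image based on OpenVINO 2022.1 or later, where the openvino.runtime Python API is available:

docker run -it --rm <image_name> \
  /bin/bash -c "python3 -c 'from openvino.runtime import Core; print(Core().available_devices)'"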
Running the Image on GPU¶
Note
Only Intel® integrated graphics are supported.
Note the following things:
- The GPU is not available in the container by default. You must attach it to the container.
- The kernel driver must be installed on the host.
- In the container, a non-root user must be in the video and render groups. To add a user to the render group, follow the Configuration Guide for the Intel® Graphics Compute Runtime for OpenCL™ on Ubuntu 20.04, or see the sketch after this list.
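Alternatively, you can pass the host render group ID into the container at run time so that the image's non-root user can access the GPU device nodes. This is a sketch under the assumption that the host exposes a render node under /dev/dri (combined with the --device /dev/dri attachment described below):

docker run -it --rm --device /dev/dri \
  --group-add=$(stat -c "%g" /dev/dri/render* | head -n 1) <image_name>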
To make the GPU available in the container, attach the GPU to the container using the --device /dev/dri option and run the container:
Ubuntu 18 or RHEL 8:
docker run -it --rm --device /dev/dri <image_name>
Note
If your host system is Ubuntu 20, follow the Configuration Guide for the Intel® Graphics Compute Runtime for OpenCL™ on Ubuntu* 20.04.
WSL2:
docker run -it --rm --device /dev/dxg --volume /usr/lib/wsl:/usr/lib/wsl <image_name>
Note
To launch a Linux image on WSL2, make sure that the additional requirements in System Requirements are met.
Running the Image on Intel® Neural Compute Stick 2¶
Run the Docker image with the following command:
docker run -it --rm --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb <image_name>
If the command above does not work, you can also run the container in privileged mode, enable the Docker network configuration as host, and mount all devices to the container. Run the following command:
docker run -it --rm --privileged -v /dev:/dev --network=host <image_name>
Note
This option is not recommended, as conflicts with Kubernetes and other tools that use orchestration and private networks may occur. Please use it with caution and only for troubleshooting purposes.
Known Limitations¶
- The Intel® Neural Compute Stick 2 device changes its VendorID and DeviceID during execution, so each time it appears to the host system as a brand new device. This means it cannot be mounted as usual.
- UDEV events are not forwarded to the container by default, so the container does not know about the device reconnection. The prebuilt Docker images and provided Dockerfiles include libusb rebuilt without UDEV support.
- Only one NCS2 device connected to the host can be used when running inference in a container.
Running the Image on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs¶
Note
To run inferences on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, make sure that you have configured the Docker image first.
Use the following command:
docker run -it --rm --device=/dev/ion:/dev/ion -v /var/tmp:/var/tmp <image_name>
If your application runs inference on a network with a large (>4 MB) input/output size, the HDDL plugin will use shared memory. In this case, you must mount /dev/shm as a volume:
docker run -it --rm --device=/dev/ion:/dev/ion -v /var/tmp:/var/tmp -v /dev/shm:/dev/shm <image_name>
Note the following things:
- The /dev/ion device needs to be shared to be able to use ion buffers among the plugin, hddldaemon, and the kernel.
- Since separate inference tasks share the same HDDL service communication interface (the service creates mutexes and a socket file in /var/tmp), /var/tmp needs to be mounted and shared among them.
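Before starting the container, you can check on the host that the resources being shared actually exist (a minimal sketch; the exact file names that hddldaemon creates under /var/tmp may vary between HDDL versions):

ls -l /dev/ion
ls /var/tmp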
If the ion Driver is Not Enabled¶
In some cases, the ion driver is not enabled (for example, due to a newer kernel version or an iommu (Input-Output Memory Management Unit) incompatibility), and the lsmod | grep myd_ion command returns empty output. To resolve this issue, use the following command:
docker run -it --rm --ipc=host --net=host -v /var/tmp:/var/tmp <image_name>
If that still does not solve the issue, try starting hddldaemon as the root user on the host. However, this approach is not recommended. Use it with caution and only for troubleshooting purposes.
Running Samples in Docker Image¶
To run the Hello Classification Sample on a specific inference device, run the following commands:
CPU :
docker run -it --rm <image_name>
/bin/bash -c "cd ~ && omz_downloader --name googlenet-v1 --precisions FP16 && omz_converter --name googlenet-v1 --precision FP16 && curl -O https://storage.openvinotoolkit.org/data/test_data/images/car_1.bmp && python3 /opt/intel/openvino/samples/python/hello_classification/hello_classification.py public/googlenet-v1/FP16/googlenet-v1.xml car_1.bmp CPU"
GPU :
docker run -itu root:root --rm --device /dev/dri:/dev/dri <image_name>
/bin/bash -c "omz_downloader --name googlenet-v1 --precisions FP16 && omz_converter --name googlenet-v1 --precision FP16 && curl -O https://storage.openvinotoolkit.org/data/test_data/images/car_1.bmp && python3 samples/python/hello_classification/hello_classification.py public/googlenet-v1/FP16/googlenet-v1.xml car_1.bmp GPU"
MYRIAD :
docker run -itu root:root --rm --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb <image_name>
/bin/bash -c "omz_downloader --name googlenet-v1 --precisions FP16 && omz_converter --name googlenet-v1 --precision FP16 && curl -O https://storage.openvinotoolkit.org/data/test_data/images/car_1.bmp && python3 samples/python/hello_classification/hello_classification.py public/googlenet-v1/FP16/googlenet-v1.xml car_1.bmp MYRIAD"
HDDL :
docker run -itu root:root --rm --device=/dev/ion:/dev/ion -v /var/tmp:/var/tmp -v /dev/shm:/dev/shm <image_name>
/bin/bash -c "omz_downloader --name googlenet-v1 --precisions FP16 && omz_converter --name googlenet-v1 --precision FP16 && curl -O https://storage.openvinotoolkit.org/data/test_data/images/car_1.bmp && umask 000 && python3 samples/python/hello_classification/hello_classification.py public/googlenet-v1/FP16/googlenet-v1.xml car_1.bmp HDDL"
Additional Resources¶
DockerHub CI Framework for Intel® Distribution of OpenVINO™ toolkit. The Framework can generate a Dockerfile, build, test, and deploy an image with the Intel® Distribution of OpenVINO™ toolkit. You can reuse available Dockerfiles, add your layer and customize the image of OpenVINO™ for your needs.
Intel® Distribution of OpenVINO™ toolkit home page: https://software.intel.com/en-us/openvino-toolkit
Intel® Neural Compute Stick 2 Get Started: https://software.intel.com/en-us/neural-compute-stick/get-started