Install Intel® Distribution of OpenVINO™ toolkit for Linux from a Docker Image

This guide provides steps on creating a Docker image with Intel® Distribution of OpenVINO™ toolkit for Linux and using the image on different devices.

System Requirements

Operating System                             | Included Python Version
---------------------------------------------|------------------------
Ubuntu 18.04 long-term support (LTS), 64-bit |
Ubuntu 20.04 long-term support (LTS), 64-bit |
Red Hat Enterprise Linux 8, 64-bit           |


Host Operating Systems
  • Linux

  • Windows Subsystem for Linux 2 (WSL2) on CPU or GPU

  • macOS on CPU only

To launch a Linux image on WSL2 and run inference on a GPU, make sure that the following requirements are met:

  • Only Windows 10 with the 21H2 update (or later) and Windows 11 are supported.

  • An Intel GPU driver of the required version or later must be installed on the Windows host. See this article for more details.

  • Starting with the 2022.1 release, the Docker images come with the recommended version of the OpenCL Runtime with WSL2 support preinstalled.
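As a sketch of what meeting these requirements enables (the `/dev/dxg` device node and `/usr/lib/wsl` library path are the usual WSL2 GPU passthrough locations; `<image_name>` is a placeholder), launching the image on WSL2 with GPU access typically looks like:

```shell
# Hypothetical invocation: expose the WSL2 virtual GPU device and the
# WSL GPU driver libraries to the container. <image_name> is a placeholder.
docker run -it --rm \
  --device /dev/dxg \
  --volume /usr/lib/wsl:/usr/lib/wsl \
  <image_name>
```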

Installation Flow

There are two ways to install OpenVINO with Docker: pull a prebuilt image, or build one yourself from a Dockerfile. Choose the one that suits your needs.
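For instance, one common route is pulling a prebuilt development image from Docker Hub. This is a sketch; the `openvino/ubuntu20_dev` repository and `latest` tag are used here as an example, so check Docker Hub for the tags that match your release:

```shell
# Pull a prebuilt OpenVINO development image (example repository and tag).
docker pull openvino/ubuntu20_dev:latest

# Confirm the image is available locally.
docker images openvino/ubuntu20_dev
```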

Preparing a Dockerfile

You can use the Dockerfiles available on GitHub, or generate a Dockerfile with your own settings via the DockerHub CI Framework, which can generate a Dockerfile, then build, test, and deploy an image with the Intel® Distribution of OpenVINO™ toolkit. You can also try our tutorials, which demonstrate the use of Docker containers with OpenVINO.
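The framework route can be sketched as follows. The `openvinotoolkit/docker_ci` repository is the DockerHub CI Framework mentioned above; its entry-point options may differ between releases, so this only shows how to fetch it and inspect its help rather than asserting specific build flags:

```shell
# Clone the DockerHub CI Framework repository from GitHub.
git clone https://github.com/openvinotoolkit/docker_ci.git
cd docker_ci

# The available build/test/deploy options vary by release;
# consult the framework's help and README for the exact flags.
python3 docker_openvino.py --help
```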

Configuring the Image for Different Devices

If you want to run inference on a CPU, no extra configuration is needed. Go to Running the Docker Image on Different Devices for the next step.

Configuring Docker Image for GPU

By default, the distributed Docker image for OpenVINO ships with the recommended version of the Intel® Graphics Compute Runtime for oneAPI Level Zero and OpenCL Driver for the operating system. If you want to build an image with a custom version of the OpenCL Runtime included, modify the Dockerfile using the lines below (version 19.41.14441 is used as an example) and build the image manually:

Ubuntu 18.04/20.04 :

WORKDIR /tmp/opencl
RUN useradd -ms /bin/bash -G video,users openvino && \
    chown openvino -R /home/openvino

RUN apt-get update && \
    apt-get install -y --no-install-recommends ocl-icd-libopencl1 && \
    rm -rf /var/lib/apt/lists/* && \
    curl -L "" --output "intel-gmmlib_19.3.2_amd64.deb" && \
    curl -L "" --output "intel-igc-core_1.0.2597_amd64.deb" && \
    curl -L "" --output "intel-igc-opencl_1.0.2597_amd64.deb" && \
    curl -L "" --output "intel-opencl_19.41.14441_amd64.deb" && \
    curl -L "" --output "intel-ocloc_19.04.12237_amd64.deb" && \
    dpkg -i /tmp/opencl/*.deb && \
    ldconfig && \
    rm -rf /tmp/opencl

RHEL 8 :

WORKDIR /tmp/opencl
RUN useradd -ms /bin/bash -G video,users openvino && \
    chown openvino -R /home/openvino
RUN groupmod -g 44 video

RUN yum update -y && yum install -y epel-release && \
    yum update -y && yum install -y ocl-icd ocl-icd-devel && \
    yum clean all && rm -rf /var/cache/yum && \
    curl -L -o intel-gmmlib-19.3.2-1.el7.x86_64.rpm && \
    curl -L -o intel-gmmlib-devel-19.3.2-1.el7.x86_64.rpm && \
    curl -L -o intel-igc-core-1.0.2597-1.el7.x86_64.rpm && \
    curl -L -o intel-igc-opencl-1.0.2597-1.el7.x86_64.rpm && \
    curl -L -o intel-igc-opencl-devel-1.0.2597-1.el7.x86_64.rpm && \
    curl -L -o intel-opencl-19.41.14441-1.el7.x86_64.rpm && \
    rpm -ivh /tmp/opencl/*.rpm && \
    ldconfig && \
    rm -rf /tmp/opencl && \
    yum remove -y epel-release

Running the Docker Image on Different Devices

Running the Image on CPU

Run the Docker image with the following command:

docker run -it --rm <image_name>

Note the following things:

  • The kernel reports the same information (for example, CPU and memory information) for all containers as for a native application.

  • All instructions available to the host process are also available to processes in the container, including, for example, AVX2 and AVX-512. There are no restrictions.

  • Docker does not use virtualization or emulation. A process in Docker is just a regular Linux process, isolated from the external world at the kernel level. The performance loss is minor.
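As a quick sanity check (a sketch; it assumes the image's Python environment provides the `openvino` package, as the 2022.x images do, and `<image_name>` is a placeholder), you can list the devices OpenVINO Runtime sees from inside the container:

```shell
# Start the container and print the devices visible to OpenVINO Runtime.
# On a CPU-only host this is expected to include 'CPU'.
docker run -it --rm <image_name> \
  python3 -c "from openvino.runtime import Core; print(Core().available_devices)"
```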

Running the Image on GPU


Note: Only Intel® integrated graphics are supported.

To make the GPU available in the container, attach it to the container using the --device /dev/dri option and run the container:
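A minimal sketch of such an invocation (`<image_name>` is a placeholder; depending on the host, the container user may also need membership in the group that owns the render node, which is host-specific):

```shell
# Attach the host GPU device node so OpenVINO inside the container can use it.
docker run -it --rm \
  --device /dev/dri:/dev/dri \
  <image_name>
```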

Running Samples in Docker Image

To run the Hello Classification Sample on a specific inference device, run the following commands:


Running on CPU:

docker run -it --rm <image_name> \
/bin/bash -c "cd ~ && omz_downloader --name googlenet-v1 --precisions FP16 && omz_converter --name googlenet-v1 --precisions FP16 && curl -O && python3 /opt/intel/openvino/samples/python/hello_classification/ public/googlenet-v1/FP16/googlenet-v1.xml car_1.bmp CPU"


Running on GPU:

docker run -itu root:root --rm --device /dev/dri:/dev/dri <image_name> \
/bin/bash -c "omz_downloader --name googlenet-v1 --precisions FP16 && omz_converter --name googlenet-v1 --precisions FP16 && curl -O && python3 samples/python/hello_classification/ public/googlenet-v1/FP16/googlenet-v1.xml car_1.bmp GPU"

Additional Resources