# Install OpenVINO™ Runtime for macOS from Installer

Note

Since the OpenVINO™ 2022.1 release, the following development tools are no longer part of the installer: Model Optimizer, Post-Training Optimization Tool, Model Downloader and other Open Model Zoo tools, Accuracy Checker, and Annotation Converter. These tools are now only available on pypi.org.

Note

The Intel® Distribution of OpenVINO™ toolkit is supported on macOS version 10.15 with Intel® processor-based machines.

## System Requirements

macOS 10.15

Optimized for these processors:

• 6th to 12th generation Intel® Core™ processors and Intel® Xeon® processors

• 3rd generation Intel® Xeon® Scalable processor (formerly code named Cooper Lake)

• Intel® Xeon® Scalable processor (formerly Skylake and Cascade Lake)

• Intel® Neural Compute Stick 2

Note

The current version of the Intel® Distribution of OpenVINO™ toolkit for macOS supports inference on Intel CPUs and Intel® Neural Compute Stick 2 devices only.

• CMake 3.13 or higher (choose the “macOS 10.13 or later” installer). For the default install location, add /Applications/CMake.app/Contents/bin to your PATH.

• Python 3.6 - 3.9. Install it and add it to your PATH.

• Apple Xcode Command Line Tools. In the terminal, run xcode-select --install from any directory.

• (Optional) Apple Xcode IDE (not required for OpenVINO™, but useful for development)
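The prerequisites above can be sanity-checked from the terminal. The snippet below is a minimal sketch: it only inspects the python3 version, and leaves the CMake and Xcode checks as comments because they require those tools to already be installed.

```shell
# Sketch: check that the active python3 falls in the supported 3.6 - 3.9 range.
ver="$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')"
case "$ver" in
  3.6|3.7|3.8|3.9) echo "python3 $ver is in the supported range" ;;
  *)               echo "python3 $ver is outside the supported 3.6 - 3.9 range" ;;
esac

# Once installed, the other prerequisites can be verified similarly:
#   cmake --version     # expects 3.13 or higher
#   xcode-select -p     # prints the Command Line Tools path when installed
```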

## Overview

This guide provides step-by-step instructions on how to install the Intel® Distribution of OpenVINO™ toolkit for macOS. The following steps will be covered:

1. Install the Intel® Distribution of OpenVINO™ Toolkit Core Components
2. Configure the Environment
3. (Optional) Install Additional Components
4. (Optional) Configure the Intel® Neural Compute Stick 2
5. What’s next?

## Step 1: Install the Intel® Distribution of OpenVINO™ Toolkit Core Components

1. Download the Intel® Distribution of OpenVINO™ toolkit package file from the Intel® Distribution of OpenVINO™ toolkit for macOS download page, selecting the macOS package from the dropdown menu.

2. Go to the directory where you downloaded the Intel® Distribution of OpenVINO™ toolkit. This document assumes this is your Downloads directory. By default, the disk image file is saved as m_openvino_toolkit_p_<version>.dmg.

3. Double-click the m_openvino_toolkit_p_<version>.dmg file to mount. The disk image is mounted to /Volumes/m_openvino_toolkit_p_<version> and automatically opens in a separate window.

4. Run the installation wizard application bootstrapper.app and follow the instructions on your screen.

By default, the Intel® Distribution of OpenVINO™ is installed in the following directory, referred to as <INSTALL_DIR> elsewhere in the documentation:

/opt/intel/openvino_<version>/

For simplicity, a symbolic link to the latest installation is also created: /opt/intel/openvino_2022/.
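Either path can serve as <INSTALL_DIR>. The following sketch only illustrates how the two relate; the version string is an example, not necessarily the one you installed.

```shell
# Illustrative only: relationship between the versioned directory and the symlink.
version="2022.1.0"                            # example; substitute your installed <version>
install_dir="/opt/intel/openvino_${version}"  # versioned installation directory
symlink="/opt/intel/openvino_2022"            # symbolic link to the latest installation
echo "INSTALL_DIR: ${install_dir}"
# After installation you can confirm the link target with:
#   ls -l /opt/intel/openvino_2022
```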

The core components are now installed. Continue to the next section to configure the environment.

## Step 2: Configure the Environment

You must update several environment variables before you can compile and run OpenVINO™ applications. Set the environment variables as follows:

source <INSTALL_DIR>/setupvars.sh

If you have more than one OpenVINO™ version installed on your machine, you can switch between them by sourcing the setupvars.sh of the version you want.
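One convenient pattern for switching is a small shell function that activates a chosen version. This is a hypothetical helper, assuming each version is installed under /opt/intel/openvino_<version>/.

```shell
# Hypothetical helper: activate one of several installed OpenVINO versions.
activate_openvino() {
  # Sourcing setupvars.sh configures the current shell for that install.
  . "/opt/intel/openvino_${1}/setupvars.sh"
}

# Usage (in an interactive shell):
#   activate_openvino 2022    # expects: [setupvars.sh] OpenVINO™ environment initialized
```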

Note

You can also run this script every time you start a new terminal session. To do so, open ~/.bashrc in your favorite editor and add source <INSTALL_DIR>/setupvars.sh. The next time you open a terminal, you will see [setupvars.sh] OpenVINO™ environment initialized. Changing .bashrc is not recommended when you have multiple OpenVINO™ versions on your machine and want to switch among them, as each may require a different setup.
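As a sketch, the line can be appended with a duplicate guard so repeated runs do not pile up entries. Keep <INSTALL_DIR> replaced with your actual install path; note that on macOS 10.15 the default login shell is zsh, so ~/.zshrc may apply instead of ~/.bashrc.

```shell
# Sketch: append the setup line to the shell profile once, avoiding duplicates.
rc="$HOME/.bashrc"                          # use ~/.zshrc if your shell is zsh
line='source <INSTALL_DIR>/setupvars.sh'    # substitute your real install path
grep -qxF "$line" "$rc" 2>/dev/null || printf '%s\n' "$line" >> "$rc"
```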

The environment variables are set. Continue to the next section if you want to download any additional components.

## Step 3 (Optional): Install Additional Components

Note

Since the OpenVINO™ 2022.1 release, the following development tools are no longer part of the installer: Model Optimizer, Post-Training Optimization Tool, Model Downloader and other Open Model Zoo tools, Accuracy Checker, and Annotation Converter. The OpenVINO™ Development Tools can now only be installed via PyPI. See Install OpenVINO™ Development Tools for detailed steps.

## Step 4 (Optional): Configure the Intel® Neural Compute Stick 2

If you want to run inference on an Intel® Neural Compute Stick 2, use the following instructions to set up the device: NCS2 Setup Guide.

## Step 5: What’s next?

Now you are ready to try out the toolkit. You can use the following tutorials to write your applications using Python and C++.