NOTES:
Intel® Distribution of OpenVINO™ toolkit for Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA
Milestone | Date |
---|---|
Change Notice Begins | July 2020 |
Change Date | October 2020 |
Intel will be transitioning to the next-generation programmable deep-learning solution based on FPGAs to increase the level of customization possible in FPGA deep learning. As part of this transition, future standard releases (i.e., non-LTS releases) of the Intel® Distribution of OpenVINO™ toolkit will no longer include the Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA.
Intel® Distribution of OpenVINO™ toolkit 2020.3.X LTS release will continue to support Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA. For questions about next-generation programmable deep-learning solutions based on FPGAs, please talk to your sales representative or contact us to get the latest FPGA updates.
IMPORTANT:
- All steps in this guide are required, unless otherwise stated.
- In addition to the download package, you must install dependencies and complete configuration steps.
Your installation is complete when you have finished all of the following steps:
NOTE: If you want to use Microsoft Visual Studio 2019, you are required to install CMake 3.14.
IMPORTANT: As part of this installation, make sure you click the option to add the application to your `PATH` environment variable.
The Intel® Distribution of OpenVINO™ toolkit speeds the deployment of applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNN), the toolkit extends computer vision (CV) workloads across Intel® hardware to maximize performance.
The Intel® Distribution of OpenVINO™ toolkit includes the Intel® Deep Learning Deployment Toolkit (Intel® DLDT). For more information, see the online Intel® Distribution of OpenVINO™ toolkit Overview page.
The Intel® Distribution of OpenVINO™ toolkit for Windows* with FPGA Support:
The following components are installed by default:
Component | Description |
---|---|
Model Optimizer | This tool imports, converts, and optimizes models that were trained in popular frameworks to a format usable by Intel tools, especially the Inference Engine. Popular frameworks include Caffe*, TensorFlow*, MXNet*, and ONNX*. |
Inference Engine | This is the engine that runs the deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
Pre-compiled FPGA bitstream samples | Pre-compiled bitstream samples for the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA, and Intel® Vision Accelerator Design with an Intel® Arria 10 FPGA SG2. |
Intel® FPGA SDK for OpenCL™ software technology | The Intel® FPGA RTE for OpenCL™ provides utilities, host runtime libraries, drivers, and RTE-specific libraries and files. |
OpenCV | OpenCV* community version compiled for Intel® hardware |
Inference Engine Code Samples | A set of simple console applications demonstrating how to utilize specific OpenVINO capabilities in an application and how to perform specific tasks, such as loading a model, running inference, querying specific device capabilities, and more. |
Demo Applications | A set of simple console applications that provide robust application templates to help you implement specific deep learning scenarios. |
The development and target platforms have the same requirements, but you can select different components during the installation, based on your intended use.
Hardware
NOTE: With OpenVINO™ 2020.4 release, Intel® Movidius™ Neural Compute Stick is no longer supported.
NOTE: With OpenVINO™ 2020.4 release, Intel® Programmable Acceleration Card (PAC) with Intel® Arria® 10 GX FPGA is no longer supported on Windows.
Processor Notes:
Operating Systems:
Software
NOTE: If you want to use Microsoft Visual Studio 2019, you are required to install CMake 3.14.
1. Select the Intel® Distribution of OpenVINO™ toolkit for Windows with FPGA Support package from the dropdown menu and download it. The file saves to your `Downloads` directory as `w_openvino_toolkit_fpga_p_<version>.exe`.
2. Go to the `Downloads` folder and double-click `w_openvino_toolkit_fpga_p_<version>.exe`. A window opens to let you choose your installation directory and components. You can also select only the bitstreams for your card, which reduces the size of the installation by several gigabytes. The default installation directory is `C:\Program Files (x86)\IntelSWTools\openvino_<version>`; for simplicity, a shortcut to the latest installation is also created: `C:\Program Files (x86)\IntelSWTools\openvino`. If you choose a different installation directory, the installer creates it for you. For the default options, the Installation summary GUI screen looks like this:
If you are missing external dependencies, you will see a warning screen. Write down the dependencies you are missing; you do not need to take any other action at this time. After installing the Intel® Distribution of OpenVINO™ toolkit core components, install the missing dependencies. The example screen below indicates you are missing one dependency:
When the first part of installation is complete, the final screen informs you that the core components have been installed and additional steps are still required:
NOTE: If you installed the Intel® Distribution of OpenVINO™ toolkit to a non-default install directory, replace `C:\Program Files (x86)\IntelSWTools` with the directory in which you installed the software.
You must update several environment variables before you can compile and run OpenVINO™ applications. Open the Command Prompt, and run the `setupvars.bat` batch file to temporarily set your environment variables:
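Assuming the default installation directory, the command looks like this (adjust the path if you installed elsewhere):

```bat
:: Temporarily set the OpenVINO environment variables for this Command Prompt session.
"C:\Program Files (x86)\IntelSWTools\openvino\bin\setupvars.bat"
```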
(Optional) The OpenVINO toolkit environment variables are removed when you close the Command Prompt window. If you prefer, you can set the environment variables permanently by hand.
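As a rough, partial sketch of a permanent setting, the Windows `setx` command writes a variable to your user environment. The example below stores only `INTEL_OPENVINO_DIR`; `setupvars.bat` sets several other variables as well, so treat this as an illustration, not a full replacement for the script:

```bat
:: Partial sketch: permanently store the OpenVINO install location for your user account.
:: setupvars.bat sets additional variables; this is not a complete permanent setup.
setx INTEL_OPENVINO_DIR "C:\Program Files (x86)\IntelSWTools\openvino"
```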
The environment variables are set. Continue to the next section to configure the Model Optimizer.
IMPORTANT: These steps are required. You must configure the Model Optimizer for at least one framework. The Model Optimizer will fail if you do not complete the steps in this section.
NOTE: If you see an error indicating Python is not installed when you know you installed it, your computer might not be able to find the program. For the instructions to add Python to your system environment variables, see Update Your Windows Environment Variables.
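A quick way to check whether Windows can find Python is to query its version from a Command Prompt; if the command below fails even though Python is installed, your `PATH` likely needs updating:

```bat
python --version
```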
The Model Optimizer is a key component of the Intel® Distribution of OpenVINO™ toolkit. You cannot do inference on your trained model without running the model through the Model Optimizer. When you run a pre-trained model through the Model Optimizer, your output is an Intermediate Representation (IR) of the network. The IR is a pair of files that describe the whole model:
- `.xml`: Describes the network topology
- `.bin`: Contains the weights and biases binary data

The Inference Engine reads, loads, and infers the IR files, using a common API across the CPU, GPU, or VPU hardware.
The Model Optimizer is a Python*-based command line tool (`mo.py`), which is located in `C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer`. Use this tool on models trained with popular deep learning frameworks such as Caffe*, TensorFlow*, MXNet*, and ONNX* to convert them to an optimized IR format that the Inference Engine can use.
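For example, converting a TensorFlow* frozen graph typically looks like the following; the model file and output directory are placeholders for your own paths:

```bat
cd "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer"
:: <your_model>.pb and the output directory below are placeholders.
python mo.py --input_model C:\models\<your_model>.pb --output_dir C:\models\ir
```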
This section explains how to use scripts to configure the Model Optimizer either for all of the supported frameworks at the same time or for individual frameworks. If you want to manually configure the Model Optimizer instead of using scripts, see the Using Manual Configuration Process section on the Configuring the Model Optimizer page.
For more information about the Model Optimizer, see the Model Optimizer Developer Guide.
You can configure the Model Optimizer either for all supported frameworks at once or for one framework at a time. Choose the option that best suits your needs. If you see error messages, make sure you installed all dependencies.
IMPORTANT: Internet access is required to execute the following steps successfully. If you can access the Internet only through a proxy server, make sure it is configured in your environment.
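If you are behind a proxy, one common way to make it visible to the configuration scripts for the current Command Prompt session is through the standard proxy environment variables; the address below is a placeholder:

```bat
:: Placeholder proxy address: replace with your actual proxy host and port.
set HTTP_PROXY=http://proxy.example.com:8080
set HTTPS_PROXY=http://proxy.example.com:8080
```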
NOTE: In the steps below:
- If you want to use the Model Optimizer from another installed version of the Intel® Distribution of OpenVINO™ toolkit, replace `openvino` with `openvino_<version>`.
- If you installed the Intel® Distribution of OpenVINO™ toolkit to a non-default installation directory, replace `C:\Program Files (x86)\IntelSWTools` with the directory where you installed the software.
These steps use a command prompt to make sure you see error messages.
Open a command prompt. To do so, type `cmd` in your Search Windows box and then press Enter. Type the commands in the window that opens:
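For example, assuming the default installation directory, you can configure the Model Optimizer for all supported frameworks at once, or for a single framework:

```bat
cd "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\install_prerequisites"
:: Configure the Model Optimizer for all supported frameworks at once:
install_prerequisites.bat
:: Or configure it for a single framework, for example Caffe:
install_prerequisites_caffe.bat
```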
The Model Optimizer is configured for one or more frameworks. Success is indicated by a screen similar to this:
You are ready to use two short demos to see the results of running the Intel Distribution of OpenVINO toolkit and to verify your installation was successful. The demo scripts are required since they perform additional configuration steps. Continue to the next section.
If you want to use a GPU or VPU, or update your Windows* environment variables, read through the Optional Steps section.
IMPORTANT: This section is required. In addition to confirming your installation was successful, demo scripts perform other steps, such as setting up your computer to use the Inference Engine samples.
NOTE: The paths in this section assume you used the default installation directory. If you used a directory other than `C:\Program Files (x86)\IntelSWTools`, update the directory with the location where you installed the software.
To verify the installation and compile two samples, run the verification applications provided with the product on the CPU:
To run the script, start the `demo_squeezenet_download_convert_run.bat` file:
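Assuming the default installation directory, the commands are:

```bat
cd "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\demo"
demo_squeezenet_download_convert_run.bat
```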
This script downloads a SqueezeNet model and uses the Model Optimizer to convert the model to the `.bin` and `.xml` Intermediate Representation (IR) files. The Inference Engine requires this model conversion so it can use the IR as input and achieve optimum performance on Intel hardware.
This verification script builds the Image Classification Sample Async application and runs it with the `car.png` image in the demo directory. For a brief description of the Intermediate Representation, see Configuring the Model Optimizer.
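Under the hood, the script invokes the compiled sample with the converted model; a hand-run equivalent would look roughly like the following, with the model and image paths standing in for the artifacts the script downloads and builds:

```bat
:: Rough equivalent of what the script runs; paths are placeholders.
classification_sample_async.exe -i car.png -m squeezenet1.1.xml -d CPU
```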
When the verification script completes, you will see the label and confidence for the top 10 categories:
This demo is complete. Leave the console open and continue to the next section to run the Inference Pipeline demo.
To run the script, start the `demo_security_barrier_camera.bat` file while still in the console:
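Still in the `demo` directory from the previous step, start the script:

```bat
demo_security_barrier_camera.bat
```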
This script downloads three pre-trained model IRs, builds the Security Barrier Camera Demo application, and runs it with the downloaded models and the `car_1.bmp` image from the `demo` directory to show an inference pipeline. The verification script uses vehicle recognition, in which vehicle attributes build on each other to narrow in on a specific attribute.
First, an object is identified as a vehicle. This identification is used as input to the next model, which identifies specific vehicle attributes, including the license plate. Finally, the attributes identified as the license plate are used as input to the third model, which recognizes specific characters in the license plate.
When the demo completes, you have two windows open:
Close the image viewer window to end the demo.
To learn more about the verification scripts, see `README.txt` in `C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\demo`.
For a detailed description of the OpenVINO™ pre-trained object detection and object recognition models, see the Overview of OpenVINO™ toolkit Pre-Trained Models page.
In this section, you saw a preview of the Intel® Distribution of OpenVINO™ toolkit capabilities.
Congratulations. You have completed all the required installation, configuration, and build steps to work with your trained models using the CPU.
If you want to use Intel® Processor Graphics (GPU), Intel® Neural Compute Stick 2, or Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, or add CMake* and Python* to your Windows* environment variables, read through the next section for additional steps.
If you want to continue and run the Image Classification Sample Application on one of the supported hardware devices, see the Run the Image Classification Sample Application section.
Install your compatible hardware from the list of supported components below.
NOTE: Once you've completed your hardware installation, you'll return to this guide to finish installation and configuration of the Intel® Distribution of OpenVINO™ toolkit.
Links to install and configure compatible hardware
Congratulations, you have finished the Intel® Distribution of OpenVINO™ toolkit installation for FPGA. To learn more about how the Intel® Distribution of OpenVINO™ toolkit works, the Hello World tutorial and other resources are provided below.
Use the optional steps below if you want to:
NOTE: These steps are required only if you want to use a GPU.
If your applications offload computation to Intel® Integrated Graphics, you must have the Intel Graphics Driver for Windows version 15.65 or higher. To see if you have this driver installed:
Click the drop-down arrow to view the Display adapters. You see the adapter that is installed in your computer:
Click the Driver tab to see the driver version. Make sure the version number is 15.65 or higher.
You are done updating your device driver and are ready to use your GPU.
NOTE: These steps are required only if you want to use Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.
To perform inference on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, the following additional installation steps are required:
1. Go to the `<INSTALL_DIR>\deployment_tools\inference-engine\external\hddl\SMBusDriver` directory, where `<INSTALL_DIR>` is the directory in which the Intel Distribution of OpenVINO toolkit is installed.
2. Right-click the `hddlsmbus.inf` file and choose Install from the pop-up menu.

You are done installing your device driver and are ready to use your Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.
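If you prefer a command line, Windows also ships the `pnputil` utility for installing INF drivers. This is an alternative to the documented right-click method, sketched here under the assumption that `pnputil` is available on your Windows version:

```bat
:: Command-line alternative to the right-click Install method.
:: Run from an elevated (Administrator) Command Prompt.
cd "<INSTALL_DIR>\deployment_tools\inference-engine\external\hddl\SMBusDriver"
pnputil /add-driver hddlsmbus.inf /install
```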
See also:
After configuration is done, you are ready to run the verification scripts with the HDDL Plugin for your Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.
NOTE: These steps are only required under special circumstances, such as if you forgot to check the box during the CMake* or Python* installation to add the application to your Windows `PATH` environment variable.
Use these steps to update your Windows `PATH` if a command you execute returns an error message stating that an application cannot be found. This might happen if you did not add CMake or Python to your `PATH` environment variable during installation.
- To add CMake to your `PATH`, browse to the directory in which you installed CMake. The default directory is `C:\Program Files\CMake`.
- To add Python to your `PATH`, browse to the directory in which you installed Python. The default directory is `C:\Users\<USER_ID>\AppData\Local\Programs\Python\Python36\Python`.
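As a command-line alternative to browsing through the GUI, the sketch below appends the default locations with `setx`. The directories shown are the defaults named above and are assumptions if you installed elsewhere; note that `setx` truncates values longer than 1024 characters and folds the combined user and system paths into your user `PATH`, so the GUI route is safer for long `PATH` values:

```bat
:: Sketch only: appends the default CMake and Python locations to your user PATH.
:: setx truncates values longer than 1024 characters; prefer the GUI for long paths.
setx PATH "%PATH%;C:\Program Files\CMake\bin;C:\Users\<USER_ID>\AppData\Local\Programs\Python\Python36"
```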
Your `PATH` environment variable is updated.
Refer to the OpenVINO™ with FPGA Hello World Face Detection Exercise.
Additional Resources
To learn more about converting models, go to: