The OpenVINO™ toolkit optimizes and runs Deep Learning Neural Network models on Intel® hardware. This guide helps you get started with the OpenVINO™ toolkit you installed on Raspbian* OS.
This guide describes the toolkit components available on Raspbian* OS and walks you through running a pre-trained face detection sample with the Inference Engine.
On Raspbian* OS, the OpenVINO™ toolkit includes only the Inference Engine module.
NOTE:
- The OpenVINO™ package for Raspberry Pi* does not include the Model Optimizer. To convert models to the Intermediate Representation (IR), install the Model Optimizer separately on your host machine.
- The package does not include the Open Model Zoo demo applications. You can download them separately from the Open Model Zoo repository.
In addition, code samples are provided to help you get up and running with the toolkit.
This guide assumes you completed all Intel® Distribution of OpenVINO™ toolkit installation and configuration steps. If you have not yet installed and configured the toolkit, see Install Intel® Distribution of OpenVINO™ toolkit for Raspbian*.
The OpenVINO™ toolkit for Raspbian* OS is distributed without an installer. This document refers to the directory to which you unpacked the toolkit package as <INSTALL_DIR>.
The primary tools for deploying your models and applications are installed to the <INSTALL_DIR>/deployment_tools directory.
The deployment_tools directory has the following structure:
| Directory | Description |
|---|---|
| inference_engine/ | Inference Engine directory. Contains Inference Engine API binaries and source files, samples and extensions source files, and resources like hardware drivers. |
| external/ | Third-party dependencies and drivers. |
| include/ | Inference Engine header files. For API documentation, see the Inference Engine API Reference. |
| lib/ | Inference Engine libraries. |
| samples/ | Inference Engine samples. Contains source code for C++ and Python* samples and build scripts. See the Inference Engine Samples Overview. |
| share/ | CMake configuration files for linking with the Inference Engine. |
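Before building or running anything, the toolkit's environment script must be sourced so that the libraries in lib/ are found at runtime. Below is a minimal sketch; the install path and the bin/setupvars.sh location are assumptions based on the standard package layout, so adjust them to your unpacked directory:

```shell
# Assumed unpack location of the toolkit package; adjust to your setup.
INSTALL_DIR="$HOME/inference_engine_vpu_arm"

# setupvars.sh ships in the package's bin/ directory and exports
# LD_LIBRARY_PATH, PYTHONPATH, and related variables for the toolkit.
# The guard keeps the snippet harmless if the path differs on your system.
if [ -f "$INSTALL_DIR/bin/setupvars.sh" ]; then
  source "$INSTALL_DIR/bin/setupvars.sh"
fi
```

To set the variables automatically in every new shell, you can append the source line to your ~/.bashrc.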
The OpenVINO™ workflow on Raspbian* OS is as follows:

1. Get a pre-trained model in the .bin and .xml Intermediate Representation (IR) format, which is used as input by the Inference Engine. On Raspberry Pi, the OpenVINO™ toolkit includes only the Inference Engine module; the Model Optimizer is not supported on this platform. To get optimized models, use one of the following options:
   - Download a pre-converted model from the Open Model Zoo.
   - Convert a model with the Model Optimizer on a host machine with a full installation of the Intel® Distribution of OpenVINO™ toolkit.
2. Run inference with the Inference Engine through the C++ or Python* API.

Follow the steps below to run a pre-trained Face Detection network using the Inference Engine samples from the OpenVINO™ toolkit.
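As a sketch of the pre-converted-model option, the FP16 IR files for a face detection model can be fetched directly. The model name face-detection-adas-0001 and the download URL are assumptions based on the Open Model Zoo storage layout; verify the current location in the Open Model Zoo repository before relying on them:

```shell
# Model name and URL are assumptions; check the Open Model Zoo for the
# current storage location of pre-converted models.
MODEL="face-detection-adas-0001"
BASE_URL="https://download.01.org/opencv/2019/open_model_zoo/R1/models_bin/$MODEL/FP16"

# Fetch the FP16 IR files (.xml topology and .bin weights). MYRIAD
# devices such as the Intel Neural Compute Stick 2 require FP16 precision.
for ext in xml bin; do
  wget --no-check-certificate "$BASE_URL/$MODEL.$ext" \
    || echo "download failed: $MODEL.$ext"
done
```

Alternatively, copy IR files produced by the Model Optimizer from your host machine to the Raspberry Pi.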
1. Create a build directory and build the Object Detection Sample SSD from the toolkit samples.
2. Download the pre-trained Face Detection model, or copy the IR files from your host machine.
3. Run the sample, specifying the model and an input image. The application creates an output image (out_0.bmp) with detected faces enclosed in rectangles.

The same basic steps apply to executing the OpenVINO™ workflow with the other code samples: obtain a model in IR format, build the sample application, and run it on the target device.
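The build-and-run steps for the face detection sample can be sketched as follows. The sample name object_detection_sample_ssd, the armv7l/Release output path, and the MYRIAD device plugin (for the Intel® Neural Compute Stick 2) follow the standard toolkit layout, but treat the exact paths as assumptions to verify against your installation:

```shell
# Assumed unpack location of the toolkit; adjust to your setup.
INSTALL_DIR="$HOME/inference_engine_vpu_arm"
SAMPLES_DIR="$INSTALL_DIR/deployment_tools/inference_engine/samples"

# Create a separate build directory for the samples.
mkdir -p "$HOME/build" && cd "$HOME/build"

# Build only when the toolkit is actually present (the guard keeps the
# snippet safe to paste on machines without the package).
if [ -d "$SAMPLES_DIR" ]; then
  cmake -DCMAKE_BUILD_TYPE=Release "$SAMPLES_DIR"
  make -j2 object_detection_sample_ssd

  # Run face detection on a MYRIAD device (e.g. Intel NCS 2); the
  # sample writes out_0.bmp with detected faces enclosed in rectangles.
  ./armv7l/Release/object_detection_sample_ssd \
    -m face-detection-adas-0001.xml \
    -d MYRIAD \
    -i face.jpg
fi
```

If you run on a different accelerator, pass its plugin name to -d; the model precision must match what the device supports (FP16 for MYRIAD).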
Use these resources to learn more about the OpenVINO™ toolkit: