When you find the optimal configuration for your model, the next step is to use the model with these parameters in your own application on a target device. The OpenVINO™ toolkit includes everything you need to run the application on the target. However, the target might have limited drive space to store all OpenVINO™ components. The OpenVINO™ Deployment Manager available inside the DL Workbench extracts the minimum set of libraries required for a target device. Refer to the section below to learn how to download a deployment package for your project.
IMPORTANT: The Deployment Manager available inside the DL Workbench provides libraries compatible with Ubuntu 18.04 only.
Once you download the package, see how to create a binary with your application on your developer machine and deploy it on a target device.
NOTE: The machine where you use the DL Workbench to download the package and where you prepare your own application is a developer machine. The machine where you deploy the application is a target machine.
NOTE: Perform these steps on your developer machine.
On the Configurations Settings page, find the Selected Configuration form and go to the Packaging tab:
In this tab, select all the targets you want to apply your model to. You can also choose whether to include the model in the package.
The package size displayed at the bottom of the form changes depending on your selection. If you do not include the model in the package, the archive contains only libraries for selected plugins.
NOTE: Linux is the default target operating system and cannot be changed.
Once you click Pack, the packaging process starts on the server, followed by an automatic archive download:
Now you have an archive that contains the required libraries and your model.
IMPORTANT: The archive does not contain your application, and copying the archive to the target device does not mean deployment.
IMPORTANT: The archive contains C++* libraries, so your application can be written in C++ only. A Python* application cannot use these libraries directly and Python bindings are not included in the deployment package. This document does not contain instructions on how to prepare a Python application for deployment.
Your application should be compiled into a binary file. If you do not have an application, see Create Binary Sample. The next step is moving the binary to the target device and deploying it there.
NOTE: Perform these steps on your developer machine.
Install the Intel® Distribution of OpenVINO™ toolkit for Linux* on your developer machine. The OpenVINO™ toolkit and the DL Workbench must be the same release version.
Create a file named main.cpp with the source code of your application:
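The source code of main.cpp is not reproduced on this page. As a rough sketch only, a minimal application that reads an IR model and runs a single inference could look like the following; the class and method names are from the classic InferenceEngine C++ API, and the actual main.cpp from Create Binary Sample may differ:

```cpp
#include <inference_engine.hpp>
#include <iostream>

int main(int argc, char* argv[]) {
    if (argc != 3) {
        std::cerr << "Usage: " << argv[0] << " <path_to_model.xml> <device>\n";
        return 1;
    }
    InferenceEngine::Core core;
    // Read the IR model (the .xml file plus the .bin file next to it)
    InferenceEngine::CNNNetwork network = core.ReadNetwork(argv[1]);
    // Load the network on the requested device, for example CPU
    InferenceEngine::ExecutableNetwork executable_network =
        core.LoadNetwork(network, argv[2]);
    // Create an inference request and run a single inference
    InferenceEngine::InferRequest request =
        executable_network.CreateInferRequest();
    request.Infer();
    std::cout << "Inference completed successfully\n";
    return 0;
}
```

Note that this sketch does not fill input blobs with real data; a complete application would also prepare inputs and read outputs.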
In the same folder as main.cpp, create a file named CMakeLists.txt with the following commands to compile main.cpp into an executable file:
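The file contents are not reproduced on this page. A minimal CMakeLists.txt along these lines should work; this is a sketch, and the ie_sample target name is an assumption based on the surrounding text:

```cmake
cmake_minimum_required(VERSION 3.10)
project(ie_sample)

set(CMAKE_CXX_STANDARD 11)

# Locate the Inference Engine delivered with the OpenVINO toolkit.
# Requires setupvars.sh to be sourced first so CMake can find the package.
find_package(InferenceEngine REQUIRED)

add_executable(ie_sample main.cpp)
target_link_libraries(ie_sample PRIVATE ${InferenceEngine_LIBRARIES})
```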
Open a terminal in the directory with main.cpp and CMakeLists.txt, and run the following commands to build the sample:
NOTE: Replace <INSTALL_OPENVINO_DIR> with the directory you installed the OpenVINO™ package in. By default, the package is installed to /opt/intel/openvino or ~/intel/openvino.
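The exact commands are not shown on this page; a typical CMake build sequence for this layout would be the following sketch, assuming the default out-of-source build that the text describes:

```shell
# Make the OpenVINO environment (and its CMake package config) visible
source <INSTALL_OPENVINO_DIR>/bin/setupvars.sh
# Configure and build in a separate build/ folder
mkdir build && cd build
cmake ..
make
```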
Once the commands are executed, find the ie_sample binary in the build folder in the directory with the source files.
Make sure you have the following components on your developer machine:
- the downloaded deployment package archive
- your model, if it is not included in the package
- the binary with your application, ie_sample for example

Unarchive the deployment package and place the binary and the model inside the deployment_package folder. Then archive the deployment_package folder and copy it to the target machine.
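The folder layout is not reproduced on this page; for illustration only, the assembled package might look like this, where the exact subfolder names come from the deployment package itself and may differ:

```
deployment_package/
├── bin/                    # setupvars.sh and other setup scripts
├── install_dependencies/   # install_openvino_dependencies.sh
├── deployment_tools/       # runtime libraries for the selected plugins
├── ie_sample               # your compiled binary
└── model/                  # your model (.xml and .bin files)
```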
NOTE: Perform the steps below on your target machine.
1. Unarchive the deployment_package folder on the target machine.
2. Run the install_openvino_dependencies.sh script to install the required system dependencies.
3. Set up the environment by running the bin/setupvars.sh script.
4. Run your application.
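The run command itself is not shown on this page. Assuming the binary takes the model path and a device name as arguments, the invocation might look like:

```shell
./ie_sample <path>/<model>.xml CPU
```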
NOTE: Replace <path> and <model> with the path to your model and its name respectively.
NOTE: In the command above, the application runs on a CPU device. To run it on other devices, set the following device names instead of CPU:
- Intel® Processor Graphics: GPU
- Intel® Movidius™ Neural Compute Stick 2 (NCS 2): MYRIAD
- Intel® Vision Accelerator Design with Intel® Movidius™ VPUs: HDDL
If you run the application created in the Create Binary Sample section, you get the following output: